Lighting in C++ using GLSL

I am currently using this GLSL file to handle lighting for a 3D object that I am trying to display. I am not sure what values I need to put in for light_position_world, Ls, Ld, La, Ks, Kd, Ka, Ia and fragment_color. The scene I am trying to illuminate is centered at roughly (427, 385, 89). I don't need it to be perfect, but I need some values that will let me see my design on screen so that I can manipulate them and learn how this all works. Any additional tips or explanation would be much appreciated. Thanks!
#version 410
in vec3 position_eye, normal_eye;
uniform mat4 view_mat;
// fixed point light properties
vec3 light_position_world = vec3 (427.029, 385.888, 0);
vec3 Ls = vec3 (1.0f, 0.0f, 0.0f);
vec3 Ld = vec3 (1.0f, 0.0f, 0.0f);
vec3 La = vec3 (1.0f, 0.2f, 0.0f);
// surface reflectance
vec3 Ks = vec3 (1.0f, 1.0f, 1.0f);
vec3 Kd = vec3 (1.0f, 0.8f, 0.72f);
vec3 Ka = vec3 (1.0f, 1.0f, 1.0f);
float specular_exponent = 10.0; // specular 'power'
out vec4 fragment_colour; // final colour of surface
void main () {
    // ambient intensity
    vec3 Ia = vec3 (0, 0, 0);
    // diffuse intensity
    // raise light position to eye space
    vec3 light_position_eye = light_position_world; //vec3 (view_mat * vec4 (light_position_world, 1.0));
    vec3 distance_to_light_eye = light_position_eye - position_eye;
    vec3 direction_to_light_eye = normalize (distance_to_light_eye);
    float dot_prod = dot (direction_to_light_eye, normal_eye);
    dot_prod = max (dot_prod, 0.0);
    vec3 Id = Ld * Kd * dot_prod; // final diffuse intensity
    // specular intensity
    vec3 surface_to_viewer_eye = normalize (-position_eye);
    // blinn
    vec3 half_way_eye = normalize (surface_to_viewer_eye + direction_to_light_eye);
    float dot_prod_specular = max (dot (half_way_eye, normal_eye), 0.0);
    float specular_factor = pow (dot_prod_specular, specular_exponent);
    vec3 Is = Ls * Ks * specular_factor; // final specular intensity
    // final colour
    fragment_colour = vec4 (255, 25, 25, 0);
}

There are a few problems with your code.
1) Assuming light_position_world is the position of the light in world space, the light is below your scene, so the scene won't be illuminated from above.
2) Assuming *_eye means a coordinate in eye space and *_world a coordinate in world space, you may not interchange between those spaces by simply assigning vectors. You have to use the view matrix to go from world space to eye space, and its inverse to go from eye space back to world space.
3) The output color of the shader, fragment_colour, is always set to a constant dark red-ish color, so the compiler will just optimize away all the lighting calculations. You have to use something like this: fragment_colour = Ia + Id * material + Is * material, where material is the color of your material - e.g. gray for metal.
It seems like you don't understand the underlying basics yet. I suggest you read a few articles or tutorials about lighting and transformations/maths in OpenGL.
Once you have consumed a fair bit of literature, experiment with your code. Try out what different calculations do and how they influence the end result. You won't get 100% physically accurate lighting anyway, so there's nothing that can really go wrong.
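To make point 3 concrete, here is a CPU-side C++ sketch of the same Blinn-Phong combination the shader should compute. The vector type and all coefficient values below are made-up examples for illustration, not taken from the asker's scene:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, Vec3 b)    { return {a.x * b.x, a.y * b.y, a.z * b.z}; }
static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 a)      { return scale(a, 1.0f / std::sqrt(dot(a, a))); }

// Blinn-Phong: colour = La*Ka + Ld*Kd*max(N.L, 0) + Ls*Ks*max(N.H, 0)^shininess
// N, L, V are unit normal, to-light, and to-viewer vectors.
Vec3 blinnPhong(Vec3 N, Vec3 L, Vec3 V,
                Vec3 La, Vec3 Ld, Vec3 Ls,
                Vec3 Ka, Vec3 Kd, Vec3 Ks, float shininess)
{
    Vec3 H = normalize(add(L, V));                                // half-way vector
    float diff = std::max(dot(N, L), 0.0f);                       // diffuse factor
    float spec = std::pow(std::max(dot(N, H), 0.0f), shininess);  // specular factor
    Vec3 colour = mul(La, Ka);                                    // ambient term
    colour = add(colour, scale(mul(Ld, Kd), diff));               // + diffuse term
    colour = add(colour, scale(mul(Ls, Ks), spec));               // + specular term
    return colour;
}
```

With the light, viewer, and normal all aligned, the ambient, diffuse, and specular terms are simply La*Ka + Ld*Kd + Ls*Ks, which is a quick sanity check for whatever coefficient values you pick.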

Related

How to apply Texture-Mapping to a Maya Object using OpenGL?

I am currently learning how to map 2D textures to 3D objects using GLSL. I have a main.cpp, a fragment shader, and a vertex shader to achieve this, as well as a Sphere.obj I made using Maya and some PNG images.
I just created a basic sphere poly model in Maya then exported it as a ".obj".
My fragment shader code is listed below for reference:
#version 410
// Inputs from application.
// Generally, "in" like the eye and normal vectors for things that change frequently,
// and "uniform" for things that change less often (think scene versus vertices).
in vec3 position_eye, normal_eye;
uniform mat4 view_mat;
// This light setup would usually be passed in from the application.
vec3 light_position_world = vec3 (10.0, 25.0, 10.0);
vec3 Ls = vec3 (1.0, 1.0, 1.0); // neutral, full specular color of light
vec3 Ld = vec3 (0.8, 0.8, 0.8); // neutral, lessened diffuse light color of light
vec3 La = vec3 (0.12, 0.12, 0.12); // ambient color of light - just a bit more than dk gray bg
// Surface reflectance properties for Phong or Blinn-Phong shading models below.
vec3 Ks = vec3 (1.0, 1.0, 1.0); // fully reflect specular light
vec3 Kd = vec3 (0.32, 0.18, 0.5); // purple diffuse surface reflectance
vec3 Ka = vec3 (1.0, 1.0, 1.0); // fully reflect ambient light
float specular_exponent = 400.0; // specular 'power' -- controls "roll-off"
// These come from the VAO for texture coordinates.
in vec2 texture_coords;
// And from the uniform outputs for the textures setup in main.cpp.
uniform sampler2D texture00;
uniform sampler2D texture01;
out vec4 fragment_color; // color of surface to draw
void main ()
{
    // Ambient intensity
    vec3 Ia = La * Ka;
    // These next few lines sample the current texture coord (s, t) in texture00 and 01 and mix.
    vec4 texel_a = texture (texture00, fract(texture_coords*2.0));
    vec4 texel_b = texture (texture01, fract(texture_coords*2.0));
    //vec4 mixed = mix (texel_a, texel_b, texture_coords.x);
    vec4 mixed = mix (texel_a, texel_b, texture_coords.x);
    Kd.x = mixed.x;
    Kd.y = mixed.y;
    Kd.z = mixed.z;
    // Transform light position to view space.
    // Vectors here are appended with _eye as a reminder once in view space versus world space.
    // "Eye" is used instead of "camera" since reflectance models are often phrased that way.
    vec3 light_position_eye = vec3 (view_mat * vec4 (light_position_world, 1.0));
    vec3 distance_to_light_eye = light_position_eye - position_eye;
    vec3 direction_to_light_eye = normalize (distance_to_light_eye);
    // Diffuse intensity
    float dot_prod = dot (direction_to_light_eye, normal_eye);
    dot_prod = max (dot_prod, 0.0);
    vec3 Id = Ld * Kd * dot_prod; // final diffuse intensity
    // Specular is view dependent; get vector toward camera.
    vec3 surface_to_viewer_eye = normalize (-position_eye);
    // Phong
    //vec3 reflection_eye = reflect (-direction_to_light_eye, normal_eye);
    //float dot_prod_specular = dot (reflection_eye, surface_to_viewer_eye);
    //dot_prod_specular = max (dot_prod_specular, 0.0);
    //float specular_factor = pow (dot_prod_specular, specular_exponent);
    // Blinn
    vec3 half_way_eye = normalize (surface_to_viewer_eye + direction_to_light_eye);
    float dot_prod_specular = max (dot (half_way_eye, normal_eye), 0.0);
    float specular_factor = pow (dot_prod_specular, specular_exponent);
    // Specular intensity
    vec3 Is = Ls * Ks * specular_factor; // final specular intensity
    // final color
    fragment_color = vec4 (Is + Id + Ia, 1.0);
}
I type in the following command into the terminal to run my package:
./go fs.glsl vs.glsl Sphere.obj image.png image2.png
I am trying to map a world map.jpg to my sphere using this method and ignore the 2nd image input, but it won't run. Can someone tell me what I need to comment out in my fragment shader to ignore the second texture input so my code will run?
PS: How would I go about modifying my fragment shader to implement various types of 'tiling'? I'm a bit lost on this as well. Any examples or tips are appreciated.
Here is the texture portion of my main.cpp code.
// load textures
GLuint tex00;
int tex00location = glGetUniformLocation (shader_programme, "texture00");
glUniform1i (tex00location, 0);
glActiveTexture (GL_TEXTURE0);
assert (load_texture (argv[4], &tex00));
//assert (load_texture ("ship.png", &tex00));
GLuint tex01;
int tex01location = glGetUniformLocation (shader_programme, "texture01");
glUniform1i (tex01location, 1);
glActiveTexture (GL_TEXTURE1);
assert (load_texture (argv[5], &tex01));
/*---------------------------SET RENDERING DEFAULTS---------------------------*/
// Choose vertex and fragment shaders to use as well as view and proj matrices.
glUniformMatrix4fv (view_mat_location, 1, GL_FALSE, view_mat.m);
glUniformMatrix4fv (proj_mat_location, 1, GL_FALSE, proj_mat.m);
// The model matrix stores the position and orientation transformations for the mesh.
mat4 model_mat;
model_mat = translate (identity_mat4 () * scale(identity_mat4(), vec3(0.5, 0.5, 0.5)), vec3(0, -0.5, 0)) * rotate_y_deg (identity_mat4 (), 90 );
// Setup basic GL display attributes.
glEnable (GL_DEPTH_TEST); // enable depth-testing
glDepthFunc (GL_LESS); // depth-testing interprets a smaller value as "closer"
glEnable (GL_CULL_FACE); // cull face
glCullFace (GL_BACK); // cull back face
glFrontFace (GL_CCW); // set counter-clock-wise vertex order to mean the front
glClearColor (0.1, 0.1, 0.1, 1.0); // non-black background to help spot mistakes
glViewport (0, 0, g_gl_width, g_gl_height); // make sure correct aspect ratio
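Regarding the PS about tiling: the fract(texture_coords * 2.0) in the fragment shader above is already a simple form of tiling. Scaling the UVs by N and taking the fractional part wraps the coordinates back into [0, 1), repeating the texture N times in each direction. A CPU-side C++ sketch of that wrap (tileCoord is a hypothetical helper name, not part of the code above):

```cpp
#include <cassert>
#include <cmath>

// Mirrors GLSL's fract(): the fractional part of x, always in [0, 1).
static float fractf(float x) { return x - std::floor(x); }

// Tile a texture coordinate: scale by 'repeats' and wrap back into [0, 1),
// which is what fract(texture_coords * 2.0) does in the fragment shader above.
static float tileCoord(float coord, float repeats)
{
    return fractf(coord * repeats);
}
```

So with repeats = 2.0, a coordinate of 0.75 samples the texture at 0.5 of the second repetition; changing the scale factor (or using different factors per axis) gives different tiling patterns.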

How to handle lighting (ambient, diffuse, specular) for point spheres in OpenGL

Initial situation
I want to visualize simulation data in OpenGL.
My data consists of particle positions (x, y, z) where each particle has some properties (like density, temperature, ...) which will be used for coloring. Those (SPH) particles (100k to several millions), grouped together, actually represent planets, in case you wonder. I want to render those particles as small 3D spheres and add ambient, diffuse and specular lighting.
Status quo and questions
In MY case: in which coordinate frame do I do the lighting calculations? And which way is the "best" to pass the various components through the pipeline?
I saw that it is common to do it in view space, which is also very intuitive. However: the normals at the different fragment positions are calculated in the fragment shader in clip space coordinates (see the appended fragment shader). Can I actually convert them "back" into view space to do the lighting calculations there for all the fragments? Would there be any advantage compared to doing it in clip space?
It would be easier to get the normals in view space if I used meshes for each sphere, but I think with several million particles this would decrease performance drastically, so better to do it with sphere intersection - would you agree?
PS: I don't need a model matrix since all the particles are already in place.
//VERTEX SHADER
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 2) in float density;
uniform float radius;
uniform vec3 lightPos;
uniform vec3 viewPos;
out vec4 lightDir;
out vec4 viewDir;
out vec4 viewPosition;
out vec4 posClip;
out float vertexColor;
// transformation matrices
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
    lightDir = projection * view * vec4(lightPos - position, 1.0f);
    viewDir = projection * view * vec4(viewPos - position, 1.0f);
    viewPosition = projection * view * vec4(lightPos, 1.0f);
    posClip = projection * view * vec4(position, 1.0f);
    gl_Position = posClip;
    gl_PointSize = radius;
    vertexColor = density;
}
I know that perspective division happens for the gl_Position variable, but does that actually happen to ALL vec4's which are passed from the vertex to the fragment shader? If not, maybe the calculations in the fragment shader would be wrong?
And the fragment shader, where the normals and the diffuse/specular lighting calculations are done in clip space:
//FRAGMENT SHADER
#version 330 core
in float vertexColor;
in vec4 lightDir;
in vec4 viewDir;
in vec4 posClip;
in vec4 viewPosition;
uniform vec3 lightColor;
vec4 colormap(float x); // returns vec4(r, g, b, a)
out vec4 vFragColor;
void main(void)
{
    // AMBIENT LIGHT
    float ambientStrength = 0.0;
    vec3 ambient = ambientStrength * lightColor;
    // Normal calculation done in clip space (first from texture (gl_PointCoord 0 to 1) coord to NDC (-1 to 1))
    vec3 normal;
    normal.xy = gl_PointCoord * 2.0 - vec2(1.0); // transform from 0->1 point primitive coords to NDC -1->1
    float mag = dot(normal.xy, normal.xy); // squared length; no sqrt needed, since sqrt(x) <= 1 iff x <= 1
    if (mag > 1.0) // discard fragments outside sphere
        discard;
    normal.z = sqrt(1.0 - mag); // because x^2 + y^2 + z^2 = 1
    // DIFFUSE LIGHT
    float diff = max(0.0, dot(vec3(lightDir), normal));
    vec3 diffuse = diff * lightColor;
    // SPECULAR LIGHT
    float specularStrength = 0.1;
    vec3 viewDir = normalize(vec3(viewPosition) - vec3(posClip));
    vec3 reflectDir = reflect(-vec3(lightDir), normal);
    float shininess = 64;
    float spec = pow(max(dot(vec3(viewDir), vec3(reflectDir)), 0.0), shininess);
    vec3 specular = specularStrength * spec * lightColor;
    vFragColor = colormap(vertexColor / 8) * vec4(ambient + diffuse + specular, 1);
}
Now this actually "kind of" works, but I have the feeling that the sides of the sphere which do NOT face the light source are also being illuminated, which shouldn't happen. How can I fix this?
Some weird effect: at this moment the light source is actually BEHIND the left planet (it just peeks out a little bit at the top left), but still there are diffuse and specular effects going on. This side should actually be pretty dark! =(
Also at this moment I get a glError 1282 in the fragment shader and I don't know where it comes from, since the shader program actually compiles and runs. Any suggestions? :)
The things that you are drawing aren't actually spheres. They just look like them from afar. This is absolutely fine if you are happy with that. If you need geometrically correct spheres (with correct sizes and a correct projection), you need to do proper raycasting; raycasted sphere impostors are a well-documented topic.
1. What coordinate system?
In the end, it is up to you. The coordinate system just needs to fulfill some requirements. It must be angle-preserving (because lighting is all about angles). And if you need distance-based attenuation, it should also be distance-preserving. The world and the view coordinate systems usually fulfill these requirements. Clip space is not suited for lighting calculations as neither angles nor distances are preserved. Furthermore, gl_PointCoord is in none of the usual coordinate systems. It is its own coordinate system and you should only use it together with other coordinate systems if you know their relation.
2. Meshes or what?
Meshes are absolutely not suited to render spheres. As mentioned above, raycasting or some screen-space approximation are better choices. Here is an example shader that I used in my projects:
#version 330
out vec4 result;
in fData
{
    vec4 toPixel; // fragment coordinate in particle coordinates
    vec4 cam;     // camera position in particle coordinates
    vec4 color;   // sphere color
    float radius; // sphere radius
} frag;
uniform mat4 p; // projection matrix
void main(void)
{
    vec3 v = frag.toPixel.xyz - frag.cam.xyz;
    vec3 e = frag.cam.xyz;
    float ev = dot(e, v);
    float vv = dot(v, v);
    float ee = dot(e, e);
    float rr = frag.radius * frag.radius;
    float radicand = ev * ev - vv * (ee - rr);
    if (radicand < 0)
        discard;
    float rt = sqrt(radicand);
    float lambda = max(0, (-ev - rt) / vv); // first intersection on the ray
    float lambda2 = (-ev + rt) / vv;        // second intersection on the ray
    if (lambda2 < lambda) // if the first intersection is behind the camera
        discard;
    vec3 hit = lambda * v; // intersection point
    vec3 normal = (frag.cam.xyz + hit) / frag.radius;
    vec4 proj = p * vec4(hit, 1); // intersection point in clip space
    gl_FragDepth = ((gl_DepthRange.diff * proj.z / proj.w) + gl_DepthRange.near + gl_DepthRange.far) / 2.0;
    vec3 vNormalized = -normalize(v);
    float nDotL = dot(vNormalized, normal);
    vec3 c = frag.color.rgb * nDotL + vec3(0.5, 0.5, 0.5) * pow(nDotL, 120);
    result = vec4(c, frag.color.a);
}
3. Perspective division
Perspective division is not applied to your attributes. The GPU performs perspective division on the data that you pass via gl_Position on the way to transforming it to screen space. But you will never actually see this perspective-divided position unless you do the division yourself.
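For reference, this is the division the GPU applies to gl_Position, and that you would have to apply yourself to any other clip-space vec4 you pass along (minimal types for illustration):

```cpp
#include <cassert>

struct Vec4 { float x, y, z, w; };
struct Vec3 { float x, y, z; };

// Perspective division: the GPU does this to gl_Position on the way to
// window space, but never to your own "out" attributes.
static Vec3 perspectiveDivide(Vec4 clip)
{
    return {clip.x / clip.w, clip.y / clip.w, clip.z / clip.w};
}
```

So a clip-space position (4, 2, 1, 2) becomes the NDC position (2, 1, 0.5); your interpolated lightDir, viewDir, etc. never receive this division.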
4. Light in the dark
This might be the result of you mixing different coordinate systems or doing the lighting calculations in clip space. By the way, the specular part is usually not multiplied by the material color: it is light that gets reflected directly at the surface, without penetrating it (which would absorb some wavelengths depending on the material). That's why specular highlights are usually white (or whatever light color you have), even on black objects.

SpotLight is not seen - OpenGL

I am doing a project on spotlights in OpenGL. I think I wrote the code correctly, but I am not able to see a round spot in my output. Your help would be much appreciated. Here are my fragment shader file and light definition.
fragmentShader.fs
#version 330
in vec3 N; // interpolated normal for the pixel
in vec3 v; // interpolated position for the pixel
// Uniform block for the light source properties
layout (std140) uniform LightSourceProp {
    // Light source position in eye space (i.e. eye is at (0, 0, 0))
    uniform vec4 lightSourcePosition;
    uniform vec4 diffuseLightIntensity;
    uniform vec4 specularLightIntensity;
    uniform vec4 ambientLightIntensity;
    // for calculating the light attenuation
    uniform float constantAttenuation;
    uniform float linearAttenuation;
    uniform float quadraticAttenuation;
    // Spotlight direction
    uniform vec3 spotDirection;
    uniform float cutOffExponent;
    // Spotlight cutoff angle
    uniform float spotCutoff;
};
// Uniform block for surface material properties
layout (std140) uniform materialProp {
    uniform vec4 Kambient;
    uniform vec4 Kdiffuse;
    uniform vec4 Kspecular;
    uniform float shininess;
};
out vec4 color;
// This fragment shader is an example of per-pixel lighting.
void main() {
    // Now calculate the parameters for the lighting equation:
    // color = Ka * Lag + (Ka * La) + attenuation * ((Kd * (N dot L) * Ld) + (Ks * ((N dot HV) ^ shininess) * Ls))
    // Ka, Kd, Ks: surface material properties
    // Lag: global ambient light (not used in this example)
    // La, Ld, Ls: ambient, diffuse, and specular components of the light source
    // N: normal
    // L: light vector
    // HV: half vector
    // shininess
    // attenuation: light intensity attenuation over distance and spotlight angle
    vec3 lightVector;
    float attenuation = 1.0;
    float se;
    // point light source
    lightVector = normalize(lightSourcePosition.xyz - v);
    // Calculate spot effect
    // calculate attenuation
    float angle = dot(normalize(spotDirection), normalize(lightVector));
    angle = max(angle, 0);
    // Test whether vertex is located in the cone
    if (acos(angle) > radians(5))
    {
        float distance = length(lightSourcePosition.xyz - v);
        angle = pow(angle, 2.0);
        attenuation = angle / (constantAttenuation + (linearAttenuation * distance)
                               + (quadraticAttenuation * distance * distance));
        // calculate diffuse color
        float NdotL = max(dot(N, lightVector), 0.0);
        vec4 diffuseColor = Kdiffuse * diffuseLightIntensity * NdotL;
        // calculate specular color. Here we use the original Phong illumination model.
        vec3 E = normalize(-v); // eye vector; we are in eye coordinates, so the eye position is (0,0,0)
        vec3 R = normalize(-reflect(lightVector, N)); // light reflection vector
        float RdotE = max(dot(R, E), 0.0);
        vec4 specularColor = Kspecular * specularLightIntensity * pow(RdotE, shininess);
        // ambient color
        vec4 ambientColor = Kambient * ambientLightIntensity;
        color = ambientColor + attenuation * (diffuseColor + specularColor);
    }
    else
        color = vec4(1, 1, 0, 1); // lit (yellow)
}
The light definition in main.cpp
struct SurfaceMaterialProp {
    float Kambient[4];  // ambient component
    float Kdiffuse[4];  // diffuse component
    float Kspecular[4]; // Surface material property: specular component
    float shininess;
};
SurfaceMaterialProp surfaceMaterial1 = {
    {1.0f, 1.0f, 1.0f, 1.0f},  // Kambient: ambient coefficient
    {1.0f, 0.8f, 0.72f, 1.0f}, // Kdiffuse: diffuse coefficient
    {1.0f, 1.0f, 1.0f, 1.0f},  // Kspecular: specular coefficient
    5.0f                       // Shininess
};
struct LightSourceProp {
    float lightSourcePosition[4];
    float diffuseLightIntensity[4];
    float specularLightIntensity[4];
    float ambientLightIntensity[4];
    float constantAttenuation;
    float linearAttenuation;
    float quadraticAttenuation;
    float spotlightDirection[4];
    float spotlightCutoffAngle;
    float cutOffExponent;
};
LightSourceProp lightSource1 = {
    { 0.0, 400.0, 0.0, 1.0 },  // light source position
    {1.0f, 0.0f, 0.0f, 1.0f},  // diffuse light intensity
    {1.0f, 0.0f, 0.0f, 1.0f},  // specular light intensity
    {1.0f, 0.2f, 0.0f, 1.0f},  // ambient light intensity
    1.0f, 0.5f, 0.1f,          // constant, linear, and quadratic attenuation factors
    {0.0, 50.0, 0.0},          // spotlight direction
    {5.0f},                    // spotlight cutoff angle (in radians)
    {2.0f}                     // spot exponent
};
The order of a couple of members of the LightSourceProp struct in the C++ code is different from the one in the uniform block.
Last two members of the uniform block:
uniform float cutOffExponent;
uniform float spotCutoff;
};
Last two members of C++ struct:
float spotlightCutoffAngle;
float cutOffExponent;
};
These two values are swapped.
Also, the cutoff angle looks suspiciously large:
{5.0f}, // spotlight cutoff angle (in radian)
That's an angle of 286 degrees, which isn't much of a spotlight. For an actual spotlight, you'll probably want something much smaller, like 0.1f or 0.2f.
Another aspect that might be giving you unexpected results is that you have a lot of ambient intensity:
{1.0f, 1.0f, 1.0f, 1.0f}, // Kambient: ambient coefficient
...
{1.0f, 0.2f, 0.0f, 1.0f}, // ambient light intensity
Depending on how you use these values in the shader code, it's likely that your colors will be saturated from the ambient intensity alone, and you won't get any visible contribution from the other terms of the light source and material. Since the ambient intensity is constant, this would result in a completely flat color for the entire geometry.
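You can see the saturation directly from the numbers above: the ambient term alone, Kambient * ambientLightIntensity, already drives the red channel to 1.0 before diffuse or specular is even added (assuming a straight component-wise product, as in the shader's ambientColor line; the Vec3 type here is just for illustration):

```cpp
#include <cassert>

struct Vec3 { float r, g, b; };

// Component-wise product, as in: ambientColor = Kambient * ambientLightIntensity
static Vec3 modulate(Vec3 k, Vec3 l) { return {k.r * l.r, k.g * l.g, k.b * l.b}; }
```

With Kambient = (1, 1, 1) and an ambient intensity of (1.0, 0.2, 0.0), the ambient contribution is (1.0, 0.2, 0.0): the red channel is already at maximum everywhere, so any red diffuse/specular contribution on top of it is invisible.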

Trouble with Specular Lighting in OpenGL

I'm having some issues with my specular lighting, I have ambient and diffuse but I am now looking at specular to try make a nice looking model.
I have the following in my vertexShader:
#version 330
layout (location = 0) in vec3 Position;
layout (location = 1) in vec3 Normal;
out vec4 Colour0;
// Transforms
uniform mat4 gModelToWorldTransform;
uniform mat4 gWorldToViewToProjectionTransform;
// Ambient light parameters
uniform vec3 gAmbientLightIntensity;
// Directional light parameters
uniform vec3 gDirectionalLightIntensity;
uniform vec3 gDirectionalLightDirection;
// Material constants
uniform float gKa;
uniform float gKd;
uniform float gKs;
uniform float gKsStrength;
void main()
{
    // Transform the vertex from local space to homogeneous clip space
    vec4 vertexPositionInModelSpace = vec4(Position, 1);
    vec4 vertexInWorldSpace = gModelToWorldTransform * vertexPositionInModelSpace;
    vec4 vertexInHomogeneousClipSpace = gWorldToViewToProjectionTransform * vertexInWorldSpace;
    gl_Position = vertexInHomogeneousClipSpace;
    // Calculate the directional light intensity at the vertex
    // Find the normal in world space and normalise it
    vec3 normalInWorldSpace = (gModelToWorldTransform * vec4(Normal, 0.0)).xyz;
    normalInWorldSpace = normalize(normalInWorldSpace);
    // Calculate the ambient light intensity at the vertex
    // Ia = Ka * ambientLightIntensity
    vec4 ambientLightIntensity = gKa * vec4(gAmbientLightIntensity, 1.0);
    // Setup the light direction and normalise it
    vec3 lightDirection = normalize(-gDirectionalLightDirection);
    //lightDirection = normalize(gDirectionalLightDirection);
    // Id = Kd * lightIntensity * N.L
    // Calculate N.L
    float diffuseFactor = dot(normalInWorldSpace, lightDirection);
    diffuseFactor = clamp(diffuseFactor, 0.0, 1.0);
    // N.L * light source colour * intensity
    vec4 diffuseLightIntensity = gKd * vec4(gDirectionalLightIntensity, 1.0f) * diffuseFactor;
    vec3 lightReflect = normalize(reflect(gDirectionalLightDirection, Normal));
    // Calculate the specular light intensity at the vertex
    float specularFactor = dot(normalInWorldSpace, lightReflect);
    specularFactor = pow(specularFactor, gKsStrength);
    vec4 specularLightIntensity = gKs * vec4(gDirectionalLightIntensity, 1.0f) * specularFactor;
    // Final vertex colour is the product of the vertex colour
    // and the total light intensity at the vertex
    vec4 colour = vec4(0.0, 1.0, 0.0, 1.0);
    Colour0 = colour * (ambientLightIntensity + diffuseLightIntensity + specularLightIntensity);
}
Then in my main.cpp I have some code to try to get this all working together. The specular light is definitely doing something; only, rather than making the model look shiny, it seems to intensify the light to the point where I can't see any details.
I create the following variables:
// Lighting uniforms location
GLuint gAmbientLightIntensityLoc;
GLuint gDirectionalLightIntensityLoc;
GLuint gDirectionalLightDirectionLoc;
GLuint gSpecularLightIntensityLoc;
// Materials uniform location
GLuint gKaLoc;
GLuint gKdLoc;
GLuint gKsLoc;
GLuint gKsStrengthLoc;
I then set my variables like so in the renderSceneCallBack() function, which is called in main:
// Set the material properties
glUniform1f(gKaLoc, 0.2f);
glUniform1f(gKdLoc, 0.9f);
glUniform1f(gKsLoc, 0.5f);
glUniform1f(gKsStrengthLoc, 0.5f);
I then create an initLights() function which I would like to handle all lighting; this is also called in main:
static void initLights()
{
    // Setup the ambient light
    vec3 ambientLightIntensity = vec3(0.2f, 0.2f, 0.2f);
    glUniform3fv(gAmbientLightIntensityLoc, 1, &ambientLightIntensity[0]);
    // Setup the directional light
    vec3 directionalLightDirection = vec3(0.0f, 0.0f, -1.0f);
    normalize(directionalLightDirection);
    glUniform3fv(gDirectionalLightDirectionLoc, 1, &directionalLightDirection[0]);
    vec3 directionalLightIntensity = vec3(0.8f, 0.8f, 0.8f);
    glUniform3fv(gDirectionalLightIntensityLoc, 1, &directionalLightIntensity[0]);
    // Setup the specular light
    vec3 specularLightIntensity = vec3(0.5f, 0.5f, 0.5f);
    glUniform3fv(gSpecularLightIntensityLoc, 1, &specularLightIntensity[0]);
}
Can anyone see what I might be doing wrong? I could have some of the calculations wrong and just don't see it. Both the ambient and diffuse lighting are working correctly. This photo illustrates what's currently happening: ambient on the left, diffuse in the middle, and specular with strength set to 30 on the right.
Answer
I forgot to query this uniform location in main:
gKsStrengthLoc = glGetUniformLocation(shaderProgram, "gKsStrength");
//assert(gDirectionalLightDirectionLoc != 0xFFFFFFFF);
Everything works now using the selected answer.
Your value for gKsStrength looks way too small:
glUniform1f(gKsStrengthLoc, 0.5f);
This value controls how shiny the object is, with higher values making it more shiny. This makes sense if you look at the calculation in the shader:
specularFactor = pow(specularFactor, gKsStrength);
The larger the exponent, the faster the value drops off, which means that the specular highlight becomes more narrow.
Typical values might be something like 10.0f for moderately shiny, 30.0f for quite shiny, and even higher for very shiny materials.
With your value of 0.5f, you get a very wide specular "highlight". Your value for the specular intensity is also fairly high (0.5), so the highlight is going to cover most of the object and saturate the colors over large parts of it.
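The effect of the exponent is easy to see numerically. For a fragment whose reflection direction is, say, 25 degrees off the view direction (a made-up angle for illustration), an exponent of 0.5 still gives almost full specular, while 30 gives almost none:

```cpp
#include <cassert>
#include <cmath>

// Specular falloff for a given cosine of the angle between the reflection
// and view directions: pow(cosAngle, exponent), as in the shader's
// specularFactor = pow(specularFactor, gKsStrength).
static float specularFalloff(float cosAngle, float exponent)
{
    return std::pow(std::max(cosAngle, 0.0f), exponent);
}
```

cos(25 deg) is about 0.906; 0.906^0.5 is about 0.95 (a huge, washed-out highlight), while 0.906^30 is about 0.05 (a tight highlight). That is why raising gKsStrength narrows the highlight.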

Normal mapping and lighting gone wrong, not displaying correctly

I'm working on an implementation of normal mapping, calculating the tangent vectors via the ASSIMP library.
The normal mapping seems to work perfectly on objects that have a model matrix close to the identity matrix, but as soon as I start translating and scaling, my lighting seems off. As you can see in the picture, the normal mapping works perfectly on the container cube, but the lighting fails on the large floor (the direction of the specular light should be towards the player, not towards the container).
I get the feeling it somehow has something to do with the position of the light (currently traversing from x = -10 to x = 10 over time) not being properly included in the calculations once I start changing the model matrix (via translations/scaling). I'm posting all the relevant code and hope you guys can see something I'm missing, since I've been staring at my code for days.
Vertex shader
#version 330
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec3 tangent;
layout(location = 3) in vec3 color;
layout(location = 4) in vec2 texCoord;
// fragment pass through
out vec3 Position;
out vec3 Normal;
out vec3 Tangent;
out vec3 Color;
out vec2 TexCoord;
out vec3 TangentSurface2Light;
out vec3 TangentSurface2View;
uniform vec3 lightPos;
// vertex transformation
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
    mat3 normalMatrix = transpose(mat3(inverse(view * model)));
    Position = vec3((view * model) * vec4(position, 1.0));
    Normal = normalMatrix * normal;
    Tangent = tangent;
    Color = color;
    TexCoord = texCoord;
    gl_Position = projection * view * model * vec4(position, 1.0);
    vec3 light = vec3(view * vec4(lightPos, 1.0));
    vec3 n = normalize(normalMatrix * normal);
    vec3 t = normalize(normalMatrix * tangent);
    vec3 b = cross(n, t);
    mat3 mat = mat3(t.x, b.x, n.x, t.y, b.y, n.y, t.z, b.z, n.z);
    vec3 vector = normalize(light - Position);
    TangentSurface2Light = mat * vector;
    vector = normalize(-Position);
    TangentSurface2View = mat * vector;
}
Fragment shader
#version 330
in vec3 Position;
in vec3 Normal;
in vec3 Tangent;
in vec3 Color;
in vec2 TexCoord;
in vec3 TangentSurface2Light;
in vec3 TangentSurface2View;
out vec4 outColor;
uniform vec3 lightPos;
uniform mat4 view;
uniform sampler2D texture0;
uniform sampler2D texture_normal; // normal
uniform float repeatFactor = 1;
void main()
{
    vec4 texColor = texture(texture0, TexCoord * repeatFactor);
    vec3 light = vec3(view * vec4(lightPos, 1.0));
    float dist = length(light - Position);
    float att = 1.0 / (1.0 + 0.01 * dist + 0.001 * dist * dist);
    // Ambient
    vec4 ambient = vec4(0.2);
    // Diffuse
    vec3 surface2light = normalize(TangentSurface2Light);
    vec3 norm = normalize(texture(texture_normal, TexCoord * repeatFactor).xyz * 2.0 - 1.0);
    float contribution = max(dot(norm, surface2light), 0.0);
    vec4 diffuse = contribution * vec4(0.8);
    // Specular
    vec3 surf2view = normalize(TangentSurface2View);
    vec3 reflection = reflect(-surface2light, norm); // reflection vector
    float specContribution = pow(max(dot(surf2view, reflection), 0.0), 32);
    vec4 specular = vec4(0.6) * specContribution;
    outColor = (ambient + (diffuse * att) + (specular * pow(att, 3))) * texColor;
}
OpenGL Drawing Code
void Render()
{
    ...
    glm::mat4 view, projection; // Model will be done via MatrixStack
    view = glm::lookAt(position, position + direction, up); // cam pos, look at (eye pos), up vec
    projection = glm::perspective(45.0f, (float)width / (float)height, 0.1f, 1000.0f);
    glUniformMatrix4fv(glGetUniformLocation(basicShader.shaderProgram, "view"), 1, GL_FALSE, glm::value_ptr(view));
    glUniformMatrix4fv(glGetUniformLocation(basicShader.shaderProgram, "projection"), 1, GL_FALSE, glm::value_ptr(projection));
    // Lighting
    lightPos.x = 0.0 + sin(time / 125) * 10;
    glUniform3f(glGetUniformLocation(basicShader.shaderProgram, "lightPos"), lightPos.x, lightPos.y, lightPos.z);
    // Objects (use bump mapping on this cube)
    bumpShader.Use();
    glUniformMatrix4fv(glGetUniformLocation(bumpShader.shaderProgram, "view"), 1, GL_FALSE, glm::value_ptr(view));
    glUniformMatrix4fv(glGetUniformLocation(bumpShader.shaderProgram, "projection"), 1, GL_FALSE, glm::value_ptr(projection));
    glUniform3f(glGetUniformLocation(bumpShader.shaderProgram, "lightPos"), lightPos.x, lightPos.y, lightPos.z);
    MatrixStack::LoadIdentity();
    MatrixStack::Scale(2);
    MatrixStack::ToShader(glGetUniformLocation(bumpShader.shaderProgram, "model"));
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, resources.GetTexture("container"));
    glUniform1i(glGetUniformLocation(bumpShader.shaderProgram, "img"), 0);
    glActiveTexture(GL_TEXTURE1); // Normal map
    glBindTexture(GL_TEXTURE_2D, resources.GetTexture("container_normal"));
    glUniform1i(glGetUniformLocation(bumpShader.shaderProgram, "normalMap"), 1);
    glUniform1f(glGetUniformLocation(bumpShader.shaderProgram, "repeatFactor"), 1);
    cubeNormal.Draw();
    MatrixStack::LoadIdentity();
    MatrixStack::Translate(glm::vec3(0.0f, -22.0f, 0.0f));
    MatrixStack::Scale(glm::vec3(200.0f, 20.0f, 200.0f));
    MatrixStack::ToShader(glGetUniformLocation(bumpShader.shaderProgram, "model"));
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, resources.GetTexture("floor"));
    glActiveTexture(GL_TEXTURE1); // Normal map
    glBindTexture(GL_TEXTURE_2D, resources.GetTexture("floor_normal"));
    glUniform1f(glGetUniformLocation(bumpShader.shaderProgram, "repeatFactor"), 100);
    cubeNormal.Draw();
    MatrixStack::LoadIdentity();
    glActiveTexture(GL_TEXTURE0);
    ...
}
EDIT
I now load my objects with the ASSIMP library, with the 'aiProcess_CalcTangentSpace' flag enabled, and changed my shaders accordingly. Since ASSIMP now calculates the tangent vectors automatically, I should have valid tangents and the problem should be solved (as noted by Nicol Bolas). However, I still have the same issue: the specular lighting acts strangely and the diffuse lighting barely shows up, so something else must still be wrong. I have unmarked your answer as the accepted answer for now, Nicol Bolas, and updated my code, since there is still something I'm missing.
It probably has something to do with the translation. As soon as I add a translation (-22.0f in the y direction) to the model matrix, the lighting becomes strange. As long as the floor (which is actually a cube) has no translation, the lighting looks fine.
calculating the tangent vectors in the vertex shader
Well there's your problem. That's not possible for an arbitrary surface.
The tangent and bitangent are not arbitrary vectors that are perpendicular to one another. They are model-space direction vectors that point in the direction of the texture coordinates. The tangent points in the direction of the S texture coordinate, and the bitangent points in the direction of the T texture coordinate (or U and V for the tex coords, if you prefer).
This effectively computes the orientation of the texture relative to each vertex on the surface. You need this orientation, because the way the texture is mapped to the surface matters when you want to make sense of a tangent-space vector.
Remember: tangent-space is the space perpendicular to a surface. But you need to know how that surface is mapped to the object in order to know where "up" is, for example. Take a square surface. You could map a texture so that the +Y part of the square is oriented along the +T direction of the texture. Or it could be along the +X of the square. You could even map it so that the texture is distorted, or rotated at an arbitrary angle.
The tangent and bitangent vectors are intended to correct for this mapping. They point in the S and T directions in model space. So, combined with the normal, they form a transformation matrix that transforms from tangent space into whatever space the three vectors are in (you generally transform the TBN basis to camera space, or whatever space you use for lighting, before using it).
You cannot compute them by just taking the normal and crossing it with some arbitrary vector. That produces a perpendicular normal, but not the right one.
In order to correctly compute the tangent/bitangent, you need access to more than one vertex. You need to be able to see how the texture coordinates change over the surface of the mesh, which is how you compute the S and T directions relative to the mesh.
Vertex shaders cannot access more than one vertex. Geometry shaders can't (generally) access enough vertices to do this either. Compute the tangent/bitangent off-line on the CPU.
mat3 mat = mat3(t.x, b.x, n.x, t.y, b.y, n.y, t.z, b.z, n.z);
is wrong. To use the TBN matrix correctly, you must transpose it, like so:
mat3 mat = transpose(mat3(t.x, b.x, n.x, t.y, b.y, n.y, t.z, b.z, n.z));
then use it to transform your light and view vectors into tangent space. Alternatively (and less efficiently), pass the untransposed TBN matrix to the fragment shader and use it to transform the sampled normal into view space. It's an easy thing to miss, but very important. See http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-13-normal-mapping/ for more info.
On a side note, a minor optimisation for your vertex shader is to calculate the normal matrix on the CPU, once per mesh, since it is the same for every vertex of the mesh; this avoids redundant per-vertex work.