I have started implementing lighting with several light sources. All the tutorials I have seen ignore the distance between the light source and the object (for example https://learnopengl.com/Lighting/Basic-Lighting). So I wrote my own shader, but I am not sure it is correct. Please analyze this shader and tell me what is wrong or incorrect in it. I will be very grateful for any help! Below I give the shader itself, and the results of its work for different values of n and k.
Fragment shader:
#version 130
precision mediump float; // Set the default precision to medium. We don't need as high of a
// precision in the fragment shader.
#define MAX_LAMPS_COUNT 8 // Max lamps count.
uniform vec3 u_LampsPos[MAX_LAMPS_COUNT]; // The position of lamps in eye space.
uniform vec3 u_LampsColors[MAX_LAMPS_COUNT];
uniform vec3 u_AmbientColor = vec3(1, 1, 1);
uniform sampler2D u_TextureUnit;
uniform float u_DiffuseIntensivity = 12;
uniform float ambientStrength = 0.1;
uniform int u_LampsCount;
varying vec3 v_Position; // Interpolated position for this fragment.
varying vec3 v_Normal; // Interpolated normal for this fragment.
varying vec2 v_Texture; // Texture coordinates.
// The entry point for our fragment shader.
void main() {
    float n = 2;
    float k = 2;
    float finalDiffuse = 0;
    vec3 finalColor = vec3(0, 0, 0);

    for (int i = 0; i < u_LampsCount; i++) {
        // Will be used for attenuation.
        float distance = length(u_LampsPos[i] - v_Position);
        // Get a lighting direction vector from the light to the vertex.
        vec3 lightVector = normalize(u_LampsPos[i] - v_Position);
        // Calculate the dot product of the light vector and vertex normal. If the normal and
        // light vector are pointing in the same direction then it will get max illumination.
        float diffuse = max(dot(v_Normal, lightVector), 0.1);
        // Add attenuation.
        diffuse = diffuse / (1 + pow(distance, n));
        // Calculate final diffuse for fragment.
        finalDiffuse += diffuse;
        // Calculate final light color.
        finalColor += u_LampsColors[i] / (1 + pow(distance, k));
    }

    finalColor /= u_LampsCount;
    vec3 ambient = ambientStrength * u_AmbientColor;
    vec3 diffuse = finalDiffuse * finalColor * u_DiffuseIntensivity;
    gl_FragColor = vec4(ambient + diffuse, 1) * texture2D(u_TextureUnit, v_Texture);
}
Vertex shader:
#version 130
uniform mat4 u_MVPMatrix; // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix; // A constant representing the combined model/view matrix.
attribute vec4 a_Position; // Per-vertex position information we will pass in.
attribute vec3 a_Normal; // Per-vertex normal information we will pass in.
attribute vec2 a_Texture; // Per-vertex texture information we will pass in.
varying vec3 v_Position; // This will be passed into the fragment shader.
varying vec3 v_Normal; // This will be passed into the fragment shader.
varying vec2 v_Texture; // This will be passed into the fragment shader.
void main() {
    // Transform the vertex into eye space.
    v_Position = vec3(u_MVMatrix * a_Position);
    // Pass through the texture coordinates.
    v_Texture = a_Texture;
    // Transform the normal's orientation into eye space.
    v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));
    // gl_Position is a special variable used to store the final position.
    // Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
    gl_Position = u_MVPMatrix * a_Position;
}
[Result screenshots for n=2 k=2, n=1 k=3, n=3 k=1, and n=3 k=3]
And if my shader is correct, what should I call these parameters (n, k)?
By "correct" I assume you mean: is the code working as well as it should? These lighting calculations are not by any means physically accurate. Unless you are going for full compatibility with old devices, I would recommend using a higher GLSL version, which allows you to use in and out and some other useful GLSL features. The current version is 4.50 and you are still using 1.30. The vertex shader looks OK, as it is only passing through values to the fragment shader.
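For example, under #version 330 the fragment shader's interface would look something like this (a sketch of the declarations only, not a required change):

#version 330
in vec3 v_Position;  // was: varying vec3 v_Position;
in vec3 v_Normal;    // was: varying vec3 v_Normal;
in vec2 v_Texture;   // was: varying vec2 v_Texture;
out vec4 fragColor;  // replaces the built-in gl_FragColor, written in main()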
As for the fragment shader, there is one optimisation you could make.
The calculation u_LampsPos[i] - v_Position doesn't have to be repeated twice. Do it once, and apply both length and normalize to the result of that single calculation, as in the sketch below.
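Something like this, as a sketch of the loop body:

vec3 toLight = u_LampsPos[i] - v_Position; // computed once
float distance = length(toLight);
vec3 lightVector = toLight / distance;     // same result as normalize(toLight)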
The code is quite small, so there is not much to go wrong GLSL-wise. However, I was wondering why you did finalColor /= u_LampsCount; as this didn't make sense to me.
I've been trying to implement a simple light / shading system, to be precise a simple Phong lighting system without specular highlights. It basically works, except it has some (in my opinion) nasty artifacts.
My first thought was that maybe this is a problem with the texture mipmaps, but disabling them didn't help. My next best guess would be a shader issue, but I can't seem to find the error.
Has anybody ever experienced a similar issue, or have an idea on how to solve this?
[Image of the artifacts]
Vertex shader:
#version 330 core
// Vertex shader
layout(location = 0) in vec3 vpos;
layout(location = 1) in vec2 vuv;
layout(location = 2) in vec3 vnormal;
out vec2 uv; // UV coordinates
out vec3 normal; // Normal in camera space
out vec3 pos; // Position in camera space
out vec3 light[3]; // Vertex -> light vector in camera space
uniform mat4 mv; // View * model matrix
uniform mat4 mvp; // Proj * View * Model matrix
uniform mat3 nm; // Normal matrix for transforming normals into c-space
void main() {
    // Pass uv coordinates
    uv = vuv;
    // Adjust normals
    normal = nm * vnormal;
    // Calculation of vertex in camera space
    pos = (mv * vec4(vpos, 1.0)).xyz;
    // Vector vertex -> light in camera space
    light[0] = (mv * vec4(0.0, 0.3, 0.0, 1.0)).xyz - pos;
    light[1] = (mv * vec4(-6.0, 0.3, 0.0, 1.0)).xyz - pos;
    light[2] = (mv * vec4(0.0, 0.3, 4.8, 1.0)).xyz - pos;
    // Pass position after projection transformation
    gl_Position = mvp * vec4(vpos, 1.0);
}
Fragment shader:
#version 330 core
// Fragment shader
layout(location = 0) out vec3 color;
in vec2 uv; // UV coordinates
in vec3 normal; // Normal in camera space
in vec3 pos; // Position in camera space
in vec3 light[3]; // Vertex -> light vector in camera space
uniform sampler2D tex;
uniform float flicker;
void main() {
    vec3 n = normalize(normal);
    // Ambient
    color = 0.05 * texture(tex, uv).rgb;
    // Diffuse lights
    for (int i = 0; i < 3; i++) {
        vec3 l = normalize(light[i]);
        float cos = clamp(dot(n, l), 0.0, 1.0);
        float length = length(light[i]);
        color += 0.6 * texture(tex, uv).rgb * cos / pow(length, 2);
    }
}
As the first comment says, it looks like your color computation is using insufficient precision. Try using mediump or highp floats.
Additionally, the length = length(light[i]); pow(length,2) expression is quite inefficient, and could also be a source of the observed banding; you should use dot(light[i],light[i]) instead.
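Inside the loop that change would look something like this (a sketch using the question's variable names):

float dist2 = dot(light[i], light[i]); // squared length, no sqrt required
color += 0.6 * texture(tex, uv).rgb * cos / dist2;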
So I found information about my problem, described as "gradient banding", also discussed here. The problem appears to lie in the nature of my textures: both the plain "white" texture and the real texture are mostly grey/white, and there are effectively only 256 levels of grey when using 8 bits per color channel.
The solution would be to implement post-processing dithering or to use better textures.
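As a rough illustration of the dithering idea, a common screen-space hash can add sub-quantization noise before the output is rounded to 8 bits per channel (a sketch, not the exact solution I ended up with):

float noise = fract(sin(dot(gl_FragCoord.xy, vec2(12.9898, 78.233))) * 43758.5453);
color += (noise - 0.5) / 255.0; // +/- half of one 8-bit quantization step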
I'm trying to implement Phong shading in GLSL but am having some issues with the specular component.
The green light is the specular component. The light (a point light) travels in a circle above the plane. The specular highlight always points inward toward the Y axis about which the light rotates, and fans out toward the diffuse reflection, as seen in the image. It doesn't appear to be affected at all by the position of the camera, and I'm not sure where I'm going wrong.
Vertex shader code:
#version 330 core
/*
* Phong Shading with with Point Light (Quadratic Attenutation)
*/
//Input vertex data
layout(location = 0) in vec3 vertexPosition_modelSpace;
layout(location = 1) in vec2 vertexUVs;
layout(location = 2) in vec3 vertexNormal_modelSpace;
//Output Data; will be interpolated for each fragment
out vec2 uvCoords;
out vec3 vertexPosition_cameraSpace;
out vec3 vertexNormal_cameraSpace;
//Uniforms
uniform mat4 mvMatrix;
uniform mat4 mvpMatrix;
uniform mat3 normalTransformMatrix;
void main()
{
    vec3 normal = normalize(vertexNormal_modelSpace);
    // Set vertices in clip space
    gl_Position = mvpMatrix * vec4(vertexPosition_modelSpace, 1);
    // Set output for UVs
    uvCoords = vertexUVs;
    // Convert vertex and normal into eye space
    vertexPosition_cameraSpace = mat3(mvMatrix) * vertexPosition_modelSpace;
    vertexNormal_cameraSpace = normalize(normalTransformMatrix * normal);
}
Fragment Shader Code:
#version 330 core
in vec2 uvCoords;
in vec3 vertexPosition_cameraSpace;
in vec3 vertexNormal_cameraSpace;
//out
out vec4 fragColor;
//uniforms
uniform sampler2D diffuseTex;
uniform vec3 lightPosition_cameraSpace;
void main()
{
    const float materialAmbient = 0.025; // a touch of ambient
    const float materialDiffuse = 0.65;
    const float materialSpec = 0.35;
    const float lightPower = 2.0;
    const float specExponent = 2;

    //--------------Set colors and determine vectors needed for shading-----------------
    // Reflection colors - NOTE: diffuse and ambient reflections will use the texture color
    const vec3 colorSpec = vec3(0, 1, 0); // Green spec color
    vec3 diffuseColor = texture2D(diffuseTex, uvCoords).rgb; // Get color from the texture at fragment
    const vec3 lightColor = vec3(1, 1, 1); // White light

    // Re-normalize normal vectors: after interpolation they may not be unit length any longer
    vec3 normVertexNormal_cameraSpace = normalize(vertexNormal_cameraSpace);
    // Set camera vec
    vec3 viewVec_cameraSpace = normalize(-vertexPosition_cameraSpace); // Since it's view space, camera is at origin
    // Set light vec
    vec3 lightVec_cameraSpace = normalize(lightPosition_cameraSpace - vertexPosition_cameraSpace);
    // Set reflect vec
    vec3 reflectVec_cameraSpace = normalize(reflect(-lightVec_cameraSpace, normVertexNormal_cameraSpace)); // reflect() requires the incident vector, from light to vertex

    //----------------Find intensity of each component---------------------
    // Determine light intensity
    float distance = abs(length(lightPosition_cameraSpace - vertexPosition_cameraSpace));
    float lightAttenuation = 1.0 / ((distance > 0) ? (distance * distance) : 1); // Quadratic
    vec3 lightIntensity = lightPower * lightAttenuation * lightColor;

    // Determine ambient component
    vec3 ambientComp = materialAmbient * diffuseColor * lightIntensity;
    // Determine diffuse component
    float lightDotNormal = max(dot(lightVec_cameraSpace, normVertexNormal_cameraSpace), 0.0);
    vec3 diffuseComp = materialDiffuse * diffuseColor * lightDotNormal * lightIntensity;

    // Determine spec component
    vec3 specComp = vec3(0, 0, 0);
    if (lightDotNormal > 0.0)
    {
        float reflectDotView = max(dot(reflectVec_cameraSpace, viewVec_cameraSpace), 0.0);
        specComp = materialSpec * colorSpec * pow(reflectDotView, specExponent) * lightIntensity;
    }

    // Add ambient + diffuse + spec
    vec3 phongFragRGB = ambientComp + diffuseComp + specComp;

    //----------------------Putting it together-----------------------
    // Output fragment color
    fragColor = vec4(phongFragRGB, 1);
}
Just noting that the normalTransformMatrix seen in the Vertex shader is the inverse-transpose of the model-view matrix.
I am setting a vector from the vertex position to the light, to the camera, and the reflect vector, all in camera space. For the diffuse calculation I am taking the dot product of the light vector and the normal vector, and for the specular component I am taking the dot product of the reflection vector and the view vector. Perhaps there is some fundamental misunderstanding that I have with the algorithm?
I thought at first that the problem could be that I wasn't normalizing the normals entering the fragment shader after interpolation, but adding a line to normalize didn't affect the image. I'm not sure where to look.
I know that there a lot of phong shading questions on the site, but everyone seems to have a problem that is a bit different. If anyone can see where I am going wrong, please let me know. Any help is appreciated.
EDIT: Okay, it's working now! Just as jozxyqk suggested below, I needed to do a mat4*vec4 operation for my vertex position, or lose the translation information. When I first made the change I was getting strange results, until I realized that I was making the same mistake in my OpenGL code for lightPosition_cameraSpace before I passed it to the shader: I was casting the view matrix down to a mat3 for the calculation instead of setting the light position vector as a vec4. Once I edited those lines, the shader appears to be working properly. Thanks for the help, jozxyqk!
I can see two parts which don't look right:
1. vertexPosition_cameraSpace = mat3(mvMatrix) * vertexPosition_modelSpace should be a mat4/vec4(x, y, z, 1) multiply, otherwise it ignores the translation part of the modelview matrix (see the sketch below).
2. distance uses the light position relative to the camera and not the vertex. Use lightVec_cameraSpace instead. (edit: missed the duplicated calculation)
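A sketch of the first fix, keeping the question's variable names:

// mat4 * vec4 keeps the translation part of the modelview matrix
vertexPosition_cameraSpace = (mvMatrix * vec4(vertexPosition_modelSpace, 1.0)).xyz;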
I am attempting to get some simple diffuse lighting to work in GLSL. I have a cube that is being passed in as an array of points, and I'm calculating the face normals inside my geometry shader (because I intend to deform the mesh at run-time, so I'll need the new face normals).
My problem is that the diffuse value changes as I move the camera around the world, so the shading on a face of my cube changes as the camera moves. I have not been able to figure out what I am missing that causes this. My shaders are as follows:
Vertex:
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
uniform mat4 MVP;
void main() {
    gl_Position = MVP * vec4(vertexPosition_modelspace, 1);
}
Geometry:
#version 330
precision highp float;
layout (triangles) in;
layout (triangle_strip) out;
layout (max_vertices = 3) out;
out vec3 normal;
uniform mat4 MVP;
uniform mat4 MV;
void main(void)
{
    for (int i = 0; i < gl_in.length(); i++) {
        gl_Position = gl_in[i].gl_Position;
        vec3 U = gl_in[1].gl_Position.xyz - gl_in[0].gl_Position.xyz;
        vec3 V = gl_in[2].gl_Position.xyz - gl_in[0].gl_Position.xyz;
        normal.x = (U.y * V.z) - (U.z * V.y);
        normal.y = (U.z * V.x) - (U.x * V.z);
        normal.z = (U.x * V.y) - (U.y * V.x);
        normal = normalize(transpose(inverse(MV)) * vec4(normal, 1)).xyz;
        EmitVertex();
    }
    EndPrimitive();
}
Fragment:
#version 330 core
in vec3 normal;
out vec4 out_color;
const vec3 lightDir = vec3(-1,-1,1);
uniform mat4 MV;
void main()
{
    vec3 nlightDir = normalize(vec4(lightDir, 1)).xyz;
    float diffuse = clamp(dot(nlightDir, normal), 0, 1);
    out_color = vec4(diffuse * vec3(0, 1, 0), 1.0);
}
Thanks
There are a lot of wrong things in your code. Most of your problems come from completely forgetting what space various vectors are in. You cannot meaningfully do computations between vectors that are in different spaces.
normal = normalize(transpose(inverse(MV)) * vec4(normal,1)).xyz;
By using 1 as the fourth component of the normal, you completely break this computation. It causes the normal to be translated, which is not appropriate.
Furthermore, your normal value is computed based on gl_Position. And gl_Position is in clip space, not model space. So all you get is the clip-space normal, which is not what you need, want, or can even use.
If you want to compute the camera-space normal, then compute it from camera-space positions. Or compute the model-space normal from model-space positions and use the model/view matrix to transform it to camera-space.
Also, do the inverse/transpose on the CPU and pass it to the shader. Oh, and take all of the normal computation out of the loop; you only need to do it once per triangle (store it in a local variable and copy it to the output for each vertex). And stop doing the cross product manually; use the built-in GLSL cross function, as sketched below.
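Put together, the geometry shader could look something like this. It assumes the vertex shader is extended to also pass a camera-space position (posCS is a hypothetical name introduced here, computed as (MV * vec4(vertexPosition_modelspace, 1.0)).xyz in the vertex shader):

in vec3 posCS[];  // camera-space positions from the vertex shader (assumed addition)
out vec3 normal;  // camera-space face normal

void main(void)
{
    // One normal per triangle: the cross product of two camera-space edges
    // already yields a camera-space normal, so no extra matrix is needed here.
    vec3 U = posCS[1] - posCS[0];
    vec3 V = posCS[2] - posCS[0];
    vec3 faceNormal = normalize(cross(U, V));

    for (int i = 0; i < gl_in.length(); i++) {
        gl_Position = gl_in[i].gl_Position;
        normal = faceNormal;
        EmitVertex();
    }
    EndPrimitive();
}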
vec3 nlightDir = normalize(vec4(lightDir,1)).xyz;
This makes no more sense than using 1 as the fourth component in your transform before. Just normalize lightDir directly.
Equally importantly, if you're doing lighting in camera space, then the light direction needs to change with the camera in order for it to remain in the same apparent direction in the world. So you're going to have to take your world-space light position and transform it to camera space (typically on the CPU, passed in as a uniform).
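For example (a sketch; viewMatrix and lightDir_worldSpace are placeholder names, and in practice this multiply usually happens once per frame on the CPU):

vec3 lightDir_cameraSpace = normalize(mat3(viewMatrix) * lightDir_worldSpace);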
Since built-in uniforms such as gl_LightSource are now marked as deprecated in the latest versions of the OpenGL specification, I am currently implementing a basic lighting system (point lights right now) which receives all the light and material information through custom uniform variables.
I have implemented the light attenuation and specular highlights for a point light, and it seems to be working well, apart from a positioning glitch: I'm manually moving the light, altering its position along the X axis. Judging by the light it casts upon the square plane below it, however, the light source doesn't seem to move along the X axis, but rather diagonally, on both the X and Z axes (possibly Y too, though it's not entirely a positioning bug).
Here's a screenshot of what the distortion looks like (the light is at -35, 5, 0; Suzanne is at 0, 2, 0):
[Screenshot of the distorted lighting]
It looks OK when the light is at 0, 5, 0:
[Screenshot of the correct lighting]
According to the OpenGL specification, all the default light computations take place in eye coordinates, which is what I'm trying to emulate here (hence the multiplication of the light position with the vMatrix). I am using just the view matrix, since applying the model transformation of the vertex batch being rendered to the light doesn't really make sense.
If it matters, all the plane's normals are pointing straight up - 0, 1, 0.
(Note: I fixed the issue now, thanks to msell and myAces! The following snippets are the corrected versions. There's also an option to add spotlight parameters to the light now (D3D-style ones).)
Here's the code I'm using in the vertex shader:
#version 330
uniform mat4 mvpMatrix;
uniform mat4 mvMatrix;
uniform mat4 vMatrix;
uniform mat3 normalMatrix;
uniform vec3 vLightPosition;
uniform vec3 spotDirection;
uniform bool useTexture;
uniform bool fogEnabled;
uniform float minFogDistance;
uniform float maxFogDistance;
in vec4 vVertex;
in vec3 vNormal;
in vec2 vTexCoord;
smooth out vec3 vVaryingNormal;
smooth out vec3 vVaryingLightDir;
smooth out vec2 vVaryingTexCoords;
smooth out float fogFactor;
smooth out vec4 vertPos_ec;
smooth out vec4 lightPos_ec;
smooth out vec3 spotDirection_ec;
void main() {
    // Surface normal in eye coords
    vVaryingNormal = normalMatrix * vNormal;

    vec4 vPosition4 = mvMatrix * vVertex;
    vec3 vPosition3 = vPosition4.xyz / vPosition4.w;

    vec4 tLightPos4 = vMatrix * vec4(vLightPosition, 1.0);
    vec3 tLightPos = tLightPos4.xyz / tLightPos4.w;

    // Diffuse light
    // Vector to light source (do NOT normalize this!)
    vVaryingLightDir = tLightPos - vPosition3;

    if (useTexture) {
        vVaryingTexCoords = vTexCoord;
    }

    lightPos_ec = vec4(tLightPos, 1.0f);
    vertPos_ec = vec4(vPosition3, 1.0f);

    // Transform the light direction (for spotlights)
    vec4 spotDirection_ec4 = vec4(spotDirection, 1.0f);
    spotDirection_ec = spotDirection_ec4.xyz / spotDirection_ec4.w;
    spotDirection_ec = normalMatrix * spotDirection;

    // Projected vertex
    gl_Position = mvpMatrix * vVertex;

    // Fog factor
    if (fogEnabled) {
        float len = length(gl_Position);
        fogFactor = (len - minFogDistance) / (maxFogDistance - minFogDistance);
        fogFactor = clamp(fogFactor, 0, 1);
    }
}
And this is the code I'm using in the fragment shader:
#version 330
uniform vec4 globalAmbient;
// ADS shading model
uniform vec4 lightDiffuse;
uniform vec4 lightSpecular;
uniform float lightTheta;
uniform float lightPhi;
uniform float lightExponent;
uniform int shininess;
uniform vec4 matAmbient;
uniform vec4 matDiffuse;
uniform vec4 matSpecular;
// Cubic attenuation parameters
uniform float constantAt;
uniform float linearAt;
uniform float quadraticAt;
uniform float cubicAt;
// Texture stuff
uniform bool useTexture;
uniform sampler2D colorMap;
// Fog
uniform bool fogEnabled;
uniform vec4 fogColor;
smooth in vec3 vVaryingNormal;
smooth in vec3 vVaryingLightDir;
smooth in vec2 vVaryingTexCoords;
smooth in float fogFactor;
smooth in vec4 vertPos_ec;
smooth in vec4 lightPos_ec;
smooth in vec3 spotDirection_ec;
out vec4 vFragColor;
// Cubic attenuation function
float att(float d) {
    float den = constantAt + d * linearAt + d * d * quadraticAt + d * d * d * cubicAt;
    if (den == 0.0f) {
        return 1.0f;
    }
    return min(1.0f, 1.0f / den);
}

float computeIntensity(in vec3 nNormal, in vec3 nLightDir) {
    float intensity = max(0.0f, dot(nNormal, nLightDir));
    float cos_outer_cone = lightTheta;
    float cos_inner_cone = lightPhi;
    float cos_inner_minus_outer = cos_inner_cone - cos_outer_cone;

    // If we are a spotlight
    if (lightTheta > 0.0f) {
        float cos_cur = dot(normalize(spotDirection_ec), -nLightDir);
        // d3d style smooth edge
        float spotEffect = clamp((cos_cur - cos_outer_cone) / cos_inner_minus_outer, 0.0, 1.0);
        spotEffect = pow(spotEffect, lightExponent);
        intensity *= spotEffect;
    }

    float attenuation = att(length(lightPos_ec - vertPos_ec));
    intensity *= attenuation;
    return intensity;
}
/**
* Phong per-pixel lighting shading model.
* Implements basic texture mapping and fog.
*/
void main() {
    vec3 ct, cf;
    vec4 texel;
    float at, af;

    if (useTexture) {
        texel = texture2D(colorMap, vVaryingTexCoords);
    } else {
        texel = vec4(1.0f);
    }
    ct = texel.rgb;
    at = texel.a;

    vec3 nNormal = normalize(vVaryingNormal);
    vec3 nLightDir = normalize(vVaryingLightDir);

    float intensity = computeIntensity(nNormal, nLightDir);
    cf = matAmbient.rgb * globalAmbient.rgb + intensity * lightDiffuse.rgb * matDiffuse.rgb;
    af = matAmbient.a * globalAmbient.a + lightDiffuse.a * matDiffuse.a;

    if (intensity > 0.0f) {
        // Specular light
        // - added *after* the texture color is multiplied so that
        //   we get a truly shiny result
        vec3 vReflection = normalize(reflect(-nLightDir, nNormal));
        float spec = max(0.0, dot(nNormal, vReflection));
        float fSpec = pow(spec, shininess) * lightSpecular.a;
        cf += intensity * vec3(fSpec) * lightSpecular.rgb * matSpecular.rgb;
    }

    // Color modulation
    vFragColor = vec4(ct * cf, at * af);

    // Add the fog to the mix
    if (fogEnabled) {
        vFragColor = mix(vFragColor, fogColor, fogFactor);
    }
}
What math bug could be causing this distortion?
Edit 1:
I've updated the shader code. The attenuation is now being computed in the fragment shader, as it should have been all along. It's currently disabled, though - the bug doesn't have anything to do with the attenuation. When rendering only the attenuation factor of the light (see the last few lines of the fragment shader), the attenuation is being computed right. This means that the light position is being correctly transformed into eye coordinates, so it can't be the source of the bug.
The last few lines of the fragment shader can be used for some (slightly hackish but nevertheless insightful) debugging - it seems the intensity of the light is not being computed right per-fragment, though I have no idea why.
What's interesting is that this bug is only noticeable on (very) large quads like the floor in the images. It's not noticeable on small models.
Edit 2:
I've updated the shader code to a working version. It's all good now, and I hope it helps any future reader, since as of today I have yet to see a GLSL tutorial that implements lights with absolutely no fixed functionality and no secret implicit transforms (such as gl_LightSource[i].* and the implicit transformations to eye space).
My code is licensed under the BSD 2-clause license and can be found on GitHub!
I recently had a similar problem, where lighting worked somewhat wrong when using large polygons. The problem was normalizing the eye vector in the vertex shader, as interpolating normalized values produces incorrect results.
Change
vVaryingLightDir = normalize( tLightPos - vPosition3 );
to
vVaryingLightDir = tLightPos - vPosition3;
in your vertex shader. You can keep the normalization in the fragment shader.
Just because I noticed:
vec3 tLightPos = (vMatrix * vec4(vLightPosition, 1.0)).xyz;
you are simply eliminating the homogeneous coordinate here without dividing by it first. This will cause some problems.
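The safer version, as in the corrected vertex shader above, divides by w before discarding it:

vec4 tLightPos4 = vMatrix * vec4(vLightPosition, 1.0);
vec3 tLightPos = tLightPos4.xyz / tLightPos4.w; // divide by w before dropping it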
When I use my shaders I get the following results:
[Screenshots of the two lit spheres]
One problem is that the specular light is sort of deformed and you can see the sphere's triangles; another is that I can see specular where I shouldn't (second image). One sphere's lighting is done in the vertex shader, the other in the fragment shader.
Here is what my vertex lighting shader looks like:
Vertex:
// Material data.
uniform vec3 uAmbient;
uniform vec3 uDiffuse;
uniform vec3 uSpecular;
uniform float uSpecIntensity;
uniform float uTransparency;
uniform mat4 uWVP;
uniform mat3 uN;
uniform vec3 uSunPos;
uniform vec3 uEyePos;
attribute vec4 attrPos;
attribute vec3 attrNorm;
varying vec4 varColor;
void main(void)
{
    vec3 N = uN * attrNorm;
    vec3 L = normalize(uSunPos);
    vec3 H = normalize(L + uEyePos);

    float df = max(0.0, dot(N, L));
    float sf = max(0.0, dot(N, H));
    sf = pow(sf, uSpecIntensity);

    vec3 col = uAmbient + uDiffuse * df + uSpecular * sf;
    varColor = vec4(col, uTransparency);

    gl_Position = uWVP * attrPos;
}
Fragment:
varying vec4 varColor;
void main(void)
{
    //vec4 col = texture2D(texture_0, varTexCoords);
    //col.r += uLightDir.x;
    //col.rgb = vec3(pow(gl_FragCoord.z, 64));
    gl_FragColor = varColor;
}
It is possible that the data I supply from my code is wrong.
uN is the world matrix (not inverted and not transposed, even though doing that didn't seem to make any difference).
uWVP is the world-view-projection matrix.
Any ideas as to where the problem might be would be appreciated.
[EDIT] Here are my light calculations done in the fragment shader:
Vertex shader file:
uniform mat4 uWVP;
uniform mat3 uN;
attribute vec4 attrPos;
attribute vec3 attrNorm;
varying vec3 varEyeNormal;
void main(void)
{
    varEyeNormal = uN * attrNorm;
    gl_Position = uWVP * attrPos;
}
Fragment shader file:
// Material data.
uniform vec3 uAmbient;
uniform vec3 uDiffuse;
uniform vec3 uSpecular;
uniform float uSpecIntensity;
uniform float uTransparency;
uniform vec3 uSunPos;
uniform vec3 uEyePos;
varying vec3 varEyeNormal;
void main(void)
{
    vec3 N = varEyeNormal;
    vec3 L = normalize(uSunPos);
    vec3 H = normalize(L + uEyePos);

    float df = max(0.0, dot(N, L));
    float sf = max(0.0, dot(N, H));
    sf = pow(sf, uSpecIntensity);

    vec3 col = uAmbient + uDiffuse * df + uSpecular * sf;
    gl_FragColor = vec4(col, uTransparency);
}
[EDIT2] As Joakim pointed out, I wasn't normalizing varEyeNormal in the fragment shader. After fixing that, the result from the fragment shader version is much better. I also applied the normalize function to uEyePos, so the specular no longer goes to the dark side. Thanks for all the help.
Short answer: You need to normalize varEyeNormal in the fragment shader, instead of in the vertex shader.
Longer answer:
In order to get smooth lighting you need to calculate the normals per pixel instead of per vertex. Varyings computed in the vertex shader are linearly interpolated before being passed to the fragment shader, which works great as a shortcut in some cases but worse in others.
The reason you are seeing the triangle edges is that the interpolation between normals results in normals shorter than 1.0 at all pixels between the vertices.
To correct this you will need to normalize the normals in the fragment shader instead of in the vertex shader.
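In the fragment shader above, that is a one-line change:

vec3 N = normalize(varEyeNormal); // re-normalize after interpolation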
The Normal matrix should be the upper 3x3 of the transpose of the inverse of the Modelview matrix, which is equivalent to the upper 3x3 of the Modelview matrix if the Modelview contains only rotations and translations (no scaling).
For more about the Normal matrix, see: http://www.lighthouse3d.com/tutorials/glsl-tutorial/the-normal-matrix/
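In GLSL that corresponds to something like the following sketch (inverse() requires GLSL 1.40 or newer; in practice the normal matrix is usually computed once per draw call on the CPU and passed as a uniform):

mat3 normalMatrix = transpose(inverse(mat3(modelViewMatrix)));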
(Edited for correctness according to comment below.)