Spot light moves when camera rotates when using a shader (OpenGL)

I implemented a simple shader for the lighting; it kind of works, but the light seems to move when the camera rotates (and only when it rotates).
I'm experimenting with a spotlight; this is what it looks like (it's the spot in the center):
If I now rotate the camera, the spot moves around; for example, here I looked down (I didn't move at all, just looked down) and the spot appeared at my feet:
I've looked this up and it seems to be a common mistake, caused by mixing reference frames in the shader and/or by setting the light's position before positioning the camera.
The thing is, I'm pretty sure I'm doing neither of those two things, but apparently I'm wrong; I just can't find the bug.
Here's the shader:
Vertex Shader
varying vec3 vertexNormal;
varying vec3 lightDirection;

void main()
{
    vertexNormal = gl_NormalMatrix * gl_Normal;
    lightDirection = vec3(gl_LightSource[0].position.xyz - (gl_ModelViewMatrix * gl_Vertex).xyz);
    gl_Position = ftransform();
}
Fragment Shader
uniform vec3 ambient;
uniform vec3 diffuse;
uniform vec3 specular;
uniform float shininess;

varying vec3 vertexNormal;
varying vec3 lightDirection;

void main()
{
    vec3 color = vec3(0.0, 0.0, 0.0);
    vec3 lightDirNorm;
    vec3 eyeVector;
    vec3 half_vector;
    float diffuseFactor;
    float specularFactor;
    float attenuation;
    float lightDistance;
    vec3 normalDirection = normalize(vertexNormal);

    lightDirNorm = normalize(lightDirection);
    eyeVector = vec3(0.0, 0.0, 1.0);
    half_vector = normalize(lightDirNorm + eyeVector);

    diffuseFactor = max(0.0, dot(normalDirection, lightDirNorm));
    specularFactor = max(0.0, dot(normalDirection, half_vector));
    specularFactor = pow(specularFactor, shininess);

    color += ambient * gl_LightSource[0].ambient.rgb;
    color += diffuseFactor * diffuse * gl_LightSource[0].diffuse.rgb;
    color += specularFactor * specular * gl_LightSource[0].specular.rgb;

    lightDistance = length(lightDirection);

    // SCALE_FACTOR is assumed to be #defined elsewhere (e.g. injected by the host application)
    float constantAttenuation = 1.0;
    float linearAttenuation = (0.02 / SCALE_FACTOR) * lightDistance;
    float quadraticAttenuation = (0.0 / SCALE_FACTOR) * lightDistance * lightDistance;

    attenuation = 1.0 / (constantAttenuation + linearAttenuation + quadraticAttenuation);

    // If it's a spotlight
    if (gl_LightSource[0].spotCutoff <= 90.0)
    {
        float spotEffect = dot(normalize(gl_LightSource[0].spotDirection), normalize(-lightDirection));

        if (spotEffect > gl_LightSource[0].spotCosCutoff)
        {
            spotEffect = pow(spotEffect, gl_LightSource[0].spotExponent);
            attenuation = spotEffect / (constantAttenuation + linearAttenuation + quadraticAttenuation);
        }
        else
            attenuation = 0.0;
    }

    // Multiply the color by the attenuation factor
    color = color * attenuation;

    gl_FragColor = vec4(color, 1.0);
}
Now, I can't show you the rendering code, because it's written in a custom language that integrates OpenGL and is designed for creating 3D applications (showing it wouldn't help); but what I do is something like this:
SetupLights();
UpdateCamera();
RenderStuff();
Where:
SetupLights contains actual OpenGL calls that set up the lights and their positions;
UpdateCamera updates the camera's position using the language's built-in classes; I don't have much control here;
RenderStuff calls the language's built-in functions to draw the scene; I don't have much control here either.
So, either I'm doing something wrong in the shader or there's something in the language that "behind the scenes" breaks things.
Can you point me in the right direction?

You wrote:
the light's position is already in world coordinates, and that is where I'm doing the computations
However, since you're applying gl_ModelViewMatrix to your vertex and gl_NormalMatrix to your normal, those values are probably in view space, and comparing them against a world-space light position is likely what causes the moving light.
As an aside, your eye vector looks like it is meant to be in view coordinates, but view space is a right-handed coordinate system, so "forward" points along the negative z-axis. Also, your specular computation will likely be off because you use the same eye vector for all fragments; it should instead be the (normalized) direction from each fragment's view-space position towards the camera.
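For illustration, here is a minimal sketch (not your original code) of a consistent eye-space version. It assumes the light is positioned with glLightfv after the camera transform has been applied, so that gl_LightSource[0].position is already stored in eye space; a per-fragment eye vector then replaces the constant (0.0, 0.0, 1.0):

// Vertex shader (sketch): also forward the eye-space vertex position
varying vec3 vertexNormal;
varying vec3 lightDirection;
varying vec3 eyePosition;   // vertex position in eye space

void main()
{
    eyePosition    = vec3(gl_ModelViewMatrix * gl_Vertex);
    vertexNormal   = gl_NormalMatrix * gl_Normal;
    lightDirection = gl_LightSource[0].position.xyz - eyePosition;
    gl_Position    = ftransform();
}

// Fragment shader (sketch): in eye space the camera sits at the origin,
// so the per-fragment view direction is the negated, normalized position
varying vec3 eyePosition;   // received from the vertex shader above

// inside main(), instead of eyeVector = vec3(0.0, 0.0, 1.0):
vec3 eyeVector   = normalize(-eyePosition);
vec3 half_vector = normalize(normalize(lightDirection) + eyeVector);

If the custom language positions the light before the camera transform is in place, gl_LightSource[0].position ends up stored in a different space and the same mixing problem reappears, so the order of SetupLights and UpdateCamera is worth checking as well.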

Related

GLSL shadow mapping: shadow2DProj causes rendering artifacts

Managed to get shadow mapping to work in my OpenGL rendering engine, but it is producing some weird artifacts that I think are "shadow acne". However, I am using shadow2DProj to get the shadow value from the shadow depth map, which for me has proven to be the only way to get shadows to show up at all. Therefore, looking around at various tutorials at learnopengl, opengl-tutorials and others has yielded no help. I would like some advice as to how I could mitigate this problem.
Here is my shader that I use to draw the shadow map with:
#version 330 core

out vec4 FragColor;

struct Light {
    vec3 position;
    vec3 ambient;
    vec3 diffuse;
    vec3 specular;
    vec3 attenuation;
};

in vec3 FragPos;
in vec3 Normal;
in vec2 TexCoords;
in vec4 ShadowCoords;

uniform vec3 viewPos;
uniform sampler2D diffuseMap;
uniform sampler2D specularMap;
uniform sampler2DShadow shadowMap;
uniform Light lights[4];
uniform float shininess;

float calculateShadow(vec3 lightDir)
{
    float shadowValue = shadow2DProj(shadowMap, ShadowCoords).r;
    float shadow = shadowValue;
    return shadow;
}

vec3 calculateAmbience(Light light, vec3 textureMap)
{
    return light.ambient * textureMap;
}

void main()
{
    vec4 tex = texture(diffuseMap, TexCoords);
    if (tex.a < 0.5)
    {
        discard;
    }

    vec3 ambient = vec3(0.0);
    vec3 diffuse = vec3(0.0);
    vec3 specular = vec3(0.0);

    vec3 norm = normalize(Normal);
    vec3 viewDir = normalize(viewPos - FragPos);

    for (int i = 0; i < 4; i++)
    {
        ambient = ambient + lights[i].ambient * tex.rgb;

        vec3 lightDir = normalize(lights[i].position - FragPos);
        float diff = max(dot(norm, lightDir), 0.0);
        diffuse = diffuse + (lights[i].diffuse * diff * tex.rgb);

        vec3 reflectDir = reflect(-lightDir, norm);
        float spec = pow(max(dot(viewDir, reflectDir), 0.0), shininess);
        specular = specular + (lights[i].specular * spec * tex.rgb);

        float dist = length(lights[i].position - FragPos);
        float attenuation = lights[i].attenuation.x + (lights[i].attenuation.y * dist) + (lights[i].attenuation.z * (dist * dist));
        if (attenuation > 0.0)
        {
            ambient *= 1.0 / attenuation;
            diffuse *= 1.0 / attenuation;
            specular *= 1.0 / attenuation;
        }
    }

    float shadow = calculateShadow(normalize(lights[0].position - FragPos));
    vec3 result = (ambient + (shadow) * (diffuse + specular));
    FragColor = vec4(result, 1.0);
}
This is the result I get. Notice the weird stripes on top of the cube:
Reading the description about shadow acne, this seems to be the same phenomenon (source: https://learnopengl.com/Advanced-Lighting/Shadows/Shadow-Mapping).
According to that article, I need to check if the ShadowCoord depth value, minus a bias constant, is lower than the shadow value read from the shadow map. If so, we have shadow. Now... here comes the problem. Since I am using shadow2DProj and not texture() to get my shadow value from the shadow map (through some intricate sorcery no doubt), I am unable to "port" that article's code into my shader and get it to work. Here is what I have tried:
float calculateShadow(vec3 lightDir)
{
    float closestDepth = shadow2DProj(shadowMap, ShadowCoords).r;
    float bias = 0.005;
    float currentDepth = ShadowCoords.z;
    float shadow = currentDepth - bias > closestDepth ? 1.0 : 0.0;
    return shadow;
}
But that produces no shadows at all, since the "shadow" float is always assigned 1.0 from the depth & bias check. I must admit that I do not fully understand what I am getting from using shadow2DProj(...).r as compared to texture(...).r, but it sure is something completely different.
This question rests on a misunderstanding of what shadow2DProj does: the function does not return a depth value, but the result of a depth comparison. Therefore, apply the bias before calling it.
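Roughly speaking, assuming the depth texture's compare mode is set to GL_COMPARE_REF_TO_TEXTURE (which it must be for a sampler2DShadow to behave sensibly), the call behaves like the following sketch. It is written against a hypothetical plain sampler2D named rawDepthMap that holds the same depth data; the real comparison happens in hardware and may be filtered over several texels:

// Conceptual equivalent of shadow2DProj(shadowMap, ShadowCoords).r
uniform sampler2D rawDepthMap;                       // hypothetical, same contents as shadowMap

vec3 c = ShadowCoords.xyz / ShadowCoords.w;          // projective divide done by the *Proj variant
float storedDepth = texture(rawDepthMap, c.xy).r;    // depth written during the light pass
float lit = (c.z <= storedDepth) ? 1.0 : 0.0;        // 1.0 = lit, 0.0 = shadowed (default GL_LEQUAL compare)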
Solution 1
Apply the bias prior to running the comparison. ShadowCoords.z is your currentDepth value.
float calculateShadow(vec3 lightDir)
{
    const float bias = 0.005;
    float shadow = shadow2DProj(shadowMap, vec4(ShadowCoords.xy, ShadowCoords.z - bias, ShadowCoords.w)).r;
    return shadow;
}
Solution 2
Apply the bias while performing the light-space depth pass.
glPolygonOffset(float factor, float units)
This function offsets depth values by factor * DZ + r * units, where DZ is the depth slope of the polygon and r is the smallest resolvable depth difference. Setting these to positive values pushes polygons slightly deeper into the scene, which acts like our bias.
During initialization:
glEnable(GL_POLYGON_OFFSET_FILL);
During Light Depth Pass:
// These parameters will need to be tweaked for your scene
// to prevent acne and mitigate peter panning
glPolygonOffset(1.0, 1.0);
// draw potential shadow casters
// return to default settings (no offset)
glPolygonOffset(0, 0);
Shader Code:
// we don't even need the light direction for slope bias
float calculateShadow()
{
    float shadow = shadow2DProj(shadowMap, ShadowCoords).r;
    return shadow;
}

How to handle lightning (ambient, diffuse, specular) for point spheres in openGL

Initial situation
I want to visualize simulation data in openGL.
My data consists of particle positions (x, y, z) where each particle has some properties (like density, temperature, ...) which will be used for coloring. Those (SPH) particles (100k to several millions), grouped together, actually represent planets, in case you wonder. I want to render those particles as small 3D spheres and add ambient, diffuse and specular lighting.
Status quo and questions
In MY case: in which coordinate frame should I do the lighting calculations? Which way is the "best" to pass the various components through the pipeline?
I saw that it is common to do it in view space, which is also very intuitive. However, the normals at the different fragment positions are calculated in the fragment shader in clip space coordinates (see appended fragment shader). Can I actually convert them "back" into view space to do the lighting calculations in view space for all the fragments? Would there be any advantage compared to doing it in clip space?
It would be easier to get the normals in view space if I would use meshes for each sphere but I think with several million particles this would decrease performance drastically, so better do it with sphere intersection, would you agree?
PS: I don't need a model matrix since all the particles are already in place.
//VERTEX SHADER
#version 330 core

layout (location = 0) in vec3 position;
layout (location = 2) in float density;

uniform float radius;
uniform vec3 lightPos;
uniform vec3 viewPos;

out vec4 lightDir;
out vec4 viewDir;
out vec4 viewPosition;
out vec4 posClip;
out float vertexColor;

// transformation matrices
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    lightDir = projection * view * vec4(lightPos - position, 1.0f);
    viewDir = projection * view * vec4(viewPos - position, 1.0f);
    viewPosition = projection * view * vec4(lightPos, 1.0f);
    posClip = projection * view * vec4(position, 1.0f);

    gl_Position = posClip;
    gl_PointSize = radius;
    vertexColor = density;
}
I know that perspective division happens for the gl_Position variable, but does that actually happen to ALL vec4s that are passed from the vertex to the fragment shader? If not, maybe the calculations in the fragment shader are wrong?
And here is the fragment shader, where the normals and the diffuse/specular lighting calculations are done in clip space:
//FRAGMENT SHADER
#version 330 core

in float vertexColor;
in vec4 lightDir;
in vec4 viewDir;
in vec4 posClip;
in vec4 viewPosition;

uniform vec3 lightColor;

vec4 colormap(float x); // returns vec4(r, g, b, a)

out vec4 vFragColor;

void main(void)
{
    // AMBIENT LIGHT
    float ambientStrength = 0.0;
    vec3 ambient = ambientStrength * lightColor;

    // Normal calculation done in clip space (first from texture (gl_PointCoord 0 to 1) coords to NDC (-1 to 1))
    vec3 normal;
    normal.xy = gl_PointCoord * 2.0 - vec2(1.0);   // transform from 0->1 point primitive coords to NDC -1->1
    float mag = dot(normal.xy, normal.xy);         // squared distance from the sprite center; no sqrt needed when comparing against 1
    if (mag > 1.0)                                 // discard fragments outside sphere
        discard;
    normal.z = sqrt(1.0 - mag);                    // because x^2 + y^2 + z^2 = 1

    // DIFFUSE LIGHT
    float diff = max(0.0, dot(vec3(lightDir), normal));
    vec3 diffuse = diff * lightColor;

    // SPECULAR LIGHT
    float specularStrength = 0.1;
    vec3 viewDir = normalize(vec3(viewPosition) - vec3(posClip));
    vec3 reflectDir = reflect(-vec3(lightDir), normal);
    float shininess = 64;
    float spec = pow(max(dot(vec3(viewDir), vec3(reflectDir)), 0.0), shininess);
    vec3 specular = specularStrength * spec * lightColor;

    vFragColor = colormap(vertexColor / 8) * vec4(ambient + diffuse + specular, 1);
}
Now this actually "kind of" works, but I have the feeling that the sides of the sphere which do NOT face the light source are also being illuminated, which shouldn't happen. How can I fix this?
Some weird effect: at this moment the light source is actually BEHIND the left planet (it just peeks out a little bit at the top left), but there are still diffuse and specular effects going on. That side should actually be pretty dark! =(
Also, at this moment I get a glError 1282 in the fragment shader, and I don't know where it comes from since the shader program actually compiles and runs; any suggestions? :)
The things that you are drawing aren't actually spheres. They just look like them from afar. This is absolutely ok if you are fine with that. If you need geometrically correct spheres (with correct sizes and with a correct projection), you need to do proper raycasting. This seems to be a comprehensive guide on this topic.
1. What coordinate system?
In the end, it is up to you. The coordinate system just needs to fulfill some requirements. It must be angle-preserving (because lighting is all about angles). And if you need distance-based attenuation, it should also be distance-preserving. The world and the view coordinate systems usually fulfill these requirements. Clip space is not suited for lighting calculations as neither angles nor distances are preserved. Furthermore, gl_PointCoord is in none of the usual coordinate systems. It is its own coordinate system and you should only use it together with other coordinate systems if you know their relation.
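As a concrete illustration of the view-space option, here is a minimal vertex-shader sketch (reusing the question's uniforms and assuming lightPos is given in world space) that hands view-space quantities to the fragment shader so the lighting can be evaluated there; gl_PointCoord would then be used only to reconstruct the sphere normal, as noted above:

#version 330 core
layout (location = 0) in vec3 position;

uniform mat4 view;        // world -> view
uniform mat4 projection;  // view  -> clip
uniform vec3 lightPos;    // assumed to be given in world space
uniform float radius;

out vec3 posView;         // fragment position in view space
out vec3 lightPosView;    // light position in view space

void main()
{
    posView      = vec3(view * vec4(position, 1.0));
    lightPosView = vec3(view * vec4(lightPos, 1.0));
    gl_Position  = projection * vec4(posView, 1.0);
    gl_PointSize = radius;
}

In the fragment shader, the view direction is then simply normalize(-posView), because the camera sits at the origin of view space, and the light direction is normalize(lightPosView - posView).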
2. Meshes or what?
Meshes are absolutely not suited to render spheres. As mentioned above, raycasting or some screen-space approximation are better choices. Here is an example shader that I used in my projects:
#version 330

out vec4 result;

in fData
{
    vec4 toPixel;  //fragment coordinate in particle coordinates
    vec4 cam;      //camera position in particle coordinates
    vec4 color;    //sphere color
    float radius;  //sphere radius
} frag;

uniform mat4 p; //projection matrix

void main(void)
{
    vec3 v = frag.toPixel.xyz - frag.cam.xyz;
    vec3 e = frag.cam.xyz;
    float ev = dot(e, v);
    float vv = dot(v, v);
    float ee = dot(e, e);
    float rr = frag.radius * frag.radius;

    float radicand = ev * ev - vv * (ee - rr);
    if (radicand < 0)
        discard;

    float rt = sqrt(radicand);

    float lambda = max(0, (-ev - rt) / vv); //first intersection on the ray
    float lambda2 = (-ev + rt) / vv;        //second intersection on the ray
    if (lambda2 < lambda)                   //if the first intersection is behind the camera
        discard;

    vec3 hit = lambda * v;                              //intersection point
    vec3 normal = (frag.cam.xyz + hit) / frag.radius;

    vec4 proj = p * vec4(hit, 1);                       //intersection point in clip space
    gl_FragDepth = ((gl_DepthRange.diff * proj.z / proj.w) + gl_DepthRange.near + gl_DepthRange.far) / 2.0;

    vec3 vNormalized = -normalize(v);
    float nDotL = dot(vNormalized, normal);

    vec3 c = frag.color.rgb * nDotL + vec3(0.5, 0.5, 0.5) * pow(nDotL, 120);
    result = vec4(c, frag.color.a);
}
3. Perspective division
Perspective division is not applied to your attributes. The GPU does perspective division on the data that you pass via gl_Position on the way to transforming them to screen space. But you will never actually see this perspective-divided position unless you do it yourself.
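To make that concrete, a small sketch using the question's posClip output:

// posClip arrives in the fragment shader as an interpolated clip-space value,
// still with w intact; the divide below is what the GPU does for gl_Position
// on its way to the screen, and what you would have to do yourself here:
vec3 ndc = posClip.xyz / posClip.w;   // normalized device coordinates, each in [-1, 1]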
4. Light in the dark
This might be the result of you mixing different coordinate systems or doing lighting calculations in clip space. Btw, the specular part is usually not multiplied by the material color. This is light that gets reflected directly at the surface. It does not penetrate the surface (which would absorb some colors depending on the material). That's why those highlights are usually white (or whatever light color you have), even on black objects.
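In code, that separation typically looks like the following line (a generic sketch with hypothetical names, not taken from the shader above):

// ambient and diffuse are tinted by the surface/material color,
// while the specular highlight keeps the light color:
vec3 shaded = surfaceColor * (ambientTerm + diffuseTerm) + lightColor * specularTerm;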

Struggling with casting cloud shadow on earth sphere in OpenGL

I am trying to do an earth simulation in OpenGL with GLSL shaders, and so far it's been going decently, although I am stuck on a small problem. Right now I have 3 spheres: one for ground level (earth), one for clouds, and a third for the atmosphere (scattering effects). The earth sphere handles most of the textures.
The cloud sphere is slightly bigger than the earth sphere, mapped with a cloud texture and normal mapped using a normal map created with the Photoshop plugin. One more thing to point out: the rotation speed of the cloud sphere is slightly greater than that of the earth sphere.
This is where things get confusing for me. I am trying to cast the shadow of the clouds onto the ground (earth) sphere by passing the cloud texture into the earth sphere's shader and subtracting the cloud's color from the earth's color. Since the rotation speeds of the two spheres are different, I figured that multiplying the cloud sphere's rotation matrix with the UV coordinates of the cloud texture should solve the problem. But sadly, the shadows and the clouds do not rotate in sync. I was hoping someone could help me figure out the math to make the shadows and the clouds rotate in sync, no matter how different the rotation speeds of the two spheres are.
Here is my fragment shader for the earth where I'm calculating the cloud's shadow:
#version 400 core

uniform sampler2D day;
uniform sampler2D bumpMap;
uniform sampler2D night;
uniform sampler2D specMap;
uniform sampler2D clouds;

uniform mat4 cloudRotation;

in vec3 vPos;
in vec3 lightVec;
in vec3 eyeVec;
in vec3 halfVec;
in vec2 texCoord;

out vec4 frag_color;

void main()
{
    vec3 normal = 2.0 * texture(bumpMap, texCoord).rgb - 1.0;
    //normal.z = 1 - normal.x * normal.x - normal.y * normal.y;
    normal = normalize(normal);

    vec4 spec = vec4(1.0, 0.941, 0.898, 1.0);
    vec4 specMapColor = texture2D(specMap, texCoord);

    vec3 L = lightVec;
    vec3 N = normal;
    vec3 Emissive = normalize(-vPos);
    vec3 R = reflect(-L, N);
    float dotProd = max(dot(R, Emissive), 0.0);
    vec4 specColor = spec * pow(dotProd, 6.0) * 0.5;
    float diffuse = max(dot(N, L), 0.0);

    vec2 cloudTexCoord = vec2(cloudRotation * vec4(texCoord, 0.0, 1.0));
    vec3 cloud_color = texture2D(clouds, cloudTexCoord).rgb;
    vec3 day_color = texture2D(day, texCoord).rgb * diffuse + specColor.rgb * specMapColor.g - cloud_color * 0.25; // * (1 - cloud_color.r) + cloud_color.r * diffuse;
    vec3 night_color = texture2D(night, texCoord).rgb * 0.5;  // * (1 - cloud_color.r) * 0.5;
    vec3 color = day_color;

    if (dot(N, L) < 0.1)
        color = mix(night_color, day_color, (diffuse + 0.1) * 5.0);

    frag_color = vec4(color, 1.0);
}
Here's a sample output as a result of the above shader. Note that the shadows start out at the correct position, but due to the wrong rotation speed they tend to move ahead of the rotation of the cloud sphere.
Again, it would be really helpful if anyone could help me figure out the math behind keeping the shadows and the clouds in sync.
Thanks in advance

GLSL point light shader moving with camera

I've been trying to make a basic static point light using shaders for an LWJGL game, but it appears as if the light moves as the camera's position is translated and rotated. These shaders are slightly modified from the OpenGL 4.3 guide, so I'm not sure why they aren't working as intended. Can anyone explain what's wrong and what I can do to get them to work?
Vertex Shader:
varying vec3 color, normal;
varying vec4 vertexPos;

void main() {
    color = vec3(0.4);
    normal = normalize(gl_NormalMatrix * gl_Normal);
    vertexPos = gl_ModelViewMatrix * gl_Vertex;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Fragment Shader:
varying vec3 color, normal;
varying vec4 vertexPos;

void main() {
    vec3 lightPos = vec3(4.0);
    vec3 lightColor = vec3(0.75);
    vec3 lightDir = lightPos - vertexPos.xyz;
    float lightDist = length(lightDir);
    float attenuation = 1.0 / (3.0 + 0.007 * lightDist + 0.000008 * lightDist * lightDist);
    float diffuse = max(0.0, dot(normal, lightDir));

    vec3 ambient = vec3(0.4, 0.4, 0.4);
    vec3 finalColor = color * (ambient + lightColor * diffuse * attenuation);
    gl_FragColor = vec4(finalColor, 1.0);
}
If anyone's interested, I ended up finding the solution. Removing the calls to gl_NormalMatrix and gl_ModelViewMatrix solved the problem.
The critical value here, lightPos, is being compared against vertexPos, which you have expressed in view (eye) space (this happened because its original world-space form was multiplied by modelView). View space stays with the camera, not with anything in the 3D world. So to have a non-moving light source with respect to some absolute point in world space (like [4.0, 4.0, 4.0]), you could just leave your object's points in that space, as you found out.
But getting rid of modelview is not a good idea, since the whole point of the model matrix is to place your objects where they belong (so you can re-use your vertex arrays with changes only to the model matrix, instead of burdening them with specifying every single shape's vertex positions from scratch).
A better way is to perform the same transformation on both vertexPos AND lightPos. This way you treat lightPos as a quantity originally given in world space, but then do the comparison in view space. (Strictly speaking, only the view part of the modelview matrix should be applied to lightPos, since the light is not attached to any model.) The math that derives light intensities from normals works out the same in either space, and you'll get a correct-looking light source.
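A minimal sketch of that idea (not the original code) is to compute the light's view-space position on the CPU, for example as viewMatrix * worldLightPos, and pass it in as a hypothetical uniform; the fragment shader then compares two positions that live in the same space:

uniform vec3 lightPosView;   // hypothetical uniform: light position already in view space
varying vec3 color, normal;
varying vec4 vertexPos;      // gl_ModelViewMatrix * gl_Vertex, from the vertex shader

void main() {
    vec3 lightDir = lightPosView - vertexPos.xyz;
    float lightDist = length(lightDir);
    lightDir /= lightDist;   // normalize the direction before the dot product
    float diffuse = max(0.0, dot(normalize(normal), lightDir));
    float attenuation = 1.0 / (3.0 + 0.007 * lightDist + 0.000008 * lightDist * lightDist);
    vec3 finalColor = color * (vec3(0.4) + vec3(0.75) * diffuse * attenuation);
    gl_FragColor = vec4(finalColor, 1.0);
}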

Generic GLSL Lighting Shader

Per-pixel lighting is a common need in many OpenGL applications, as the standard (per-vertex) OpenGL lighting has very poor quality.
I want to use a GLSL program to get per-pixel lighting in my OpenGL program instead of per-vertex. Just diffuse lighting, but with fog, texture and texture alpha at least.
I started with this shader:
texture.vert:
varying vec3 position;
varying vec3 normal;

void main(void)
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_FrontColor = gl_Color;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    normal = normalize(gl_NormalMatrix * gl_Normal);
    position = vec3(gl_ModelViewMatrix * gl_Vertex);
}
texture.frag:
uniform sampler2D Texture0;
uniform int ActiveLights;

varying vec3 position;
varying vec3 normal;

void main(void)
{
    vec3 lightDir;
    float attenFactor;
    vec3 eyeDir = normalize(-position); // camera is at (0,0,0) in ModelView space
    vec4 lightAmbientDiffuse = vec4(0.0, 0.0, 0.0, 0.0);
    vec4 lightSpecular = vec4(0.0, 0.0, 0.0, 0.0);

    // iterate all lights
    for (int i = 0; i < ActiveLights; ++i)
    {
        // attenuation and light direction
        if (gl_LightSource[i].position.w != 0.0)
        {
            // positional light source
            float dist = distance(gl_LightSource[i].position.xyz, position);
            attenFactor = 1.0 / ( gl_LightSource[i].constantAttenuation +
                                  gl_LightSource[i].linearAttenuation * dist +
                                  gl_LightSource[i].quadraticAttenuation * dist * dist );
            lightDir = normalize(gl_LightSource[i].position.xyz - position);
        }
        else
        {
            // directional light source
            attenFactor = 1.0;
            lightDir = gl_LightSource[i].position.xyz;
        }

        // ambient + diffuse
        lightAmbientDiffuse += gl_FrontLightProduct[i].ambient * attenFactor;
        lightAmbientDiffuse += gl_FrontLightProduct[i].diffuse * max(dot(normal, lightDir), 0.0) * attenFactor;

        // specular
        vec3 r = normalize(reflect(-lightDir, normal));
        lightSpecular += gl_FrontLightProduct[i].specular *
                         pow(max(dot(r, eyeDir), 0.0), gl_FrontMaterial.shininess) *
                         attenFactor;
    }

    // compute final color
    vec4 texColor = gl_Color * texture2D(Texture0, gl_TexCoord[0].xy);
    gl_FragColor = texColor * (gl_FrontLightModelProduct.sceneColor + lightAmbientDiffuse) + lightSpecular;

    float fog = (gl_Fog.end - gl_FogFragCoord) * gl_Fog.scale; // Intensität berechnen (compute the fog intensity)
    fog = clamp(fog, 0.0, 1.0);                                // Beschneiden (clamp it)
    gl_FragColor = mix(gl_Fog.color, gl_FragColor, fog);       // Nebelfarbe einmischen (blend in the fog color)
}
The comments are German because the code was posted on a German site, sorry.
But all this shader does is make everything very dark. No lighting effects at all, yet the shader code compiles. If I only use GL_LIGHT0 in the fragment shader it seems to work, but only reasonably for camera-facing polygons, and my floor polygon is just extremely dark. Also, quads with RGBA textures show no sign of transparency.
I use standard glRotate/Translate for the Modelview matrix, and glVertex/Normal for my polygons. OpenGL lighting works fine apart from the fact that it looks ugly on very large surfaces. I triple checked my normals, they are fine.
Is there something wrong in the above code?
OR
Tell me why there is no generic lighting shader for this actual task (a point light with distance falloff: a candle, if you will) - shouldn't there be just one correct way to do this? I don't want bump/normal/parallax/toon/blur/whatever effects. I just want my lighting to perform better with larger polygons.
All the tutorials I found are only useful for lighting a single object when the camera is at (0,0,0), facing orthogonally to the object. The above is the only one I found that at least looks like the thing I want to do.
I would strongly suggest you read this article to see how standard ADS lighting is done in GLSL. That is GL 4.0, but it is not a problem to adjust it to your version:
Also, you operate in view (camera) space, so DON'T negate the eye vector:
vec3 eyeDir = normalize(-position);
I had pretty similar issues to yours because I also negated the eye vector, forgetting that it is already transformed into view space. Your diffuse and specular calculations seem to be wrong too in the current scenario. In your place I wouldn't use data from the fixed pipeline at all; otherwise, what is the point of doing it in a shader?
Here is a method to calculate diffuse and specular for per-fragment ADS point lighting:
void ads(int lightIndex, out vec3 ambAndDiff, out vec3 spec)
{
    vec3 s = normalize(vec3(lights[lightIndex].Position - posOut));
    vec3 v = normalize(posOut.xyz);
    vec3 n = normalize(normOut);
    vec3 h = normalize(v + s); // half vector (read up on the web on what it is)
    vec3 diffuse = (Ka + lights[lightIndex].Ld) * Kd * max(0.0, dot(n, s));
    spec = Ks * pow(max(0.0, dot(n, h)), Shininess);
    ambAndDiff = diffuse;
    /// Ka - material ambient factor
    /// Kd - material diffuse factor
    /// Ks - material specular factor
    /// lights[lightIndex].Ld - light's diffuse factor; you may also add La and Ls if you want even more control of the light shading.
}
Also, I wouldn't suggest using the attenuation equation you have here; it is hard to control. If you want to add light-radius-based attenuation, there is this nice blog post:
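One radius-based falloff that is easy to control (just an illustration of the general idea, not necessarily the formula from the linked post) fades the light out smoothly towards a chosen radius:

// dist: distance from the fragment to the light
// lightRadius: distance at which the light should fully vanish
float x = min(dist / lightRadius, 1.0);
float attenFactor = (1.0 - x * x) * (1.0 - x * x);   // 1.0 at the light, 0.0 at lightRadius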