Per-pixel lighting is a common concern in OpenGL applications, because the standard fixed-function OpenGL lighting is per-vertex and of very poor quality.
I want to use a GLSL program to get per-pixel lighting in my OpenGL program instead of per-vertex lighting. Just diffuse lighting, but with fog, texture and texture alpha at least.
I started with this shader:
texture.vert:
varying vec3 position;
varying vec3 normal;

void main(void)
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_FrontColor = gl_Color;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    normal = normalize(gl_NormalMatrix * gl_Normal);
    position = vec3(gl_ModelViewMatrix * gl_Vertex);
}
texture.frag:
uniform sampler2D Texture0;
uniform int ActiveLights;
varying vec3 position;
varying vec3 normal;

void main(void)
{
    vec3 lightDir;
    float attenFactor;
    vec3 eyeDir = normalize(-position); // camera is at (0,0,0) in ModelView space
    vec4 lightAmbientDiffuse = vec4(0.0, 0.0, 0.0, 0.0);
    vec4 lightSpecular = vec4(0.0, 0.0, 0.0, 0.0);

    // iterate over all lights
    for (int i = 0; i < ActiveLights; ++i)
    {
        // attenuation and light direction
        if (gl_LightSource[i].position.w != 0.0)
        {
            // positional light source
            float dist = distance(gl_LightSource[i].position.xyz, position);
            attenFactor = 1.0 / (gl_LightSource[i].constantAttenuation +
                                 gl_LightSource[i].linearAttenuation * dist +
                                 gl_LightSource[i].quadraticAttenuation * dist * dist);
            lightDir = normalize(gl_LightSource[i].position.xyz - position);
        }
        else
        {
            // directional light source
            attenFactor = 1.0;
            lightDir = gl_LightSource[i].position.xyz;
        }
        // ambient + diffuse
        lightAmbientDiffuse += gl_FrontLightProduct[i].ambient * attenFactor;
        lightAmbientDiffuse += gl_FrontLightProduct[i].diffuse * max(dot(normal, lightDir), 0.0) * attenFactor;
        // specular
        vec3 r = normalize(reflect(-lightDir, normal));
        lightSpecular += gl_FrontLightProduct[i].specular *
                         pow(max(dot(r, eyeDir), 0.0), gl_FrontMaterial.shininess) *
                         attenFactor;
    }
    // compute final color
    vec4 texColor = gl_Color * texture2D(Texture0, gl_TexCoord[0].xy);
    gl_FragColor = texColor * (gl_FrontLightModelProduct.sceneColor + lightAmbientDiffuse) + lightSpecular;

    float fog = (gl_Fog.end - gl_FogFragCoord) * gl_Fog.scale; // compute the fog intensity
    fog = clamp(fog, 0.0, 1.0);                                // clamp it
    gl_FragColor = mix(gl_Fog.color, gl_FragColor, fog);       // blend in the fog color
}
(The code was originally posted on a German site; the comments have been translated.)
But all this shader does is make everything very dark. There are no lighting effects at all, even though the shaders compile. If I only use GL_LIGHT0 in the fragment shader, it seems to work, but only reasonably for camera-facing polygons, and my floor polygon is extremely dark. Also, quads with RGBA textures show no sign of transparency.
I use standard glRotate/glTranslate for the modelview matrix, and glVertex/glNormal for my polygons. OpenGL lighting works fine, apart from the fact that it looks ugly on very large surfaces. I triple-checked my normals; they are fine.
Is there something wrong in the above code?
OR
Tell me why there is no generic lighting shader for this common task (a point light with distance falloff: a candle, if you will). Shouldn't there be exactly one correct way to do this? I don't want bump/normal/parallax/toon/blur/whatever effects; I just want my lighting to perform better on larger polygons.
All the tutorials I found are only useful for lighting a single object when the camera is at (0,0,0), facing the object orthogonally. The above is the only one I found that at least looks like what I want to do.
I would strongly suggest you read this article to see how standard ADS lighting is done in GLSL. It targets GL 4.0, but it is not a problem to adjust it to your version.
Also, you operate in view (camera) space, so DON'T negate the eye vector:
vec3 eyeDir = normalize(-position);
I had pretty similar issues to yours because I also negated the eye vector, forgetting that it is transformed into view space. Your diffuse and specular calculations seem to be wrong too in the current scenario. In your place I wouldn't use data from the fixed pipeline at all; otherwise, what is the point of doing it in a shader?
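If you drop the fixed-function state, you have to supply the light and material data through your own uniforms. A minimal sketch of the declarations that the ads() function below assumes (the struct layout, array size and comments are my own illustration, in the style of the GL 4.0 material mentioned above):

struct LightInfo
{
    vec4 Position; // light position in view space
    vec3 Ld;       // light diffuse intensity
};
uniform LightInfo lights[4];
uniform vec3 Ka;        // material ambient reflectivity
uniform vec3 Kd;        // material diffuse reflectivity
uniform vec3 Ks;        // material specular reflectivity
uniform float Shininess;
in vec4 posOut;         // fragment position in view space
in vec3 normOut;        // fragment normal in view space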
Here is a method to calculate diffuse and specular in per-fragment ADS point lighting:
void ads(int lightIndex, out vec3 ambAndDiff, out vec3 spec)
{
    vec3 s = normalize(vec3(lights[lightIndex].Position - posOut)); // direction towards the light
    vec3 v = normalize(posOut.xyz);
    vec3 n = normalize(normOut);
    vec3 h = normalize(v + s); // half vector (read up on the web on what it is)
    vec3 diffuse = (Ka + lights[lightIndex].Ld) * Kd * max(0.0, dot(n, s)); // diffuse uses the light direction, not the view
    spec = Ks * pow(max(0.0, dot(n, h)), Shininess);
    ambAndDiff = diffuse;
    // Ka - material ambient factor
    // Kd - material diffuse factor
    // Ks - material specular factor
    // lights[lightIndex].Ld - light's diffuse factor; you may also add La and Ls
    // if you want to have even more control over the light shading.
}
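For completeness, a hypothetical main() that sums the contributions of all lights could look like this (LightCount, Texture0, texCoordOut and FragColor are assumed names, not from the original post):

uniform int LightCount;     // assumed: number of active lights
uniform sampler2D Texture0; // assumed: diffuse texture
in vec2 texCoordOut;        // assumed: interpolated texture coordinates
out vec4 FragColor;

void main()
{
    vec3 ambAndDiff = vec3(0.0);
    vec3 spec = vec3(0.0);
    vec3 a, s;
    for (int i = 0; i < LightCount; ++i)
    {
        ads(i, a, s); // contribution of light i
        ambAndDiff += a;
        spec += s;
    }
    vec4 texColor = texture(Texture0, texCoordOut);
    // Add specular after the texture multiply so highlights keep the light's color.
    FragColor = vec4(texColor.rgb * ambAndDiff + spec, texColor.a);
}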
Also, I wouldn't suggest using the attenuation equation you have here; it is hard to control. If you want to add light-radius-based attenuation, there is this nice blog post:
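One common radius-based variant (not necessarily the exact formula from the post; attenuate and lightRadius are illustrative names) fades smoothly to zero at a chosen radius:

float attenuate(float dist, float lightRadius)
{
    // Full intensity at the light, exactly zero at lightRadius and beyond.
    float x = clamp(1.0 - (dist * dist) / (lightRadius * lightRadius), 0.0, 1.0);
    return x * x; // squaring gives a softer tail
}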
Initial situation
I want to visualize simulation data in OpenGL.
My data consists of particle positions (x, y, z), where each particle has some properties (like density, temperature, ...) which will be used for coloring. Those (SPH) particles (100k to several million), grouped together, actually represent planets, in case you wonder. I want to render those particles as small 3D spheres and add ambient, diffuse and specular lighting.
Status quo and questions
In MY case: in which coordinate frame should I do the lighting calculations? Which way is the "best" to pass the various components through the pipeline?
I saw that it is common to do it in view space, which is also very intuitive. However: the normals at the different fragment positions are calculated in the fragment shader in clip-space coordinates (see the appended fragment shader). Can I actually convert them "back" into view space to do the lighting calculations there for all the fragments? Would there be any advantage compared to doing it in clip space?
It would be easier to get the normals in view space if I used meshes for each sphere, but I think that with several million particles this would decrease performance drastically, so better to do it with a sphere intersection, would you agree?
PS: I don't need a model matrix since all the particles are already in place.
//VERTEX SHADER
#version 330 core

layout (location = 0) in vec3 position;
layout (location = 2) in float density;

uniform float radius;
uniform vec3 lightPos;
uniform vec3 viewPos;

out vec4 lightDir;
out vec4 viewDir;
out vec4 viewPosition;
out vec4 posClip;
out float vertexColor;

// transformation matrices
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    lightDir = projection * view * vec4(lightPos - position, 1.0f);
    viewDir = projection * view * vec4(viewPos - position, 1.0f);
    viewPosition = projection * view * vec4(lightPos, 1.0f);
    posClip = projection * view * vec4(position, 1.0f);
    gl_Position = posClip;
    gl_PointSize = radius;
    vertexColor = density;
}
I know that perspective division happens for the gl_Position variable; does that actually happen to ALL vec4s which are passed from the vertex to the fragment shader? If not, maybe the calculations in the fragment shader are wrong?
And here is the fragment shader, where the normal and the diffuse/specular lighting calculations are done in clip space:
//FRAGMENT SHADER
#version 330 core

in float vertexColor;
in vec4 lightDir;
in vec4 viewDir;
in vec4 posClip;
in vec4 viewPosition;

uniform vec3 lightColor;

vec4 colormap(float x); // returns vec4(r, g, b, a)

out vec4 vFragColor;

void main(void)
{
    // AMBIENT LIGHT
    float ambientStrength = 0.0;
    vec3 ambient = ambientStrength * lightColor;

    // Normal calculation done in clip space (first from texture (gl_PointCoord 0 to 1) coords to NDC (-1 to 1))
    vec3 normal;
    normal.xy = gl_PointCoord * 2.0 - vec2(1.0); // transform from 0->1 point primitive coords to NDC -1->1
    float mag = dot(normal.xy, normal.xy);       // squared length of the xy part
    if (mag > 1.0) // discard fragments outside the sphere
        discard;
    normal.z = sqrt(1.0 - mag); // because x^2 + y^2 + z^2 = 1

    // DIFFUSE LIGHT
    float diff = max(0.0, dot(vec3(lightDir), normal));
    vec3 diffuse = diff * lightColor;

    // SPECULAR LIGHT
    float specularStrength = 0.1;
    vec3 viewDir = normalize(vec3(viewPosition) - vec3(posClip));
    vec3 reflectDir = reflect(-vec3(lightDir), normal);
    float shininess = 64;
    float spec = pow(max(dot(vec3(viewDir), vec3(reflectDir)), 0.0), shininess);
    vec3 specular = specularStrength * spec * lightColor;

    vFragColor = colormap(vertexColor / 8) * vec4(ambient + diffuse + specular, 1);
}
Now this actually "kind of" works, but I have the feeling that the sides of the sphere which do NOT face the light source are also being illuminated, which shouldn't happen. How can I fix this?
Some weird effect: at this moment the light source is actually BEHIND the left planet (it just peeks out a little bit at the top left), but there are still diffuse and specular effects going on. That side should actually be pretty dark! =(
Also, at this moment I get a glError 1282 in the fragment shader, and I don't know where it comes from, since the shader program compiles and runs. Any suggestions? :)
The things that you are drawing aren't actually spheres; they just look like spheres from afar. This is absolutely fine if you are OK with that. If you need geometrically correct spheres (with correct sizes and a correct projection), you need to do proper raycasting. This seems to be a comprehensive guide on the topic.
1. What coordinate system?
In the end, it is up to you. The coordinate system just needs to fulfill some requirements. It must be angle-preserving (because lighting is all about angles). And if you need distance-based attenuation, it should also be distance-preserving. The world and the view coordinate systems usually fulfill these requirements. Clip space is not suited for lighting calculations as neither angles nor distances are preserved. Furthermore, gl_PointCoord is in none of the usual coordinate systems. It is its own coordinate system and you should only use it together with other coordinate systems if you know their relation.
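For the point-sprite setup in the question, one way to follow this advice is to stay in view space. A rough sketch (centerView, lightView and particleRadiusView are assumed inputs for the view-space particle center, light position and radius; since the sprite is screen-aligned, the reconstructed normal is, ignoring perspective distortion, already expressed along view-space axes):

vec3 n;
n.xy = gl_PointCoord * 2.0 - 1.0;
n.y = -n.y;                        // gl_PointCoord's y axis points down
float r2 = dot(n.xy, n.xy);
if (r2 > 1.0)
    discard;                       // outside the sphere silhouette
n.z = sqrt(1.0 - r2);              // unit sphere normal, roughly view-aligned
vec3 fragView = centerView + n * particleRadiusView; // approximate surface point
vec3 L = normalize(lightView - fragView);
float diff = max(dot(n, L), 0.0);  // diffuse term, everything in one space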
2. Meshes or what?
Meshes are absolutely not suited to render spheres. As mentioned above, raycasting or some screen-space approximation are better choices. Here is an example shader that I used in my projects:
#version 330

out vec4 result;

in fData
{
    vec4 toPixel; //fragment coordinate in particle coordinates
    vec4 cam;     //camera position in particle coordinates
    vec4 color;   //sphere color
    float radius; //sphere radius
} frag;

uniform mat4 p; //projection matrix

void main(void)
{
    vec3 v = frag.toPixel.xyz - frag.cam.xyz;
    vec3 e = frag.cam.xyz;
    float ev = dot(e, v);
    float vv = dot(v, v);
    float ee = dot(e, e);
    float rr = frag.radius * frag.radius;

    float radicand = ev * ev - vv * (ee - rr);
    if (radicand < 0.0)
        discard;

    float rt = sqrt(radicand);
    float lambda = max(0.0, (-ev - rt) / vv); //first intersection on the ray
    float lambda2 = (-ev + rt) / vv;          //second intersection on the ray
    if (lambda2 < lambda) //if the first intersection is behind the camera
        discard;

    vec3 hit = lambda * v;                            //intersection point
    vec3 normal = (frag.cam.xyz + hit) / frag.radius;
    vec4 proj = p * vec4(hit, 1.0);                   //intersection point in clip space
    gl_FragDepth = ((gl_DepthRange.diff * proj.z / proj.w) + gl_DepthRange.near + gl_DepthRange.far) / 2.0;

    vec3 vNormalized = -normalize(v);
    float nDotL = dot(vNormalized, normal);
    vec3 c = frag.color.rgb * nDotL + vec3(0.5, 0.5, 0.5) * pow(nDotL, 120.0);
    result = vec4(c, frag.color.a);
}
3. Perspective division
Perspective division is not applied to your attributes. The GPU does perspective division on the data that you pass via gl_Position on the way to transforming them to screen space. But you will never actually see this perspective-divided position unless you do it yourself.
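If you ever do need the perspective-divided (NDC) value of an interpolated clip-space attribute such as your posClip, you have to divide yourself in the fragment shader:

vec3 ndc = posClip.xyz / posClip.w; // each component now lies in [-1, 1]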
4. Light in the dark
This might be the result of you mixing different coordinate systems or doing lighting calculations in clip space. Btw, the specular part is usually not multiplied by the material color. This is light that gets reflected directly at the surface. It does not penetrate the surface (which would absorb some colors depending on the material). That's why those highlights are usually white (or whatever light color you have), even on black objects.
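In code, the usual combination therefore looks something like this (a sketch; the three light terms and surfaceColor are assumed to be precomputed vec3 values):

vec3 shaded = surfaceColor * (ambientTerm + diffuseTerm) + specularTerm; // specular is not tinted by the surface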
First of all, I must apologize for posting yet another question on this subject (there are a lot already!). I did search for other related questions and answers, but unfortunately none of them showed me the solution. Now I'm desperate! :D
It is worth mentioning that the code posted below gives a satisfying 'bumpy' effect. It is the scene lighting that seems to be wrong.
The scene is dead simple: a cube in the center, and a light source rotating around it (parallel to the ground) and above.
My approach is to start from my basic light shader, which gives me adequate outputs (or so I think!). The first step is to modify it to do the calculations in tangent space, then use the normal extracted from a texture.
I tried to comment the code nicely, but in short I have two questions:
1) Doing only basic lighting (no normal mapping), I expect the scene to look exactly the same, with or without transforming my vectors into tangent space with the TBN matrix. Am I wrong?
2) Why do I get incorrect lighting?
A couple of screenshots to give you an idea (EDITED): following LJ's comment, I am no longer summing normals and tangents per vertex/face. Interestingly, it highlights the issue (see on the capture, I have marked how the light moves).
Basically it is as if the cube were rotated 90 degrees to the left, or as if the light were turning vertically instead of horizontally.
Result with normal mapping:
Version with simple light:
Vertex shader:
// Information about the light.
// Here we care essentially about light.Position, which
// is set to be something like vec3(cos(x)*9, 5, sin(x)*9)
uniform Light_t Light;

uniform mat4 W; // The model transformation matrix
uniform mat4 V; // The camera transformation matrix
uniform mat4 P; // The projection matrix

in vec3 VS_Position;
in vec4 VS_Color;
in vec2 VS_TexCoord;
in vec3 VS_Normal;
in vec3 VS_Tangent;

out vec3 FS_Vertex;
out vec4 FS_Color;
out vec2 FS_TexCoord;
out vec3 FS_LightPos;
out vec3 FS_ViewPos;
out vec3 FS_Normal;

// This method calculates the TBN matrix:
// I'm sure it is not optimized vertex shader code
// to have this separate method, but never mind for now :)
mat3 getTangentMatrix()
{
    // Note: here I must say I am a bit confused: do I need to transform
    // with 'normalMatrix'? In practice, it seems to make no difference...
    mat3 normalMatrix = transpose(inverse(mat3(W)));
    vec3 norm = normalize(normalMatrix * VS_Normal);
    vec3 tang = normalize(normalMatrix * VS_Tangent);
    vec3 btan = normalize(normalMatrix * cross(VS_Normal, VS_Tangent));
    tang = normalize(tang - dot(tang, norm) * norm);
    return transpose(mat3(tang, btan, norm));
}

void main()
{
    // Set gl_Position and pass color + texcoords to the fragment shader
    gl_Position = (P * V * W) * vec4(VS_Position, 1.0);
    FS_Color = VS_Color;
    FS_TexCoord = VS_TexCoord;

    // Now here we start:
    // This is where, supposedly, multiplying with the TBN should not
    // change anything in the output, as long as I apply the transformation
    // to all of them, or none.
    // Typically, removing the 'TBN *' everywhere (and not using the normal
    // texture later in the fragment shader) is exactly the code I use for
    // my basic light shader.
    mat3 TBN = getTangentMatrix();
    FS_Vertex = TBN * (W * vec4(VS_Position, 1)).xyz;
    FS_LightPos = TBN * Light.Position;
    FS_ViewPos = TBN * inverse(V)[3].xyz;

    // This line is actually not needed when using the normal map:
    // I keep the FS_Normal variable for comparison purposes,
    // when I want to switch to my basic light shader effect.
    // (see later in the fragment shader)
    FS_Normal = TBN * normalize(transpose(inverse(mat3(W))) * VS_Normal);
}
And the fragment shader:
struct Textures_t
{
    int SamplersCount;
    sampler2D Samplers[4];
};

struct Light_t
{
    int Active;
    float Ambient;
    float Power;
    vec3 Position;
    vec4 Color;
};

uniform mat4 W;
uniform mat4 V;
uniform Textures_t Textures;
uniform Light_t Light;

in vec3 FS_Vertex;
in vec4 FS_Color;
in vec2 FS_TexCoord;
in vec3 FS_LightPos;
in vec3 FS_ViewPos;
in vec3 FS_Normal;

out vec4 frag_Output;

vec4 getPixelColor()
{
    return Textures.SamplersCount >= 1
        ? texture2D(Textures.Samplers[0], FS_TexCoord)
        : FS_Color;
}

vec3 getTextureNormal()
{
    // FYI: the normal texture is always at index 1
    vec3 bump = texture(Textures.Samplers[1], FS_TexCoord).xyz;
    bump = 2.0 * bump - vec3(1.0, 1.0, 1.0);
    return normalize(bump);
}

vec4 getLightColor()
{
    // This is the one line that changes between my basic light shader
    // and the normal mapping one:
    // - If I don't do 'TBN *' earlier and use FS_Normal here,
    //   the lighting seems fine (see second screenshot)
    // - If I do multiply by TBN (including on FS_Normal), I would expect
    //   the same result as without multiplying ==> not the case: it looks
    //   very similar to the result with normal mapping
    //   (just without the bumpy effect of course)
    // - If I use the normal texture (along with TBN of course), then I get
    //   the result you see in the first screenshot.
    vec3 N = getTextureNormal(); // Instead of 'normalize(FS_Normal);'

    // Everything from here on is the same as my basic light shader
    vec3 L = normalize(FS_LightPos - FS_Vertex);
    vec3 E = normalize(FS_ViewPos - FS_Vertex);
    vec3 R = normalize(reflect(-L, N));

    // Ambient color: light color times ambient factor
    vec4 ambient = Light.Color * Light.Ambient;

    // Diffuse factor: dot product of the normal and light vectors
    // Diffuse color: light color times the diffuse factor
    float dfactor = max(dot(N, L), 0.0);
    vec4 diffuse = clamp(Light.Color * dfactor, 0.0, 1.0);

    // Specular factor: dot product of the reflected and camera vectors
    // Note: applies only if the diffuse factor is greater than zero
    float sfactor = 0.0;
    if (dfactor > 0.0)
    {
        sfactor = pow(max(dot(R, E), 0.0), 8.0);
    }

    // Specular color: light color times the specular factor
    vec4 specular = clamp(Light.Color * sfactor, 0.0, 1.0);

    // Light attenuation: square of the distance moderated by the light's power factor
    float atten = 1.0 + pow(length(FS_LightPos - FS_Vertex), 2.0) / Light.Power;

    // The fragment color is a factor of the pixel and light colors:
    // Note: attenuation only applies to the diffuse and specular components
    return getPixelColor() * (ambient + (diffuse + specular) / atten);
}

void main()
{
    frag_Output = Light.Active == 1
        ? getLightColor()
        : getPixelColor();
}
That's it! I hope you have enough information and of course, your help will be greatly appreciated! :) Take care.
I am experiencing a very similar problem, and I cannot explain why the lighting doesn't work right, but I can answer your first question and at the very least explain how I somehow got lighting working acceptably (though your problem may not necessarily be the same as mine).
Firstly, in theory, if your tangents and bitangents are calculated correctly, then you should get exactly the same lighting result when doing the calculation in tangent space with a tangent-space normal of [0, 0, 1].
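To see why, note that an orthonormal TBN maps the surface normal to [0, 0, 1] and preserves dot products, so both paths below should produce the same diffuse factor (a sketch; N_world, L_world and TBN are illustrative names):

// World-space path:
float d1 = max(dot(normalize(N_world), normalize(L_world)), 0.0);
// Tangent-space path: TBN * N_world == vec3(0, 0, 1) for an orthonormal TBN.
vec3 L_tangent = TBN * normalize(L_world);
float d2 = max(dot(vec3(0.0, 0.0, 1.0), L_tangent), 0.0); // d2 == d1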
Secondly, while it is common knowledge that you should transform your normals from model space to camera space by multiplying by the inverse transpose of the model-view matrix, as explained in this tutorial, I found that the problem of the lighting being transformed wrongly can be solved by transforming the normal and tangent by the model-view matrix rather than by its inverse transpose, i.e. using normalMatrix = mat3(W); instead of normalMatrix = transpose(inverse(mat3(W)));.
In my case this did "fix" the problems with the light, but I don't know why it did, and I make no guarantee that it does not introduce other problems with the shading (in fact I assume that it does).
I implemented a simple shader for the lighting; it kind of works, but the light seems to move when the camera rotates (and only when it rotates).
I'm experimenting with a spotlight; this is how it looks (it's the spot in the center):
If I now rotate the camera, the spot moves around; for example, here I looked down (I didn't move at all, just looked down) and it appeared at my feet:
I've looked it up and I've seen that it's a common mistake when mixing reference systems in the shader and/or when setting the light's position before moving the camera.
The thing is, I'm pretty sure I'm not doing these two things, but apparently I'm wrong; it's just that I can't find the bug.
Here's the shader:
Vertex Shader
varying vec3 vertexNormal;
varying vec3 lightDirection;

void main()
{
    vertexNormal = gl_NormalMatrix * gl_Normal;
    lightDirection = vec3(gl_LightSource[0].position.xyz - (gl_ModelViewMatrix * gl_Vertex).xyz);
    gl_Position = ftransform();
}
Fragment Shader
uniform vec3 ambient;
uniform vec3 diffuse;
uniform vec3 specular;
uniform float shininess;

varying vec3 vertexNormal;
varying vec3 lightDirection;

void main()
{
    vec3 color = vec3(0.0, 0.0, 0.0);
    vec3 lightDirNorm;
    vec3 eyeVector;
    vec3 half_vector;
    float diffuseFactor;
    float specularFactor;
    float attenuation;
    float lightDistance;

    vec3 normalDirection = normalize(vertexNormal);
    lightDirNorm = normalize(lightDirection);
    eyeVector = vec3(0.0, 0.0, 1.0);
    half_vector = normalize(lightDirNorm + eyeVector);

    diffuseFactor = max(0.0, dot(normalDirection, lightDirNorm));
    specularFactor = max(0.0, dot(normalDirection, half_vector));
    specularFactor = pow(specularFactor, shininess);

    color += ambient * gl_LightSource[0].ambient.rgb;
    color += diffuseFactor * diffuse * gl_LightSource[0].diffuse.rgb;
    color += specularFactor * specular * gl_LightSource[0].specular.rgb;

    lightDistance = length(lightDirection);
    float constantAttenuation = 1.0;
    float linearAttenuation = (0.02 / SCALE_FACTOR) * lightDistance;
    float quadraticAttenuation = (0.0 / SCALE_FACTOR) * lightDistance * lightDistance;
    attenuation = 1.0 / (constantAttenuation + linearAttenuation + quadraticAttenuation);

    // If it's a spotlight
    if (gl_LightSource[0].spotCutoff <= 90.0)
    {
        float spotEffect = dot(normalize(gl_LightSource[0].spotDirection), normalize(-lightDirection));
        if (spotEffect > gl_LightSource[0].spotCosCutoff)
        {
            spotEffect = pow(spotEffect, gl_LightSource[0].spotExponent);
            attenuation = spotEffect / (constantAttenuation + linearAttenuation + quadraticAttenuation);
        }
        else
            attenuation = 0.0;
    }

    // Multiply the color by the attenuation factor
    color = color * attenuation;
    gl_FragColor = vec4(color, 1.0);
}
Now, I can't show you the code where I render things, because it's a custom language which integrates OpenGL and is designed to create 3D applications (it wouldn't help to show it to you); but what I do is something like this:
SetupLights();
UpdateCamera();
RenderStuff();
Where:
SetupLights contains actual OpenGL calls that set up the lights and their positions;
UpdateCamera updates the camera's position using the built-in classes of the language; I don't have much power here;
RenderStuff calls the built-in functions of the language to draw the scene; I don't have much power here either.
So, either I'm doing something wrong in the shader or there's something in the language that "behind the scenes" breaks things.
Can you point me in the right direction?
You wrote:
"the light's position is already in world coordinates, and that is where I'm doing the computations"
However, since you're applying gl_ModelViewMatrix to your vertex and gl_NormalMatrix to your normal, those values are actually in view space, which might cause the moving light.
As an aside, your eye vector looks like it should be in view coordinates; however, view space is a right-handed coordinate system, so "forward" points along the negative z-axis. Also, your specular computation will likely be off, since you're using the same eye vector for all fragments; it should instead vary per fragment, pointing from the fragment's position towards the camera.
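A sketch of a per-fragment eye vector in view space (vertexPosView is an assumed varying carrying gl_ModelViewMatrix * gl_Vertex; lightDirNorm is from the posted shader):

vec3 eyeVector = normalize(-vertexPosView.xyz);         // fragment towards the camera at the origin
vec3 half_vector = normalize(lightDirNorm + eyeVector); // now varies per fragment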
I've been trying to make a basic static point light using shaders for an LWJGL game, but it appears as if the light moves as the camera's position is translated and rotated. These shaders are slightly modified from the OpenGL 4.3 guide, so I'm not sure why they aren't working as intended. Can anyone explain why, and what I can do to get them to work?
Vertex Shader:
varying vec3 color, normal;
varying vec4 vertexPos;

void main() {
    color = vec3(0.4);
    normal = normalize(gl_NormalMatrix * gl_Normal);
    vertexPos = gl_ModelViewMatrix * gl_Vertex;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Fragment Shader:
varying vec3 color, normal;
varying vec4 vertexPos;

void main() {
    vec3 lightPos = vec3(4.0);
    vec3 lightColor = vec3(0.75);
    vec3 lightDir = lightPos - vertexPos.xyz;
    float lightDist = length(lightDir);
    float attenuation = 1.0 / (3.0 + 0.007 * lightDist + 0.000008 * lightDist * lightDist);
    float diffuse = max(0.0, dot(normal, lightDir));
    vec3 ambient = vec3(0.4, 0.4, 0.4);
    vec3 finalColor = color * (ambient + lightColor * diffuse * attenuation);
    gl_FragColor = vec4(finalColor, 1.0);
}
If anyone's interested, I ended up finding the solution. Removing the calls to gl_NormalMatrix and gl_ModelViewMatrix solved the problem.
The critical computation here, lightPos - vertexPos, mixes spaces: vertexPos is expressed in eye (view) space (because its original world-space form was multiplied by the modelview matrix), while lightPos is a world-space constant. Eye space moves with the camera, not with anything in the 3D world. So to have a light source fixed at some absolute point in world space (like [4.0, 4.0, 4.0]), you could just leave your object's points in that space, as you found out.
But getting rid of the modelview matrix is not a good idea, since the whole point of the model matrix is to place your objects where they belong (so you can re-use your vertex arrays with changes only to the model matrix, instead of burdening them with specifying every single shape's vertex positions from scratch).
A better way is to perform the modelview multiplication on both vertexPos AND lightPos. This way you're treating lightPos as a quantity originally in world space, but doing the comparison in eye space. The math to get light intensities from normals works out the same in either space, and you'll get a correct-looking light source.
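A sketch of that in the compatibility-profile style of the question (note: gl_ModelViewMatrix is only the right transform for a world-space light if the model part is identity; with a separate view matrix, you would use that instead):

varying vec4 vertexPos;
varying vec3 lightPosEye;

void main() {
    vertexPos = gl_ModelViewMatrix * gl_Vertex;
    // Bring the world-space light into the same (eye) space as vertexPos,
    // so the subtraction in the fragment shader compares like with like.
    lightPosEye = (gl_ModelViewMatrix * vec4(4.0, 4.0, 4.0, 1.0)).xyz;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}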
I'm trying to implement Phong shading in GLSL but am having some issues with the specular component.
The green light is the specular component. The light (a point light) travels in a circle above the plane. The specular highlight always points inward toward the Y axis about which the light rotates, and fans out toward the diffuse reflection, as seen in the image. It doesn't appear to be affected at all by the positioning of the camera, and I'm not sure where I'm going wrong.
Vertex shader code:
#version 330 core

/*
 * Phong Shading with Point Light (Quadratic Attenuation)
 */

// Input vertex data
layout(location = 0) in vec3 vertexPosition_modelSpace;
layout(location = 1) in vec2 vertexUVs;
layout(location = 2) in vec3 vertexNormal_modelSpace;

// Output data; will be interpolated for each fragment
out vec2 uvCoords;
out vec3 vertexPosition_cameraSpace;
out vec3 vertexNormal_cameraSpace;

// Uniforms
uniform mat4 mvMatrix;
uniform mat4 mvpMatrix;
uniform mat3 normalTransformMatrix;

void main()
{
    vec3 normal = normalize(vertexNormal_modelSpace);
    // Set vertices in clip space
    gl_Position = mvpMatrix * vec4(vertexPosition_modelSpace, 1);
    // Set output for UVs
    uvCoords = vertexUVs;
    // Convert vertex and normal into eye space
    vertexPosition_cameraSpace = mat3(mvMatrix) * vertexPosition_modelSpace;
    vertexNormal_cameraSpace = normalize(normalTransformMatrix * normal);
}
Fragment Shader Code:
#version 330 core

in vec2 uvCoords;
in vec3 vertexPosition_cameraSpace;
in vec3 vertexNormal_cameraSpace;

// out
out vec4 fragColor;

// uniforms
uniform sampler2D diffuseTex;
uniform vec3 lightPosition_cameraSpace;

void main()
{
    const float materialAmbient = 0.025; // a touch of ambient
    const float materialDiffuse = 0.65;
    const float materialSpec = 0.35;
    const float lightPower = 2.0;
    const float specExponent = 2;

    //--------------Set colors and determine vectors needed for shading-----------------
    // Reflection colors - NOTE: diffuse and ambient reflections will use the texture color
    const vec3 colorSpec = vec3(0, 1, 0); // Green spec color
    vec3 diffuseColor = texture2D(diffuseTex, uvCoords).rgb; // Get color from the texture at this fragment
    const vec3 lightColor = vec3(1, 1, 1); // White light

    // Re-normalize normal vectors: after interpolation they may no longer be unit length
    vec3 normVertexNormal_cameraSpace = normalize(vertexNormal_cameraSpace);
    // Set camera vec
    vec3 viewVec_cameraSpace = normalize(-vertexPosition_cameraSpace); // Since it's view space, the camera is at the origin
    // Set light vec
    vec3 lightVec_cameraSpace = normalize(lightPosition_cameraSpace - vertexPosition_cameraSpace);
    // Set reflect vec
    vec3 reflectVec_cameraSpace = normalize(reflect(-lightVec_cameraSpace, normVertexNormal_cameraSpace)); // reflect() requires the incident vec, from light to vertex

    //----------------Find intensity of each component---------------------
    // Determine light intensity
    float distance = abs(length(lightPosition_cameraSpace - vertexPosition_cameraSpace));
    float lightAttenuation = 1.0 / ((distance > 0) ? (distance * distance) : 1); // Quadratic
    vec3 lightIntensity = lightPower * lightAttenuation * lightColor;

    // Determine ambient component
    vec3 ambientComp = materialAmbient * diffuseColor * lightIntensity;
    // Determine diffuse component
    float lightDotNormal = max(dot(lightVec_cameraSpace, normVertexNormal_cameraSpace), 0.0);
    vec3 diffuseComp = materialDiffuse * diffuseColor * lightDotNormal * lightIntensity;
    // Determine spec component
    vec3 specComp = vec3(0, 0, 0);
    if (lightDotNormal > 0.0)
    {
        float reflectDotView = max(dot(reflectVec_cameraSpace, viewVec_cameraSpace), 0.0);
        specComp = materialSpec * colorSpec * pow(reflectDotView, specExponent) * lightIntensity;
    }

    // Add ambient + diffuse + spec
    vec3 phongFragRGB = ambientComp + diffuseComp + specComp;

    //----------------------Putting it together-----------------------
    // Out frag color
    fragColor = vec4(phongFragRGB, 1);
}
Just noting that the normalTransformMatrix seen in the Vertex shader is the inverse-transpose of the model-view matrix.
I am computing the vector from the vertex position to the light, the vector to the camera, and the reflection vector, all in camera space. For the diffuse calculation I take the dot product of the light vector and the normal vector, and for the specular component I take the dot product of the reflection vector and the view vector. Perhaps there is some fundamental misunderstanding that I have of the algorithm?
I thought at first that the problem could be that I wasn't normalizing the normals entering the fragment shader after interpolation, but adding a line to normalize didn't affect the image. I'm not sure where to look.
I know that there a lot of phong shading questions on the site, but everyone seems to have a problem that is a bit different. If anyone can see where I am going wrong, please let me know. Any help is appreciated.
EDIT: Okay, it's working now! Just as jozxyqk suggested below, I needed to do a mat4*vec4 operation for my vertex position or lose the translation information. When I first made the change I was getting strange results, until I realized that I was making the same mistake in my OpenGL code for lightPosition_cameraSpace before passing it to the shader (the mistake being that I was casting the view matrix down to a mat3 for the calculation instead of setting the light position vector up as a vec4). Once I edited those lines the shader appears to work properly! Thanks for the help, jozxyqk!
I can see two parts which don't look right:
1. "vertexPosition_cameraSpace = mat3(mvMatrix) * vertexPosition_modelSpace" should be a mat4/vec4(x, y, z, 1) multiply, otherwise it ignores the translation part of the modelview matrix.
2. distance uses the light position relative to the camera and not the vertex. Use lightVec_cameraSpace instead. (edit: missed the duplicated calculation)
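For reference, the corrected line from point 1 would read something like:

vertexPosition_cameraSpace = (mvMatrix * vec4(vertexPosition_modelSpace, 1.0)).xyz; // mat4 * vec4 keeps the translation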