Elliptical gradient rotation in GLSL

I implemented a basic elliptical gradient in GLSL and it is working fine. However, I have failed at rotating the gradient. My code is below:
Vertex shader:
uniform mat4 camera;
uniform mat4 model;
in vec3 vert; // coordinates of vertex
in vec2 vertTexCoord; // pseudo texture coordinates, used for calculating relative fragment position
out vec2 fragTexCoord;
void main() {
    fragTexCoord = vertTexCoord; // pass to fragment shader
    gl_Position = camera * model * vec4(vert, 1); // apply transformations
}
Fragment shader:
uniform vec2 gradientCenter; // center of gradient
uniform vec2 gradientDimensions; // how far the gradient extends in the right and up directions, respectively
uniform vec2 gradientDirection; // rotation of gradient, not used... yet
in vec2 fragTexCoord;
out vec4 finalColor;
void main() {
    vec2 gradient = gradientCenter - fragTexCoord; // gradient itself
    gradient.x = gradient.x * (1.0 / gradientDimensions.x); // relative scale in the right direction, currently the X axis
    gradient.y = gradient.y * (1.0 / gradientDimensions.y); // relative scale in the up direction, currently the Y axis
    float distanceFromLight = length(gradient); // length determines output color
    finalColor = mix(vec4(1.0, 0.0, 0.0, 1.0), vec4(0.0, 0.0, 1.0, 1.0), distanceFromLight * 2.0); // mixing red and blue, placeholder colors
}
To better illustrate: the upper picture shows what I have working, and the lower picture shows my goal. How can I improve my code to allow rotating the elliptical gradient as shown in the lower picture?

I assume that gradientDirection is the normalized direction of the first principal axis. Then you can calculate the coordinates in the local system with the dot product:
vec2 secondaryPrincipal = vec2(gradientDirection.y, -gradientDirection.x);
vec2 gradient = gradientCenter - fragTexCoord; //gradient itself
vec2 localGradient = vec2(dot(gradient, gradientDirection) * (1.0 / gradientDimensions.x),
                          dot(gradient, secondaryPrincipal) * (1.0 / gradientDimensions.y));
float distanceFromLight = length(localGradient);
//...
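Putting it together, a complete fragment shader could look like this (a sketch; gradientDirection is assumed to be normalized, e.g. vec2(cos(a), sin(a)) for a rotation angle a):
uniform vec2 gradientCenter;     // center of gradient
uniform vec2 gradientDimensions; // extent along each principal axis
uniform vec2 gradientDirection;  // normalized first principal axis
in vec2 fragTexCoord;
out vec4 finalColor;
void main() {
    vec2 secondaryPrincipal = vec2(gradientDirection.y, -gradientDirection.x);
    vec2 gradient = gradientCenter - fragTexCoord;
    // project onto the rotated axes, then scale by the ellipse's extents
    vec2 localGradient = vec2(dot(gradient, gradientDirection) / gradientDimensions.x,
                              dot(gradient, secondaryPrincipal) / gradientDimensions.y);
    float distanceFromLight = length(localGradient);
    finalColor = mix(vec4(1.0, 0.0, 0.0, 1.0), vec4(0.0, 0.0, 1.0, 1.0), distanceFromLight * 2.0);
}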

Related

Problem drawing a circle based on GL_Points using OpenGL ES 3.0

Based on the accepted answer to the question Is there a way to draw a circle with the fragment shader at the position of a point from the vertex shader?, I am getting the following results when I try to draw 3 circles based on the points (0.0, -0.75, 0.0), (0.0, 0.0, 0.0) and (0.0, 0.75, 0.0). Both the window and the viewport are 418x418.
The question is: why is only the center point rendered properly, while the bottom circle is stretched and the top circle is shrunk, both along the Y axis?
Vertex Shader
precision highp float;
attribute vec4 vPosition;
varying vec2 pointPos;
uniform vec2 windowSize; // = (window-width, window-height)
void main()
{
    gl_Position = vPosition;
    gl_PointSize = 900.0;
    vec2 ndcPos = gl_Position.xy / gl_Position.w;
    pointPos = windowSize * (ndcPos * 0.5 + 0.5);
}
Fragment Shader
precision highp float;
varying vec2 pointPos;
uniform vec4 fColor;
const float threshold = 0.3;
const float aRadius = 10.0;
void main()
{
    float dist = distance(pointPos, gl_FragCoord.xy);
    if (dist > aRadius)
        discard;
    float d = dist / aRadius;
    vec3 color = mix(fColor.rgb, vec3(0.0), step(1.0 - threshold, d));
    gl_FragColor = vec4(color, 1.0);
}
I am struggling to understand why the top and bottom circles are not rendered properly, but I have not been able to figure it out yet.
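One way to narrow this down (a debugging sketch, not a fix) is to render the disagreement between the interpolated pointPos and gl_FragCoord directly; if windowSize matches the actual framebuffer size, the darkest spot should sit at each point's center:
// temporary fragment shader body for debugging
void main()
{
    vec2 delta = abs(pointPos - gl_FragCoord.xy);
    gl_FragColor = vec4(delta / 10.0, 0.0, 1.0); // red/green encode the X/Y error in ~10 px units
}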

GLSL: Fade 2D grid based on distance from camera

I am currently trying to draw a 2D grid on a single quad using only shaders. I am using SFML as the graphics library and sf::View to control the camera. So far I have been able to draw an anti-aliased, multi-level grid. The first level (blue) outlines a chunk and the second level (grey) outlines the tiles within a chunk.
I would now like to fade grid levels based on the distance from the camera. For example, the chunk grid should fade in as the camera zooms in. The same should be done for the tile grid after the chunk grid has been completely faded in.
I am not sure how this could be implemented as I am still new to OpenGL and GLSL. If anybody has any pointers on how this functionality can be implemented, please let me know.
Vertex Shader
#version 130
out vec2 texCoords;
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    texCoords = (gl_TextureMatrix[0] * gl_MultiTexCoord0).xy;
}
Fragment Shader
#version 130
uniform vec2 chunkSize = vec2(64.0, 64.0);
uniform vec2 tileSize = vec2(16.0, 16.0);
uniform vec3 chunkBorderColor = vec3(0.0, 0.0, 1.0);
uniform vec3 tileBorderColor = vec3(0.5, 0.5, 0.5);
uniform bool drawGrid = true;
in vec2 texCoords;
void main() {
    vec2 uv = texCoords.xy * chunkSize;
    vec3 color = vec3(1.0, 1.0, 1.0);
    if (drawGrid) {
        float aa = length(fwidth(uv));
        vec2 halfChunkSize = chunkSize / 2.0;
        vec2 halfTileSize = tileSize / 2.0;
        vec2 a = abs(mod(uv - halfChunkSize, chunkSize) - halfChunkSize);
        vec2 b = abs(mod(uv - halfTileSize, tileSize) - halfTileSize);
        color = mix(color, tileBorderColor, smoothstep(aa, 0.0, min(b.x, b.y)));
        color = mix(color, chunkBorderColor, smoothstep(aa, 0.0, min(a.x, a.y)));
    }
    gl_FragColor.rgb = color;
    gl_FragColor.a = 1.0;
}
You need to split the multiplication in your vertex shader into two parts:
// have a variable to be interpolated per fragment
out vec4 vertex_coordinate;
...
{
    // this will store the coordinates of the vertex
    // before it is projected (i.e. its "world" coordinates)
    vertex_coordinate = gl_ModelViewMatrix * gl_Vertex;
    // get your projected vertex position as before
    gl_Position = gl_ProjectionMatrix * vertex_coordinate;
    ...
}
Then in the fragment shader you change the color based on the world vertex coordinate and the camera position:
in vec4 vertex_coordinate;
// this has to be updated every time your camera changes position
uniform vec2 camera_world_position = vec2(64.0, 64.0);
...
{
    ...
    // calculate the distance from the fragment (in world coordinates) to the camera
    float fade_factor = length(camera_world_position - vertex_coordinate.xy);
    // make it 1 near the camera and 0 once it is more than 100 units away
    fade_factor = clamp(1.0 - fade_factor / 100.0, 0.0, 1.0);
    // scale your final color by this factor
    gl_FragColor.rgb = color * fade_factor;
    ...
}
A second way to do it is to use the projected coordinate's w component; I personally prefer to calculate the distance in world-space units. I did not test this code and it might have some trivial syntax errors, but if you understand the idea, you can apply it in any other way.
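For completeness, here is a minimal sketch of that second, w-based variant. It assumes a perspective projection, where clip-space w equals the view-space depth; with an orthographic camera such as sf::View, w is constant and this variant does not apply:
// vertex shader
out float view_depth;
...
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
view_depth = gl_Position.w; // for a perspective projection, this is the view-space depth

// fragment shader
in float view_depth;
...
// fade from 1 at the camera to 0 at 100 units, as above
float fade_factor = clamp(1.0 - view_depth / 100.0, 0.0, 1.0);
gl_FragColor.rgb = color * fade_factor;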

Incorrect tracing with SSLR (Screen Space Local Reflections)

While implementing SSLR, I ran into the problem of objects being displayed incorrectly: they are projected infinitely "downward" and do not show up in the mirror at all. The code and a screenshot are below.
Fragment SSLR shader:
#version 330 core
uniform sampler2D normalMap; // in view space
uniform sampler2D depthMap; // in view space
uniform sampler2D colorMap;
uniform sampler2D reflectionStrengthMap;
uniform mat4 projection;
uniform mat4 inv_projection;
in vec2 texCoord;
layout (location = 0) out vec4 fragColor;
vec3 calcViewPosition(in vec2 texCoord) {
    // Combine UV & depth into XY & Z (NDC)
    vec3 rawPosition = vec3(texCoord, texture(depthMap, texCoord).r);
    // Convert from (0, 1) range to (-1, 1)
    vec4 ScreenSpacePosition = vec4(rawPosition * 2 - 1, 1);
    // Undo the perspective transformation to bring it into view space
    vec4 ViewPosition = inv_projection * ScreenSpacePosition;
    // Perform the perspective divide and return
    return ViewPosition.xyz / ViewPosition.w;
}
vec2 rayCast(vec3 dir, inout vec3 hitCoord, out float dDepth) {
    dir *= 0.25f;
    for (int i = 0; i < 20; i++) {
        hitCoord += dir;
        vec4 projectedCoord = projection * vec4(hitCoord, 1.0);
        projectedCoord.xy /= projectedCoord.w;
        projectedCoord.xy = projectedCoord.xy * 0.5 + 0.5;
        float depth = calcViewPosition(projectedCoord.xy).z;
        dDepth = hitCoord.z - depth;
        if (dDepth < 0.0) return projectedCoord.xy;
    }
    return vec2(-1.0);
}
void main() {
    vec3 normal = texture(normalMap, texCoord).xyz * 2.0 - 1.0;
    vec3 viewPos = calcViewPosition(texCoord);
    // Reflection vector
    vec3 reflected = normalize(reflect(normalize(viewPos), normalize(normal)));
    // Ray cast
    vec3 hitPos = viewPos;
    float dDepth;
    float minRayStep = 0.1f;
    vec2 coords = rayCast(reflected * max(minRayStep, -viewPos.z), hitPos, dDepth);
    if (coords != vec2(-1.0))
        fragColor = mix(texture(colorMap, texCoord), texture(colorMap, coords), texture(reflectionStrengthMap, texCoord).r);
    else
        fragColor = texture(colorMap, texCoord);
}
Screenshot: (image omitted)
Also, the lamp is not reflected at all.
I will be grateful for any help.
UPDATE: (colorMap, normalMap, and depthMap screenshots omitted)
UPDATE: I solved the problem with the wrong reflection, but there are still problems.
I solved it as follows: ViewPosition.y *= -1
Now, as you can see in the screenshot, the lower parts of the objects are not reflected for some reason.
The question still remains open.
I'm struggling to get a fine SSR too. I found two things that could help.
To get the view-space normals you have to keep only the rotation of the camera and remove the translation, because if you don't, the normals get stretched in the direction opposite to the camera movement and no longer point the right way, even if you normalize them again. For a column-major mat4 you can do it like this:
mat4 viewNoTranslation = view;
viewNoTranslation[3] = vec4(0.0, 0.0, 0.0, 1.0);
The depth sampled from the depth image is non-linear, and if you linearize it you will indeed get values from 0 to 1, but they will not be accurate enough for the precision needed. I tried getting the depth value straight from the vertex shader:
gl_Position = ubo.projection * ubo.view * ubo.model * inPos;
depth = gl_Position.z;
I don't know if it is right, but the depth is now more accurate.
If you make progress, please update :)
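For reference, a common way to reconstruct a linear view-space distance from a non-linear depth-buffer value d in [0, 1] is the following sketch (near and far are assumed to be your projection's clip-plane distances):
float linearizeDepth(float d, float near, float far) {
    float z_ndc = d * 2.0 - 1.0; // back to NDC [-1, 1]
    return (2.0 * near * far) / (far + near - z_ndc * (far - near)); // positive view-space distance
}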

Point lighting with shadow mapping and camera moving

I ran into a problem when trying to create a spotlight in my scene. The problem is that my camera moves around the scene, and because of this something is wrong with the lighting. In addition, I see only a black screen. I understand that I missed a transformation somewhere, or did an extra one, but where, I really do not know.
Below is the code for my shaders.
Fragment shader:
#version 330 core
precision mediump float; // Set the default precision to medium. We don't need as high of a
// precision in the fragment shader.
#define MAX_LAMPS_COUNT 8 // Max lamps count.
uniform vec3 u_ViewPos; // Camera position
uniform int u_LampsCount; // Lamps count
uniform int u_ShadowMapWidth = 1024; // shadow map width / default is 1024
uniform int u_ShadowMapHeight = 1024; // shadow map height / default is 1024
uniform float brightnessThreshold = 0.5; // brightness threshold variable
uniform float far_plane = 16;
varying mat4 v_MVMatrix; // Model View matrix
varying mat3 v_TBN; // Tangent Bitangent Normal matrix
varying vec4 v_Position; // Position for this fragment.
varying vec3 v_Normal; // Interpolated normal for this fragment.
varying vec2 v_Texture; // Texture coordinates.
varying float v_NormalMapping; // Is normal mapping enabled 0 - false, 1 - true
struct Lamp {
    float ambientStrength;
    float diffuseStrength;
    float specularStrength;
    float kc; // constant term
    float kl; // linear term
    float kq; // quadratic term
    int shininess;
    vec3 lampPos; // in eye space, cameraViewMatrix * lamp world coordinates
    vec3 lampColor;
};
uniform samplerCube shadowMaps[MAX_LAMPS_COUNT];
uniform struct Mapping {
    sampler2D ambient;
    sampler2D diffuse;
    sampler2D specular;
    sampler2D normal;
} u_Mapping;
uniform Lamp u_Lamps[MAX_LAMPS_COUNT];
vec3 norm;
vec3 fragPos;
float shadow;
// output colors
layout(location = 0) out vec4 fragColor;
layout(location = 1) out vec4 fragBrightColor;
float calculateShadow(int textureIndex, vec3 lightPos) {
    // get the vector between the fragment position and the light position
    vec3 fragToLight = fragPos - lightPos;
    // use the light-to-fragment vector to sample from the depth map
    float closestDepth = texture(shadowMaps[textureIndex], fragToLight).r;
    // it is currently in the linear range [0, 1]; re-transform it back to the original value
    closestDepth *= far_plane;
    // now get the current linear depth as the length between the fragment and the light position
    float currentDepth = length(fragToLight);
    // now test for shadows
    float bias = 0.05;
    float shadow = currentDepth - bias > closestDepth ? 1.0 : 0.0;
    //fragColor = vec4(vec3(closestDepth / far_plane), 1.0); // visualization
    return shadow;
}
float calculateAttenuation(Lamp lamp) {
    float distance = length(lamp.lampPos - fragPos);
    return 1.0 / (lamp.kc + lamp.kl * distance + lamp.kq * (distance * distance));
}
vec4 toVec4(vec3 v) {
    return vec4(v, 1);
}
// The entry point for our fragment shader.
void main() {
    // transform the vertex into eye space
    fragPos = vec3(v_MVMatrix * v_Position);
    vec3 viewDir = normalize(u_ViewPos - fragPos);
    if (v_NormalMapping == 0) {
        norm = vec3(normalize(v_MVMatrix * vec4(v_Normal, 0)));
    } else { // use the normal map if normal mapping is enabled
        norm = texture2D(u_Mapping.normal, v_Texture).rgb;
        norm = normalize(norm * 2.0 - 1.0); // from [0, 1] to [-1, 1]
        norm = normalize(v_TBN * norm);
    }
    vec3 ambientResult = vec3(0, 0, 0); // result of ambient lighting for all lamps
    vec3 diffuseResult = vec3(0, 0, 0); // result of diffuse lighting for all lamps
    vec3 specularResult = vec3(0, 0, 0); // result of specular lighting for all lamps
    for (int i = 0; i < u_LampsCount; i++) {
        // attenuation
        float attenuation = calculateAttenuation(u_Lamps[i]);
        // ambient
        vec3 ambient = u_Lamps[i].ambientStrength * u_Lamps[i].lampColor * attenuation;
        // diffuse
        vec3 lightDir = normalize(u_Lamps[i].lampPos - fragPos);
        float diff = max(dot(norm, lightDir), 0.0);
        vec3 diffuse = u_Lamps[i].diffuseStrength * diff * u_Lamps[i].lampColor * attenuation;
        // specular
        vec3 reflectDir = reflect(-lightDir, norm);
        float spec = pow(max(dot(viewDir, reflectDir), 0.0), u_Lamps[i].shininess);
        vec3 specular = u_Lamps[i].specularStrength * spec * u_Lamps[i].lampColor * attenuation;
        // fragment position in light space
        //fragLightSpacePos = u_Lamps[i].lightSpaceMatrix * u_Lamps[i].lightModelMatrix * v_Position;
        // calculate shadow
        shadow = calculateShadow(i, u_Lamps[i].lampPos);
        // result for this (i-th) lamp
        ambientResult += ambient;
        diffuseResult += diffuse * (1.0 - shadow);
        specularResult += specular * (1.0 - shadow);
    }
    fragColor =
        toVec4(ambientResult) * texture2D(u_Mapping.ambient, v_Texture) +
        toVec4(diffuseResult) * texture2D(u_Mapping.diffuse, v_Texture) +
        toVec4(specularResult) * texture2D(u_Mapping.specular, v_Texture);
    // brightness calculation
    //float brightness = dot(fragColor.rgb, vec3(0.2126, 0.7152, 0.0722));
    //if (brightness > brightnessThreshold) fragBrightColor = vec4(fragColor.rgb, 1.0);
    fragBrightColor = vec4(0, 0, 0, 1);
}
Vertex shader:
#version 130
uniform mat4 u_MVPMatrix; // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix; // A constant representing the combined model/view matrix.
uniform float u_NormalMapping; // Normal mapping; 0 - false, 1 - true
attribute vec4 a_Position; // Per-vertex position information we will pass in.
attribute vec3 a_Normal; // Per-vertex normal information we will pass in.
attribute vec3 a_Tangent; // Per-vertex tangent information we will pass in.
attribute vec3 a_Bitangent; // Per-vertex bitangent information we will pass in.
attribute vec2 a_Texture; // Per-vertex texture information we will pass in.
varying mat4 v_MVMatrix; // This will be passed into the fragment shader.
varying mat3 v_TBN; // This will be passed into the fragment shader.
varying vec4 v_Position; // This will be passed into the fragment shader.
varying vec3 v_Normal; // This will be passed into the fragment shader.
varying vec2 v_Texture; // This will be passed into the fragment shader.
varying float v_NormalMapping; // This will be passed into the fragment shader.
void main() {
    // create the TBN (tangent-bitangent-normal) matrix if normal mapping is enabled
    if (u_NormalMapping == 1) {
        vec3 T = normalize(vec3(u_MVMatrix * vec4(a_Tangent, 0.0)));
        vec3 B = normalize(vec3(u_MVMatrix * vec4(a_Bitangent, 0.0)));
        vec3 N = normalize(vec3(u_MVMatrix * vec4(a_Normal, 0.0)));
        mat3 TBN = mat3(T, B, N);
        v_TBN = TBN;
    }
    // gl_Position is a special variable used to store the final position.
    // Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
    gl_Position = u_MVPMatrix * a_Position;
    // send all needed variables to the fragment shader
    v_Position = a_Position;
    v_Texture = a_Texture;
    v_NormalMapping = u_NormalMapping;
    v_MVMatrix = u_MVMatrix;
    v_Normal = a_Normal;
}
Vertex shadow shader:
#version 130
attribute vec3 a_Position;
uniform mat4 u_ModelMatrix;
void main() {
    gl_Position = u_ModelMatrix * vec4(a_Position, 1.0);
}
Fragment shadow shader:
#version 330 core
in vec4 fragPos;
uniform vec3 lightPos; // cameraViewMatrix * lamp world coordinates
uniform float far_plane = 16;
void main()
{
    float lightDistance = length(fragPos.xyz - lightPos);
    // map to the [0, 1] range by dividing by far_plane
    lightDistance = lightDistance / far_plane;
    // write this as the modified depth
    gl_FragDepth = lightDistance;
}
Geometry shadow shader:
#version 330 core
layout (triangles) in;
layout (triangle_strip, max_vertices=18) out;
uniform mat4 shadowMatrices[6];
out vec4 fragPos; // FragPos from GS (output per emitvertex)
void main() {
    for (int face = 0; face < 6; ++face) {
        gl_Layer = face; // built-in variable that specifies which cube map face we render to
        for (int i = 0; i < 3; ++i) { // for each of the triangle's vertices
            fragPos = gl_in[i].gl_Position;
            gl_Position = shadowMatrices[face] * fragPos;
            EmitVertex();
        }
        EndPrimitive();
    }
}
And here is a video demonstrating the shadow map visualization:
https://youtu.be/zaNXGG1qLaw
The content of shadowMaps[textureIndex] is probably a depth map taken in "light space". This means it is a depth map as seen from the light source.
But
fragPos = vec3(v_MVMatrix * v_Position);
and
struct Lamp {
    .....
    vec3 lampPos; // in eye space, cameraViewMatrix * lamp world coordinates
    .....
};
are in view-space coordinates. This means that
vec3 fragToLight = fragPos - lightPos;
is a direction in view space, as seen from the camera.
If you do
float closestDepth = texture(shadowMaps[textureIndex], fragToLight).r;
then a "light space" map is accessed by a "view space" vector. The transformation from view space coordiantes to "light space" coordiantes is missing.
To solve the issue you need a matrix that transforms from world coordinates to "light space" coordinates. This is the inverse of the view-projection matrix that you used when you created the shadowMaps.
mat4 inverse_light_vp_mat[MAX_LAMPS_COUNT];
The fragment position has to be transformed to world coordinates, and then to "light space" coordinates with inverse_light_vp_mat:
varying mat4 v_ModelMatrix; // Model matrix
vec4 fragLightPos = inverse_light_vp_mat[textureIndex] * v_ModelMatrix * v_Position;
fragLightPos.xyz /= fragLightPos.w;
In "light space" the light position is vec3( 0.0, 0.0, 0.0 ), because the position of the light source is the origin of the "light space". So the look up in the shadowMaps can be done directly with fragLightPos:
float closestDepth = texture(shadowMaps[textureIndex], fragLightPos.xyz).r;
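Put together, calculateShadow could then look roughly like this (a sketch based on the above; inverse_light_vp_mat and v_ModelMatrix are the uniform and varying introduced here):
float calculateShadow(int textureIndex) {
    // transform the fragment position into "light space"
    vec4 fragLightPos = inverse_light_vp_mat[textureIndex] * v_ModelMatrix * v_Position;
    fragLightPos.xyz /= fragLightPos.w;
    // the light source sits at the origin of "light space"
    float currentDepth = length(fragLightPos.xyz);
    // sample the cube map directly with the light-space position
    float closestDepth = texture(shadowMaps[textureIndex], fragLightPos.xyz).r;
    closestDepth *= far_plane; // re-transform from [0, 1] back to the original range
    float bias = 0.05;
    return currentDepth - bias > closestDepth ? 1.0 : 0.0;
}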
The problem was solved. It was due to the fact that I was sampling the shadow map in camera space (view space), while it had to be done in world space. The shadow calculation itself also had to be done entirely in world space.
Fragment shader:
vec3 fragToLight = vec3(model * v_Position) - lightPosWorldSpace;
or, equivalently (lightPos here is a vec4 in view space):
vec3 fragToLight = vec3(model * v_Position) - vec3(inverse(view) * lightPos);
Fragment shadow shader (lightPos is the lamp position in world space):
float lightDistance = length(fragPos.xyz - lightPos);
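A consolidated sketch of that world-space version (assuming the model matrix and a world-space lamp position, here called lampPosWorld as an illustrative name, are made available to the fragment shader):
uniform mat4 model;        // model matrix of the object being shaded
uniform vec3 lampPosWorld; // illustrative: lamp position in world space
...
vec3 fragWorldPos = vec3(model * v_Position);
vec3 fragToLight = fragWorldPos - lampPosWorld; // world-space direction used to sample the cube map
float closestDepth = texture(shadowMaps[i], fragToLight).r * far_plane;
float currentDepth = length(fragToLight);
float shadow = currentDepth - 0.05 > closestDepth ? 1.0 : 0.0;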

OpenGL Programmable Pipeline Point Lights

Since built-in uniforms such as gl_LightSource are now marked as deprecated in the latest versions of the OpenGL specification, I am currently implementing a basic lighting system (point lights right now) which receives all the light and material information through custom uniform variables.
I have implemented the light attenuation and specular highlights for a point light, and it seems to be working well, apart from a positioning glitch: I'm manually moving the light, altering its position along the X axis. However, the light source (judging by the light it casts upon the square plane below it) doesn't seem to move along the X axis alone, but rather diagonally, on both the X and Z axes (possibly Y too, though it's not entirely a positioning bug).
Here's a screenshot of what the distortion looks like (the light is at -35, 5, 0; Suzanne is at 0, 2, 0): (image omitted)
It looks OK when the light is at 0, 5, 0: (image omitted)
According to the OpenGL specification, all the default light computations take place in eye coordinates, which is what I'm trying to emulate here (hence the multiplication of the light position with the vMatrix). I am using just the view matrix, since applying the model transformation of the vertex batch being rendered to the light doesn't really make sense.
If it matters, all the plane's normals are pointing straight up - 0, 1, 0.
(Note: I have fixed the issue now, thanks to msell and myAces! The following snippets are the corrected versions. There's also an option to add spotlight parameters to the light now (d3d-style ones).)
Here's the code I'm using in the vertex shader:
#version 330
uniform mat4 mvpMatrix;
uniform mat4 mvMatrix;
uniform mat4 vMatrix;
uniform mat3 normalMatrix;
uniform vec3 vLightPosition;
uniform vec3 spotDirection;
uniform bool useTexture;
uniform bool fogEnabled;
uniform float minFogDistance;
uniform float maxFogDistance;
in vec4 vVertex;
in vec3 vNormal;
in vec2 vTexCoord;
smooth out vec3 vVaryingNormal;
smooth out vec3 vVaryingLightDir;
smooth out vec2 vVaryingTexCoords;
smooth out float fogFactor;
smooth out vec4 vertPos_ec;
smooth out vec4 lightPos_ec;
smooth out vec3 spotDirection_ec;
void main() {
    // Surface normal in eye coords
    vVaryingNormal = normalMatrix * vNormal;
    vec4 vPosition4 = mvMatrix * vVertex;
    vec3 vPosition3 = vPosition4.xyz / vPosition4.w;
    vec4 tLightPos4 = vMatrix * vec4(vLightPosition, 1.0);
    vec3 tLightPos = tLightPos4.xyz / tLightPos4.w;
    // Diffuse light
    // Vector to light source (do NOT normalize this!)
    vVaryingLightDir = tLightPos - vPosition3;
    if (useTexture) {
        vVaryingTexCoords = vTexCoord;
    }
    lightPos_ec = vec4(tLightPos, 1.0f);
    vertPos_ec = vec4(vPosition3, 1.0f);
    // Transform the light direction (for spotlights)
    vec4 spotDirection_ec4 = vec4(spotDirection, 1.0f);
    spotDirection_ec = spotDirection_ec4.xyz / spotDirection_ec4.w;
    spotDirection_ec = normalMatrix * spotDirection;
    // Projected vertex
    gl_Position = mvpMatrix * vVertex;
    // Fog factor
    if (fogEnabled) {
        float len = length(gl_Position);
        fogFactor = (len - minFogDistance) / (maxFogDistance - minFogDistance);
        fogFactor = clamp(fogFactor, 0, 1);
    }
}
And this is the code I'm using in the fragment shader:
#version 330
uniform vec4 globalAmbient;
// ADS shading model
uniform vec4 lightDiffuse;
uniform vec4 lightSpecular;
uniform float lightTheta;
uniform float lightPhi;
uniform float lightExponent;
uniform int shininess;
uniform vec4 matAmbient;
uniform vec4 matDiffuse;
uniform vec4 matSpecular;
// Cubic attenuation parameters
uniform float constantAt;
uniform float linearAt;
uniform float quadraticAt;
uniform float cubicAt;
// Texture stuff
uniform bool useTexture;
uniform sampler2D colorMap;
// Fog
uniform bool fogEnabled;
uniform vec4 fogColor;
smooth in vec3 vVaryingNormal;
smooth in vec3 vVaryingLightDir;
smooth in vec2 vVaryingTexCoords;
smooth in float fogFactor;
smooth in vec4 vertPos_ec;
smooth in vec4 lightPos_ec;
smooth in vec3 spotDirection_ec;
out vec4 vFragColor;
// Cubic attenuation function
float att(float d) {
    float den = constantAt + d * linearAt + d * d * quadraticAt + d * d * d * cubicAt;
    if (den == 0.0f) {
        return 1.0f;
    }
    return min(1.0f, 1.0f / den);
}
float computeIntensity(in vec3 nNormal, in vec3 nLightDir) {
    float intensity = max(0.0f, dot(nNormal, nLightDir));
    float cos_outer_cone = lightTheta;
    float cos_inner_cone = lightPhi;
    float cos_inner_minus_outer = cos_inner_cone - cos_outer_cone;
    // If we are a spot light
    if (lightTheta > 0.0f) {
        float cos_cur = dot(normalize(spotDirection_ec), -nLightDir);
        // d3d-style smooth edge
        float spotEffect = clamp((cos_cur - cos_outer_cone) / cos_inner_minus_outer, 0.0, 1.0);
        spotEffect = pow(spotEffect, lightExponent);
        intensity *= spotEffect;
    }
    float attenuation = att(length(lightPos_ec - vertPos_ec));
    intensity *= attenuation;
    return intensity;
}
/**
* Phong per-pixel lighting shading model.
* Implements basic texture mapping and fog.
*/
void main() {
    vec3 ct, cf;
    vec4 texel;
    float at, af;
    if (useTexture) {
        texel = texture2D(colorMap, vVaryingTexCoords);
    } else {
        texel = vec4(1.0f);
    }
    ct = texel.rgb;
    at = texel.a;
    vec3 nNormal = normalize(vVaryingNormal);
    vec3 nLightDir = normalize(vVaryingLightDir);
    float intensity = computeIntensity(nNormal, nLightDir);
    cf = matAmbient.rgb * globalAmbient.rgb + intensity * lightDiffuse.rgb * matDiffuse.rgb;
    af = matAmbient.a * globalAmbient.a + lightDiffuse.a * matDiffuse.a;
    if (intensity > 0.0f) {
        // Specular light
        // - added *after* the texture color is multiplied so that
        //   we get a truly shiny result
        vec3 vReflection = normalize(reflect(-nLightDir, nNormal));
        float spec = max(0.0, dot(nNormal, vReflection));
        float fSpec = pow(spec, shininess) * lightSpecular.a;
        cf += intensity * vec3(fSpec) * lightSpecular.rgb * matSpecular.rgb;
    }
    // Color modulation
    vFragColor = vec4(ct * cf, at * af);
    // Add the fog to the mix
    if (fogEnabled) {
        vFragColor = mix(vFragColor, fogColor, fogFactor);
    }
}
What math bug could be causing this distortion?
Edit 1:
I've updated the shader code. The attenuation is now being computed in the fragment shader, as it should have been all along. It's currently disabled, though - the bug doesn't have anything to do with the attenuation. When rendering only the attenuation factor of the light (see the last few lines of the fragment shader), the attenuation is being computed right. This means that the light position is being correctly transformed into eye coordinates, so it can't be the source of the bug.
The last few lines of the fragment shader can be used for some (slightly hackish but nevertheless insightful) debugging - it seems the intensity of the light is not being computed right per-fragment, though I have no idea why.
What's interesting is that this bug is only noticeable on (very) large quads like the floor in the images. It's not noticeable on small models.
Edit 2:
I've updated the shader code to a working version. It's all good now, and I hope it helps any future reader, since as of today I have yet to see any GLSL tutorial that implements lights with absolutely no fixed functionality and no secret implicit transforms (such as gl_LightSource[i].* and the implicit transformations to eye space).
My code is licensed under the BSD 2-clause license and can be found on GitHub!
I recently had a similar problem, where lighting worked somewhat wrong when using large polygons. The problem was normalizing the eye vector in the vertex shader, as interpolating normalized values produces incorrect results: halfway between the unit vectors (1, 0) and (0, 1), for example, the interpolated value is (0.5, 0.5), which is no longer unit length.
Change
vVaryingLightDir = normalize( tLightPos - vPosition3 );
to
vVaryingLightDir = tLightPos - vPosition3;
in your vertex shader. You can keep the normalization in the fragment shader.
Just because I noticed:
vec3 tLightPos = (vMatrix * vec4(vLightPosition, 1.0)).xyz;
you are simply dropping the homogeneous coordinate here without first dividing by it. This will cause some problems.
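The minimal fix is to perform the homogeneous divide first, as the corrected vertex shader above already does:
vec4 tLightPos4 = vMatrix * vec4(vLightPosition, 1.0);
vec3 tLightPos = tLightPos4.xyz / tLightPos4.w; // divide by w before dropping it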