I've adapted the Cinder library's signed distance font handling to Delphi, and am now implementing a twist: uploading the data for multiple texts in a single call, and having some control over relative size when zooming (using a uniform instead of the 1.0001 factor in the geometry shader; not yet working in this code).
The basic signed distance handling is not altered; I only tried to calculate the needed rectangles in the geometry shader. I understand how to create the destination rectangle (where the character must appear) using a triangle_strip, but I am having problems passing the texture coordinates to the fragment shader.
Destination rectangle: the input topleft.xy plus widthheight (dimensions) is used to calculate the destination rectangle of each character on screen, written to gl_Position.
Texture source rectangle: texcoordtl plus texdimens, the top-left point and dimensions of the character in the font texture. This is the main point where I'm unsure. It is passed to the fragment shader via the texcoord in/out parameter.
I'd be grateful for any pointers or avenues to research, and I especially wonder about the way I calculate the texcoord coordinates and pass them on to the fragment shader.
An array of the below record is bound with GL_ARRAY_BUFFER and described using a series of glGetAttribLocation/glEnableVertexAttribArray/glVertexAttribPointer calls.
Drawing is done using
glDrawArrays(GL_POINTS, 0, numberofelements_in_array );
The record:
TGLCharacter = packed record // 5*2* single + 1*4 byte color + 1*4 byte detail. = 48 bytes per character drawn
origin : TGLVectorf2; // origin of text ( = glfloat[2])
topleft : TGLVectorf2; // origin of this character
widthheight: TGLVectorf2; // width and height of this character
texcoordtl : TGLVectorf2; // coordinates topleft in texture.
texdimens : TGLVectorf2; // sizes in texture
col : TGLVectorub4; // 4 colors, 1 per rect vertex
detail : integer; // layer. Not used in this example.
end;
Geometry shader code first, because I expect the problems are here:
#version 150 compatibility
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
in vec2 gorigin[];
in vec2 gtopleft[];
in vec2 gwidthheight[];
in vec2 gtexcoordtl[];
in vec2 gtexdimens[];
in vec4 gcolor[];
out vec3 fColor;
out vec2 texcoord;
void main() {
// calculate the distance of the current char from the first char of this text
vec2 dxcoordinate = (gtopleft[0]-gorigin[0]);
// later this will be multiplied by a uniform to calculate the new coordinate;
// for now we use a constant slightly above 1.0 to make debugging easier, because
// with a factor of exactly 1.0 the NVIDIA shader compiler optimizes gorigin out.
vec2 x1y1 = 1.0001*gorigin[0]+dxcoordinate;
vec2 x2y2 = x1y1+gwidthheight[0]*1.0001;
vec2 texx1y1 = gtexcoordtl[0];
vec2 texx2y2 = gtexcoordtl[0]+gtexdimens[0];
fColor = vec3(gcolor[0].rgb);
gl_Position = gl_ModelViewProjectionMatrix * vec4(x1y1,0,1.0);
texcoord = texx1y1.xy;
EmitVertex();
gl_Position = gl_ModelViewProjectionMatrix * vec4(x2y2.x,x1y1.y,0,1.0);
texcoord = vec2(texx2y2.x,texx1y1.y);
EmitVertex();
gl_Position= gl_ModelViewProjectionMatrix * vec4(x1y1.x,x2y2.y,0,1.0);
texcoord = vec2(texx1y1.x,texx2y2.y);
EmitVertex();
gl_Position = gl_ModelViewProjectionMatrix * vec4(x2y2,0,1.0);
texcoord = texx2y2.xy;
EmitVertex();
EndPrimitive();
}
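For the relative-size-on-zoom control mentioned at the top, here is a minimal sketch of how I would expect the uniform-based variant to look; the uniform name relscale is my own placeholder and this is one way to interpret the "relative size" goal, not code from the working shader above:
uniform float relscale; // hypothetical: relative glyph scale, 1.0 = unchanged
and inside main(), replacing the two lines that use the 1.0001 constant:
vec2 x1y1 = gorigin[0] + relscale*dxcoordinate;
vec2 x2y2 = x1y1 + relscale*gwidthheight[0];
Because relscale is a uniform rather than a compile-time constant, the compiler can no longer fold gorigin away even when the value passed in is exactly 1.0.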
Fragment shader code:
#version 150 compatibility
uniform sampler2D font_map;
uniform float smoothness;
const float gamma = 2.2;
in vec3 fColor;
in vec2 texcoord;
void main()
{
// retrieve signed distance
float sdf = texture2D( font_map, texcoord.xy ).r;
// perform adaptive anti-aliasing of the edges
float w = clamp( smoothness * (abs(dFdx(texcoord.x)) + abs(dFdy(texcoord.y))), 0.0, 0.5);
float a = smoothstep(0.5-w, 0.5+w, sdf);
// gamma correction for linear attenuation
a = pow(a, 1.0/gamma);
if (a<0.1)
discard;
// final color
gl_FragColor.rgb = fColor.rgb;
gl_FragColor.a = gl_Color.a * a;
}
The vertex shader code is probably OK, I guess:
#version 150 compatibility
in vec2 anorigin;
in vec2 topleft;
in vec2 widthheight;
in vec2 texcoordtl;
in vec2 texdimens;
in vec4 color;
out vec2 gorigin;
out vec2 gtopleft;
out vec2 gwidthheight;
out vec2 gtexcoordtl;
out vec2 gtexdimens;
out vec4 gcolor;
void main()
{
gorigin=anorigin;
gtopleft=topleft;
gwidthheight=widthheight;
gtexcoordtl=texcoordtl;
gtexdimens=texdimens;
gcolor=color;
gl_Position = gl_ModelViewProjectionMatrix * vec4(anorigin.xy,0,1.0);
}
The above code works. The problem was in the uploading code, so wrong vertex data was uploaded. I did some minor fixes to the above code while debugging and added it to the question, so that the question now shows working code.
Here is some possible code that changes the last 2 lines of the fragment shader to create outline fonts. I'm not really happy with it yet, though: when zooming out, the color of the font seems to change.
vec3 othercol; // to be added to declarations
The rest of the shader below the discard statement becomes:
othercol=vec3(1.0,1.0,1.0);
if (sqrt(0.299 * fColor.r*fColor.r + 0.587 * fColor.g*fColor.g + 0.114 * fColor.b*fColor.b)>0.5)
{ othercol=vec3(0,0,0);}
// final color
if (sdf>0.25 && sdf<0.75)
{gl_FragColor.rgb = othercol.rgb;}
else
{gl_FragColor.rgb = fColor.rgb;}
gl_FragColor.a = gl_Color.a * a;
}
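One possible refinement I would try (an untested assumption on my part, not code from the question): give the outline band the same adaptive screen-space width w as the glyph edge, instead of the hard 0.25/0.75 thresholds; hard thresholds on a minified distance field are a likely cause of the colour shift when zooming out. Replacing everything from the alpha computation down, the end of the fragment shader could then look like:
float outer = smoothstep(0.25-w, 0.25+w, sdf); // coverage of glyph plus outline
float inner = smoothstep(0.50-w, 0.50+w, sdf); // coverage of the glyph core
if (outer < 0.1)
discard;
vec3 othercol = vec3(1.0);
if (sqrt(0.299*fColor.r*fColor.r + 0.587*fColor.g*fColor.g + 0.114*fColor.b*fColor.b) > 0.5)
{ othercol = vec3(0.0); }
gl_FragColor.rgb = mix(othercol, fColor.rgb, inner);
gl_FragColor.a = gl_Color.a * pow(outer, 1.0/gamma);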
I want to texture my terrain without predetermined texture coordinates. I want to determine the coordinates in the vertex or fragment shader using the vertex position coordinates. I currently use the position's xz coordinates (up = (0,1,0)), but if I have, for example, a wall that is at 90 degrees to the ground, the texture will look like this:
How can I transform these position coordinates so that this works well?
Here's my vertex shader:
#version 430
in layout(location=0) vec3 position;
in layout(location=1) vec2 textCoord;
in layout(location=2) vec3 normal;
out vec3 pos;
out vec2 text;
out vec3 norm;
uniform mat4 transformation;
void main()
{
gl_Position = transformation * vec4(position, 1.0);
norm = normal;
pos = position;
text = position.xz;
}
And here's my fragment shader:
#version 430
in vec3 pos;
in vec2 text;
in vec3 norm;
//uniform sampler2D textures[3];
layout(binding=3) uniform sampler2D texture_1;
layout(binding=4) uniform sampler2D texture_2;
layout(binding=5) uniform sampler2D texture_3;
vec3 lightPosition = vec3(-200, 700, 50);
vec3 lightAmbient = vec3(0,0,0);
vec3 lightDiffuse = vec3(1,1,1);
vec3 lightSpecular = vec3(1,1,1);
out vec4 fragColor;
vec4 theColor;
void main()
{
vec3 unNormPos = pos;
vec3 lightVector = normalize(lightPosition) - normalize(pos);
//lightVector = normalize(lightVector);
float cosTheta = clamp(dot(normalize(lightVector), normalize(norm)), 0.5, 1.0);
if(pos.y <= 120){
fragColor = texture2D(texture_2, text*0.05) * cosTheta;
}
if(pos.y > 120 && pos.y < 150){
fragColor = (texture2D(texture_2, text*0.05) * (1 - (pos.y-120)/29) + texture2D(texture_3, text*0.05) * ((pos.y-120)/29))*cosTheta;
}
if(pos.y >= 150)
{
fragColor = texture2D(texture_3, text*0.05) * cosTheta;
}
}
EDIT (after Fons's answer), trying:
text = 0.05 * (position.xz + vec2(0,position.y));
text = 0.05 * (position.xz + vec2(position.y,position.y));
Now the walls work but the terrain does not.
The problem is actually a very difficult one, since you cannot devise a formula for the texture coordinates that displays vertical walls correctly, using only the xyz coordinates.
To visualize this, imagine a hill next to a piece of flat land. Since the path going over the hill is longer than the one going over the flat land, the texture should wrap more times on the hill than on the flat piece of land. In the image below, the texture wraps 5 times on the hill and 4 times on the flat piece.
If the texture coordinates are (0,0) on the left, should they be (4,0) or (5,0) on the right? Since both answers are valid, this proves that there is no function that calculates correct texture coordinates based purely on the xyz coordinates. :(
However, your problems might be solved with different methods:
The walls can be corrected by generating them independently from the terrain, and assigning correct texture coordinates to them. It actually makes more sense not to incorporate those in your terrain.
You can add more detail to the sides of steep hills with normal maps, textures of higher resolution, or a combination of different textures. There might be a better solution that I don't know about.
Edit: Triplanar mapping will solve your problem!
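A minimal sketch of triplanar mapping in the fragment shader, using the pos and norm inputs you already have; the 0.05 tiling factor is the one from your code, everything else here is illustrative:
vec3 blend = abs(normalize(norm));
blend /= (blend.x + blend.y + blend.z); // weights sum to 1
// project the texture along each world axis and blend by the normal direction
vec4 xProj = texture(texture_2, pos.zy * 0.05); // walls facing +/-X
vec4 yProj = texture(texture_2, pos.xz * 0.05); // flat ground
vec4 zProj = texture(texture_2, pos.xy * 0.05); // walls facing +/-Z
vec4 blended = xProj*blend.x + yProj*blend.y + zProj*blend.z;
You would do this per terrain layer (texture_2, texture_3, ...) and keep the existing height-based mixing; the text coordinate from the vertex shader is then no longer needed for these lookups.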
Try:
text = position.xz + vec2(0, position.y);
Also, I recommend applying the *0.05 scale factor in the vertex shader instead of the fragment shader. The final code would be:
text = 0.05 * (position.xz + vec2(0, position.y));
I've been trying to implement a simple light / shading system, a simple Phong lighting system without specular lights to be precise. It basically works, except it has some (in my opinion) nasty artifacts.
My first thought was that maybe this is a problem of the texture mipmaps, but disabling them didn't work. My next best guess would be a shader issue, but I can't seem to find the error.
Has anybody ever experienced a similar issue, or has an idea on how to solve this?
Image of the artifacts
Vertex shader:
#version 330 core
// Vertex shader
layout(location = 0) in vec3 vpos;
layout(location = 1) in vec2 vuv;
layout(location = 2) in vec3 vnormal;
out vec2 uv; // UV coordinates
out vec3 normal; // Normal in camera space
out vec3 pos; // Position in camera space
out vec3 light[3]; // Vertex -> light vector in camera space
uniform mat4 mv; // View * model matrix
uniform mat4 mvp; // Proj * View * Model matrix
uniform mat3 nm; // Normal matrix for transforming normals into c-space
void main() {
// Pass uv coordinates
uv = vuv;
// Adjust normals
normal = nm * vnormal;
// Calculation of vertex in camera space
pos = (mv * vec4(vpos, 1.0)).xyz;
// Vector vertex -> light in camera space
light[0] = (mv * vec4(0.0,0.3,0.0,1.0)).xyz - pos;
light[1] = (mv * vec4(-6.0,0.3,0.0,1.0)).xyz - pos;
light[2] = (mv * vec4(0.0,0.3,4.8,1.0)).xyz - pos;
// Pass position after projection transformation
gl_Position = mvp * vec4(vpos, 1.0);
}
Fragment shader:
#version 330 core
// Fragment shader
layout(location = 0) out vec3 color;
in vec2 uv; // UV coordinates
in vec3 normal; // Normal in camera space
in vec3 pos; // Position in camera space
in vec3 light[3]; // Vertex -> light vector in camera space
uniform sampler2D tex;
uniform float flicker;
void main() {
vec3 n = normalize(normal);
// Ambient
color = 0.05 * texture(tex, uv).rgb;
// Diffuse lights
for (int i = 0; i < 3; i++) {
vec3 l = normalize(light[i]);
float cosTheta = clamp(dot(n,l), 0.0, 1.0);
float length = length(light[i]);
color += 0.6 * texture(tex, uv).rgb * cosTheta / pow(length, 2);
}
}
As the first comment says, it looks like your color computation is using insufficient precision. Try using mediump or highp floats.
Additionally, the length = length(light[i]); pow(length,2) expression is quite inefficient, and could also be a source of the observed banding; you should use dot(light[i],light[i]) instead.
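For example, the loop body could be written as (keeping the question's variable names where possible):
vec3 l = normalize(light[i]);
float cosTheta = clamp(dot(n, l), 0.0, 1.0);
// squared distance, without the square root that length() would compute first
float dist2 = dot(light[i], light[i]);
color += 0.6 * texture(tex, uv).rgb * cosTheta / dist2;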
So I found information about my problem, which is described as "gradient banding" and also discussed here. The problem appears to lie in the nature of my textures: both the plain "white" texture and the real texture are mostly grey/white, and there are effectively only 256 levels of grey when using 8 bits per color channel.
The solution would be to implement post-processing dithering or to use better textures.
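A minimal sketch of the dithering option, added at the very end of main(); the hash-based noise is just one common choice, and the 1/255 amplitude assumes an 8-bit render target:
// add less than one LSB of screen-space noise to break up the visible bands
float noise = fract(sin(dot(gl_FragCoord.xy, vec2(12.9898, 78.233))) * 43758.5453);
color += (noise - 0.5) / 255.0;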
I'm doing per-pixel lighting(phong shading) on my terrain. I'm using a heightmap to generate the terrain height and then calculating the normal for each vertex. The normals are interpolated in the fragment shader and also normalized.
I am getting some weird dark lines near the edges of triangles where there shouldn't be any.
http://imgur.com/L2kj4ca
I checked if the normals were correct using a geometry shader to draw the normals on the terrain and they seem to be correct.
http://imgur.com/FrJpdXI
There is no point using a normal map for the terrain; it would just give pretty much the same normals. The problem lies with the way the normals are interpolated across a triangle.
I am out of ideas on how to solve this. I couldn't find any working solution online.
Terrain Vertex Shader:
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 normal;
layout (location = 2) in vec2 textureCoords;
out vec2 pass_textureCoords;
out vec3 surfaceNormal;
out vec3 toLightVector;
out float visibility;
uniform mat4 transformationMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform vec3 lightPosition;
const float density = 0.0035;
const float gradient = 5.0;
void main()
{
vec4 worldPosition = transformationMatrix * vec4(position, 1.0f);
vec4 positionRelativeToCam = viewMatrix * worldPosition;
gl_Position = projectionMatrix * positionRelativeToCam;
pass_textureCoords = textureCoords;
surfaceNormal = (transformationMatrix * vec4(normal, 0.0f)).xyz;
toLightVector = lightPosition - worldPosition.xyz;
float distance = length(positionRelativeToCam.xyz);
visibility = exp(-pow((distance * density), gradient));
visibility = clamp(visibility, 0.0, 1.0);
}
Terrain Fragment Shader:
#version 330 core
in vec2 pass_textureCoords;
in vec3 surfaceNormal;
in vec3 toLightVector;
in float visibility;
out vec4 colour;
uniform vec3 lightColour;
uniform vec3 fogColour;
uniform sampler2DArray blendMap;
uniform sampler2DArray diffuseMap;
void main()
{
vec4 blendMapColour = texture(blendMap, vec3(pass_textureCoords, 0));
float backTextureAmount = 1 - (blendMapColour.r + blendMapColour.g + blendMapColour.b);
vec2 tiledCoords = pass_textureCoords * 255.0;
vec4 backgroundTextureColour = texture(diffuseMap, vec3(tiledCoords, 0)) * backTextureAmount;
vec4 rTextureColour = texture(diffuseMap, vec3(tiledCoords, 1)) * blendMapColour.r;
vec4 gTextureColour = texture(diffuseMap, vec3(tiledCoords, 2)) * blendMapColour.g;
vec4 bTextureColour = texture(diffuseMap, vec3(tiledCoords, 3)) * blendMapColour.b;
vec4 diffuseColour = backgroundTextureColour + rTextureColour + gTextureColour + bTextureColour;
vec3 unitSurfaceNormal = normalize(surfaceNormal);
vec3 unitToLightVector = normalize(toLightVector);
float brightness = dot(unitSurfaceNormal, unitToLightVector);
float ambient = 0.2;
brightness = max(brightness, ambient);
vec3 diffuse = brightness * lightColour;
colour = vec4(diffuse, 1.0) * diffuseColour;
colour = mix(vec4(fogColour, 1.0), colour, visibility);
}
This can be one of two issues:
1. Incorrect normals:
There are different types of shading: flat shading, Gouraud shading and Phong shading (not the same thing as Phong specular). Example:
You usually want Phong shading. To make that easier, OpenGL interpolates the normals between the vertices of each triangle for you, so at each pixel you get the correct normal for that point. But you still need to feed it proper normal values: the average of the normals of every triangle attached to that vertex. So in the function that creates the vertices, normals and UVs, you need to compute the normal at each vertex by averaging the normals of every triangle attached to it. illustration
2. Subdivision problem:
The other possible issue is that your terrain is not subdivided enough, or your heightmap resolution is too low, resulting in this kind of glitch because of the height difference between two vertices of one triangle (i.e. between two pixels of your heightmap).
If you can provide some of your code and shaders, maybe even the heightmap, we can pin down exactly what is happening in your case.
This is old, but I suspect you're not transforming your normal using the transposed inverse of the upper 3x3 part of your modelview matrix. See this. Not sure what's in "transformationMatrix", but if you're using it to transform both the vertex and the normal, something is probably fishy...
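For reference, a minimal sketch of that transform in the question's vertex shader (computing the inverse per vertex is wasteful, normally you would upload a precomputed normal matrix as a uniform, but it shows the idea):
// inverse-transpose of the upper 3x3 of the model matrix, applied to the normal
mat3 normalMat = transpose(inverse(mat3(transformationMatrix)));
surfaceNormal = normalMat * normal;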
I'm working on an OpenGL application using the Qt5 GUI framework. However, I'm not an expert in OpenGL and I'm facing a couple of issues when trying to simulate directional light. I'm using 'almost' the same algorithm I used in a WebGL application, which works just fine.
The application is used to render multiple adjacent cells of a large gridblock (each of which is represented by 8 independent vertices), meaning that some vertices of the whole gridblock are duplicated in the VBO. The normals are calculated per face in the geometry shader, as shown below in the code.
QOpenGLWidget paintGL() body.
void OpenGLWidget::paintGL()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
m_camera = camera.toMatrix();
m_world.setToIdentity();
m_program->bind();
m_program->setUniformValue(m_projMatrixLoc, m_proj);
m_program->setUniformValue(m_mvMatrixLoc, m_camera * m_world);
QMatrix3x3 normalMatrix = (m_camera * m_world).normalMatrix();
m_program->setUniformValue(m_normalMatrixLoc, normalMatrix);
QVector3D lightDirection = QVector3D(1,1,1);
lightDirection.normalize();
QVector3D directionalColor = QVector3D(1,1,1);
QVector3D ambientLight = QVector3D(0.2,0.2,0.2);
m_program->setUniformValue(m_lightDirectionLoc, lightDirection);
m_program->setUniformValue(m_directionalColorLoc, directionalColor);
m_program->setUniformValue(m_ambientColorLoc, ambientLight);
geometries->drawGeometry(m_program);
m_program->release();
}
Vertex Shader
#version 330
layout(location = 0) in vec4 vertex;
uniform mat4 projMatrix;
uniform mat4 mvMatrix;
void main()
{
gl_Position = projMatrix * mvMatrix * vertex;
}
Geometry Shader
#version 330
layout ( triangles ) in;
layout ( triangle_strip, max_vertices = 3 ) out;
out vec3 transformedNormal;
uniform mat3 normalMatrix;
void main()
{
vec3 A = gl_in[2].gl_Position.xyz - gl_in[0].gl_Position.xyz;
vec3 B = gl_in[1].gl_Position.xyz - gl_in[0].gl_Position.xyz;
gl_Position = gl_in[0].gl_Position;
transformedNormal = normalMatrix * normalize(cross(A,B));
EmitVertex();
gl_Position = gl_in[1].gl_Position;
transformedNormal = normalMatrix * normalize(cross(A,B));
EmitVertex();
gl_Position = gl_in[2].gl_Position;
transformedNormal = normalMatrix * normalize(cross(A,B));
EmitVertex();
EndPrimitive();
}
Fragment Shader
#version 330
in vec3 transformedNormal;
out vec4 fColor;
uniform vec3 lightDirection;
uniform vec3 ambientColor;
uniform vec3 directionalColor;
void main()
{
highp float directionalLightWeighting = max(dot(transformedNormal, lightDirection), 0.0);
vec3 vLightWeighting = ambientColor + directionalColor * directionalLightWeighting;
highp vec3 color = vec3(1, 1, 0.0);
fColor = vec4(color*vLightWeighting, 1.0);
}
The 1st issue is that lighting on the faces seems to change whenever the camera angle changes (camera location doesn't affect it, only the angle). You can see this behavior in the following snapshot. My guess is that I'm doing something wrong when calculating the normal matrix, but I can't figure out what it is.
The 2nd issue (the one causing me headaches) is that whenever the camera is moved, the edges of the cells show blocky and jagged lines that flicker when the camera moves around. This effect gets really nasty when there are too many cells clustered together.
The model used in the snapshot is just a sample slab of 10 cells to better illustrate the faulty effects. The actual models (gridblock) contain up to 200K cells stacked together.
EDIT: 2nd issue solution.
I was using znear/zfar values of 0.01f and 50000.0f respectively; when I changed the znear to 1.0f, this effect disappeared. According to the OpenGL Wiki, this is caused by a zNear clipping plane value that's too close to 0.0: as the zNear clipping plane is set increasingly closer to 0.0, the effective precision of the depth buffer decreases dramatically.
EDIT2: I tried debug drawing the normals as suggested in the comments, and I quickly realized that I probably shouldn't calculate them based on gl_Position (i.e. after the MVP matrix multiplication in the vertex shader); instead I should use the original vertex locations, so I modified the shaders as follows:
Vertex Shader (UPDATED)
#version 330
layout(location = 0) in vec4 vertex;
out vec3 vert;
uniform mat4 projMatrix;
uniform mat4 mvMatrix;
void main()
{
vert = vertex.xyz;
gl_Position = projMatrix * mvMatrix * vertex;
}
Geometry Shader (UPDATED)
#version 330
layout ( triangles ) in;
layout ( triangle_strip, max_vertices = 3 ) out;
in vec3 vert [];
out vec3 transformedNormal;
uniform mat3 normalMatrix;
void main()
{
vec3 A = vert[2].xyz - vert[0].xyz;
vec3 B = vert[1].xyz - vert[0].xyz;
gl_Position = gl_in[0].gl_Position;
transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
EmitVertex();
gl_Position = gl_in[1].gl_Position;
transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
EmitVertex();
gl_Position = gl_in[2].gl_Position;
transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
EmitVertex();
EndPrimitive();
}
But even after this modification the normals of the surface still change with the camera angle, as shown in the screenshot below. I don't know if the normal calculation is wrong, the normal matrix calculation is wrong, or maybe both...
EDIT3: 1st issue solution: changing the normal calculation in the geometry shader from
transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
to
transformedNormal = normalize(cross(A,B));
seems to solve the problem. Omitting the normalMatrix from the calculation fixed the issue and the normals no longer change with the viewing angle.
If I missed any important/relevant information, please notify me in a comment.
Depth buffer precision
The depth buffer is usually stored as a 16 or 24 bit buffer. It is a hardware implementation of a float normalized to a specific range, so there are very few bits for mantissa/exponent in comparison to a standard float.
If I oversimplify things and assume integer values instead of floats, then for a 16 bit buffer you get 2^16 values. With znear=0.1 and zfar=50000.0 you have only 65536 values over the whole range. Since the depth values are nonlinear, you get higher accuracy near the znear plane and much, much lower accuracy near the zfar plane, so the depth values jump in larger and larger steps, causing accuracy problems wherever two polygons are close to each other.
I empirically arrived at this rule for setting the planes in my views:
(zfar-znear)/desired_accuracy_step > 0.3*(2^n)
where n is the depth buffer bit-width and desired_accuracy_step is the resolution I want along the Z axis. Sometimes I have seen it exchanged with the znear value.
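For reference, with a standard perspective projection the window-space depth of a point at eye-space distance d is nonlinear:
depth(d) = zfar*(d - znear) / (d*(zfar - znear))
so the depth resolution available at distance d shrinks roughly with d^2, and it scales with znear; raising znear from 0.01 to 1.0, as in the question's fix, makes about 100 times more depth precision available far from the camera.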
Since built-in uniforms such as gl_LightSource are now marked as deprecated in the latest versions of the OpenGL specification, I am currently implementing a basic lighting system (point lights right now) which receives all the light and material information through custom uniform variables.
I have implemented the light attenuation and specular highlights for a point light, and it seems to be working well, apart from a positioning glitch: I'm manually moving the light, altering its position along the X axis. However, the light source (judging by the light it casts upon the square plane below it) doesn't seem to move along the X axis, but rather diagonally, on both the X and Z axes (possibly Y too, though it's not entirely a positioning bug).
Here's a screenshot of what the distortion looks like (the light is at -35, 5, 0; Suzanne is at 0, 2, 0):
It looks OK when the light is at 0, 5, 0:
According to the OpenGL specification, all the default light computations take place in eye coordinates, which is what I'm trying to emulate here (hence the multiplication of the light position with the vMatrix). I am using just the view matrix, since applying the model transformation of the vertex batch being rendered to the light doesn't really make sense.
If it matters, all the plane's normals are pointing straight up - 0, 1, 0.
(Note: I fixed the issue now, thanks to msell and myAces! The following snippets are the corrected versions. There's also an option to add spotlight parameters to the light now (d3d style ones))
Here's the code I'm using in the vertex shader:
#version 330
uniform mat4 mvpMatrix;
uniform mat4 mvMatrix;
uniform mat4 vMatrix;
uniform mat3 normalMatrix;
uniform vec3 vLightPosition;
uniform vec3 spotDirection;
uniform bool useTexture;
uniform bool fogEnabled;
uniform float minFogDistance;
uniform float maxFogDistance;
in vec4 vVertex;
in vec3 vNormal;
in vec2 vTexCoord;
smooth out vec3 vVaryingNormal;
smooth out vec3 vVaryingLightDir;
smooth out vec2 vVaryingTexCoords;
smooth out float fogFactor;
smooth out vec4 vertPos_ec;
smooth out vec4 lightPos_ec;
smooth out vec3 spotDirection_ec;
void main() {
// Surface normal in eye coords
vVaryingNormal = normalMatrix * vNormal;
vec4 vPosition4 = mvMatrix * vVertex;
vec3 vPosition3 = vPosition4.xyz / vPosition4.w;
vec4 tLightPos4 = vMatrix * vec4(vLightPosition, 1.0);
vec3 tLightPos = tLightPos4.xyz / tLightPos4.w;
// Diffuse light
// Vector to light source (do NOT normalize this!)
vVaryingLightDir = tLightPos - vPosition3;
if(useTexture) {
vVaryingTexCoords = vTexCoord;
}
lightPos_ec = vec4(tLightPos, 1.0f);
vertPos_ec = vec4(vPosition3, 1.0f);
// Transform the light direction (for spotlights)
vec4 spotDirection_ec4 = vec4(spotDirection, 1.0f);
spotDirection_ec = spotDirection_ec4.xyz / spotDirection_ec4.w;
spotDirection_ec = normalMatrix * spotDirection;
// Projected vertex
gl_Position = mvpMatrix * vVertex;
// Fog factor
if(fogEnabled) {
float len = length(gl_Position);
fogFactor = (len - minFogDistance) / (maxFogDistance - minFogDistance);
fogFactor = clamp(fogFactor, 0, 1);
}
}
And this is the code I'm using in the fragment shader:
#version 330
uniform vec4 globalAmbient;
// ADS shading model
uniform vec4 lightDiffuse;
uniform vec4 lightSpecular;
uniform float lightTheta;
uniform float lightPhi;
uniform float lightExponent;
uniform int shininess;
uniform vec4 matAmbient;
uniform vec4 matDiffuse;
uniform vec4 matSpecular;
// Cubic attenuation parameters
uniform float constantAt;
uniform float linearAt;
uniform float quadraticAt;
uniform float cubicAt;
// Texture stuff
uniform bool useTexture;
uniform sampler2D colorMap;
// Fog
uniform bool fogEnabled;
uniform vec4 fogColor;
smooth in vec3 vVaryingNormal;
smooth in vec3 vVaryingLightDir;
smooth in vec2 vVaryingTexCoords;
smooth in float fogFactor;
smooth in vec4 vertPos_ec;
smooth in vec4 lightPos_ec;
smooth in vec3 spotDirection_ec;
out vec4 vFragColor;
// Cubic attenuation function
float att(float d) {
float den = constantAt + d * linearAt + d * d * quadraticAt + d * d * d * cubicAt;
if(den == 0.0f) {
return 1.0f;
}
return min(1.0f, 1.0f / den);
}
float computeIntensity(in vec3 nNormal, in vec3 nLightDir) {
float intensity = max(0.0f, dot(nNormal, nLightDir));
float cos_outer_cone = lightTheta;
float cos_inner_cone = lightPhi;
float cos_inner_minus_outer = cos_inner_cone - cos_outer_cone;
// If we are a point light
if(lightTheta > 0.0f) {
float cos_cur = dot(normalize(spotDirection_ec), -nLightDir);
// d3d style smooth edge
float spotEffect = clamp((cos_cur - cos_outer_cone) /
cos_inner_minus_outer, 0.0, 1.0);
spotEffect = pow(spotEffect, lightExponent);
intensity *= spotEffect;
}
float attenuation = att( length(lightPos_ec - vertPos_ec) );
intensity *= attenuation;
return intensity;
}
/**
* Phong per-pixel lighting shading model.
* Implements basic texture mapping and fog.
*/
void main() {
vec3 ct, cf;
vec4 texel;
float at, af;
if(useTexture) {
texel = texture2D(colorMap, vVaryingTexCoords);
} else {
texel = vec4(1.0f);
}
ct = texel.rgb;
at = texel.a;
vec3 nNormal = normalize(vVaryingNormal);
vec3 nLightDir = normalize(vVaryingLightDir);
float intensity = computeIntensity(nNormal, nLightDir);
cf = matAmbient.rgb * globalAmbient.rgb + intensity * lightDiffuse.rgb * matDiffuse.rgb;
af = matAmbient.a * globalAmbient.a + lightDiffuse.a * matDiffuse.a;
if(intensity > 0.0f) {
// Specular light
// - added *after* the texture color is multiplied so that
// we get a truly shiny result
vec3 vReflection = normalize(reflect(-nLightDir, nNormal));
float spec = max(0.0, dot(nNormal, vReflection));
float fSpec = pow(spec, shininess) * lightSpecular.a;
cf += intensity * vec3(fSpec) * lightSpecular.rgb * matSpecular.rgb;
}
// Color modulation
vFragColor = vec4(ct * cf, at * af);
// Add the fog to the mix
if(fogEnabled) {
vFragColor = mix(vFragColor, fogColor, fogFactor);
}
}
What math bug could be causing this distortion?
Edit 1:
I've updated the shader code. The attenuation is now being computed in the fragment shader, as it should have been all along. It's currently disabled, though - the bug doesn't have anything to do with the attenuation. When rendering only the attenuation factor of the light (see the last few lines of the fragment shader), the attenuation is being computed right. This means that the light position is being correctly transformed into eye coordinates, so it can't be the source of the bug.
The last few lines of the fragment shader can be used for some (slightly hackish but nevertheless insightful) debugging - it seems the intensity of the light is not being computed right per-fragment, though I have no idea why.
What's interesting is that this bug is only noticeable on (very) large quads like the floor in the images. It's not noticeable on small models.
Edit 2:
I've updated the shader code to a working version. It's all good now and I hope it helps any future user reading this, since as of today, I have yet to see any glsl tutorial that implements lights with absolutely no fixed functionality and secret implicit transforms (such as gl_LightSource[i].* and the implicit transformations to eye space).
My code is licensed under the BSD 2-clause license and can be found on GitHub!
I recently had a similar problem, where lighting worked somewhat wrong when using large polygons. The problem was normalizing the eye vector in the vertex shader, as interpolating normalized values produces incorrect results.
Change
vVaryingLightDir = normalize( tLightPos - vPosition3 );
to
vVaryingLightDir = tLightPos - vPosition3;
in your vertex shader. You can keep the normalization in the fragment shader.
Just because I noticed:
vec3 tLightPos = (vMatrix * vec4(vLightPosition, 1.0)).xyz;
you are simply eliminating the homogeneous coordinate here without dividing by it first. This will cause some problems.
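The safe form divides by w first, which is what the corrected vertex shader in the question now does:
vec4 tLightPos4 = vMatrix * vec4(vLightPosition, 1.0);
vec3 tLightPos = tLightPos4.xyz / tLightPos4.w;
(For a view matrix that only rotates and translates, w stays 1.0 and the divide changes nothing, but silently dropping it is what bites you once that assumption breaks.)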