After finishing my PBR implementation I noticed that when I get close to a reflective surface, artifacts appear, which I think are floating-point errors.
I would really rather not use doubles, since my GPU handles them with a heavy performance penalty.
Here's my PBR IBL code (all the environment maps and deferred buffers are correct):
// Deferred rendering
vec2 texcoord = gl_FragCoord.xy / textureSize(R_GBP, 0);
vec3 P = texture2D(R_GBP, texcoord).xyz;
vec4 A = texture2D(R_GBA, texcoord).rgba;
vec3 N = texture2D(R_GBN, texcoord).xyz;
float M = texture2D(R_GBM, texcoord).r;
float R = texture2D(R_GBR, texcoord).r;
vec3 V = normalize(R_CameraPos - P);
vec3 T = reflect(-V, N);
vec3 F0 = mix(vec3(0.04), A.rgb, M);
float NDV = max(dot(N, V), 0.0);
vec3 kS = FresnelSchlickRoughness(NDV, F0, R);
vec3 kD = (vec3(1.0) - kS) * (1.0 - M);
vec3 irr = textureCube(R_EnvironmentIrradiance, N).rgb;
vec3 dfs = irr * A.rgb;
const float MaxReflectionLod = 4.0;
vec3 preF_col = textureLod(R_EnvironmentPreFilter, T, R * MaxReflectionLod).rgb;
vec2 brdf = texture2D(R_EnvironmentBRDF, vec2(NDV, R)).rg;
vec3 specular = preF_col * (kS * brdf.x + brdf.y);
vec3 ambient = kD * dfs + specular;
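One common source of such artifacts, offered here only as a guess, is storing world-space positions in a half-float G-buffer: precision drops quickly as the surface moves away from the origin. A minimal sketch of the usual mitigation, reconstructing the position from the depth buffer instead of storing it; R_Depth and R_InvViewProjection are hypothetical names, not from the original code:
// Hypothetical sketch, not from the original code: reconstruct world-space
// position from the depth buffer to avoid half-float precision loss.
float depth = texture2D(R_Depth, texcoord).r;
vec4 clipPos = vec4(vec3(texcoord, depth) * 2.0 - 1.0, 1.0); // [0,1] -> NDC
vec4 worldPos = R_InvViewProjection * clipPos;
vec3 P = worldPos.xyz / worldPos.w; // perspective divide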
I'm rendering a sphere with instanced drawing while rotating the model-view matrix around the Y axis.
It looks ok at the beginning:
But at another angle, things get worse:
It looks to me like a problem with the normals. Currently I'm calculating the normal matrix from my model-view matrix and then passing it to the shader, which does Phong-like lighting:
attribute vec4 a_position;
attribute vec3 a_normal;
attribute vec4 a_color;
attribute vec2 a_coord;
attribute mat4 a_matrix;
uniform mat4 u_mv_matrix;
uniform mat4 u_projection_matrix;
uniform mat3 u_normal_matrix;
varying vec4 v_position;
varying vec3 v_normal;
varying vec4 v_color;
varying vec2 v_coord;
void main() {
vec4 transformedPosition = u_mv_matrix * a_matrix * a_position;
v_position = transformedPosition;
v_normal = u_normal_matrix * a_normal;
v_color = a_color;
v_coord = a_coord;
gl_Position = u_projection_matrix * transformedPosition;
}
uniform sampler2D u_sampler;
varying vec4 v_position;
varying vec3 v_normal;
varying vec4 v_color;
varying vec2 v_coord;
void main() {
vec3 lightPosition = vec3(0.0); // XXX
// set diffuse and specular colors
vec3 cDiffuse = (v_color * texture2D(u_sampler, v_coord)).rgb;
vec3 cSpecular = vec3(0.3);
// lighting calculations
vec3 N = normalize(v_normal);
vec3 L = normalize(lightPosition - v_position.xyz);
vec3 E = normalize(-v_position.xyz);
vec3 H = normalize(L + E);
// Calculate coefficients.
float phong = max(dot(N, L), 0.0);
const float kMaterialShininess = 20.0;
const float kNormalization = (kMaterialShininess + 8.0) / (3.14159265 * 8.0);
float blinn = pow(max(dot(N, H), 0.0), kMaterialShininess) * kNormalization;
// diffuse coefficient
vec3 diffuse = phong * cDiffuse;
// specular coefficient
vec3 specular = blinn * cSpecular;
gl_FragColor = vec4(diffuse + specular, 1);
}
Final note: I'm working on desktop OpenGL 2.1 as well as WebGL in the browser.
Edit: Per request, I'm adding some information.
The mesh is built as follows, by passing an identity matrix:
void Sphere::append(IndexedVertexBatch<XYZ.N.UV> &batch, const Matrix &matrix) const {
float sectorStep = TWO_PI / sectorCount;
float stackStep = PI / stackCount;
for(int i = 0; i <= stackCount; ++i) {
float stackAngle = HALF_PI - i * stackStep;
float xy = radius * cosf(stackAngle);
float z = radius * sinf(stackAngle);
for(int j = 0; j <= sectorCount; ++j) {
float sectorAngle = j * sectorStep;
float x = xy * cosf(sectorAngle);
float y = xy * sinf(sectorAngle);
float nx = x / radius;
float ny = y / radius;
float nz = z / radius;
float s = (float)j / sectorCount;
float t = (float)i / stackCount;
batch.addVertex(matrix.transformPoint(x, y, z), matrix.transformNormal(nx, ny, nz), glm::vec2(s, t));
}
}
for(int i = 0; i < stackCount; ++i) {
int k1 = i * (sectorCount + 1);
int k2 = k1 + sectorCount + 1;
for(int j = 0; j < sectorCount; ++j, ++k1, ++k2) {
if (i != 0) {
if (frontFace == CCW) {
batch.addIndices(k1, k1 + 1, k2);
} else {
batch.addIndices(k1, k2, k1 + 1);
}
}
if (i != (stackCount - 1)) {
if (frontFace == CCW) {
batch.addIndices(k1 + 1, k2 + 1, k2);
} else {
batch.addIndices(k1 + 1, k2, k2 + 1);
}
}
}
}
}
Regarding the transformation matrices, it works as follows:
camera.getMVMatrix()
.setIdentity()
.translate(0, -150, -600)
.rotateY(clock()->getTime() * 0.5f);
State()
.setShader(shader)
.setShaderMatrix<MV>(camera.getMVMatrix())
.setShaderMatrix<PROJECTION>(camera.getProjectionMatrix())
.setShaderMatrix<NORMAL>(camera.getNormalMatrix())
.apply();
Finally, the light position is defined as vec3(0) in the fragment shader.
Note: As you can see, I'm using my own framework, which provides, among other things, high-level methods for building meshes and handling transformations. It's all straightforward stuff, proven to work as intended, but let me know if you need pointers to the source code.
Update: The lighting part of the shader I used ended up being wrong, so I switched to another method.
But in essence, the solution I proposed in my answer is still valid (or at least it does the job of solving the "normal problem" when instancing is used and non-uniform scaling is avoided).
Here is a gist with the source code. There is also an online WebGL demo.
The solution was relatively simple: there is no point in passing a normal matrix to the shader.
Instead, the normal needs to be computed in the vertex shader:
v_normal = vec3(u_mv_matrix * a_matrix * vec4(a_normal, 0.0));
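This works because a direction vector with w = 0.0 ignores the translation part of the combined matrix. Note it is only safe while the scaling stays uniform; with non-uniform scaling the classic inverse-transpose would be needed instead. A sketch (inverse() requires GLSL 1.40 / WebGL 2, so on OpenGL 2.1 or WebGL 1 this would have to be computed on the CPU per instance):
// Hedged alternative for non-uniformly scaled instances (GLSL 1.40+ only):
v_normal = transpose(inverse(mat3(u_mv_matrix * a_matrix))) * a_normal;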
I am following along with the LearnOpenGL guide and am trying to implement Steep Parallax Mapping.
Everything seems to be working fine, except that my brick wall shows distinct visible layers, whereas the photos in the guide don't show any. I was trying to use this code to parallax the topography of the world, and the same weird layers show up there too, so I was hoping to find a fix.
Layered wall photo
Photo of how it should look
Here is my modified vertex shader:
#version 300 es
in vec4 vPosition; // aPos
in vec2 texCoord; // aTexCoords
in vec4 vNormal; // aNormal
in vec4 vTangent; // aTangent
uniform mat4 model_view;
uniform mat4 projection;
uniform vec4 light_position;
out vec2 ftexCoord;
out vec3 vT;
out vec3 vN;
out vec4 position;
out vec3 FragPos;
out vec3 TangentLightPos;
out vec3 TangentViewPos;
out vec3 TangentFragPos;
void
main()
{
// Normal variables
vN = normalize(model_view * vNormal).xyz;
vT = normalize(model_view * vTangent).xyz;
vec4 veyepos = model_view*vPosition;
position = veyepos;
ftexCoord = texCoord;
// Displacement variables
vec3 bi = cross(vT, vN);
FragPos = vec3(model_view * vPosition).xyz;
vec3 T = normalize(mat3(model_view) * vTangent.xyz);
vec3 B = normalize(mat3(model_view) * bi);
vec3 N = normalize(mat3(model_view) * vNormal.xyz);
mat3 TBN = transpose(mat3(T, B, N));
TangentLightPos = TBN * light_position.xyz;
TangentViewPos = TBN * vPosition.xyz;
TangentFragPos = TBN * FragPos;
gl_Position = projection * model_view * vPosition;
}
and here is my modified fragment shader:
#version 300 es
precision highp float;
in vec2 ftexCoord;
in vec3 vT; //parallel to surface in eye space
in vec3 vN; //perpendicular to surface in eye space
in vec4 position;
in vec3 FragPos;
in vec3 TangentLightPos;
in vec3 TangentViewPos;
in vec3 TangentFragPos;
uniform int mode;
uniform vec4 light_position;
uniform vec4 light_color;
uniform vec4 ambient_light;
uniform sampler2D colorMap;
uniform sampler2D normalMap;
uniform sampler2D depthMap;
out vec4 fColor;
// STEEP PARALLAX MAPPING
vec2 ParallaxMapping(vec2 texCoords, vec3 viewDir)
{
// number of depth layers
const float minLayers = 8.0;
const float maxLayers = 32.0;
float numLayers = mix(maxLayers, minLayers, abs(dot(vec3(0.0, 0.0, 1.0), viewDir)));
// calculate the size of each layer
float layerDepth = 1.0 / numLayers;
// depth of current layer
float currentLayerDepth = 0.0;
// the amount to shift the texture coordinates per layer (from vector P)
vec2 P = viewDir.xy / viewDir.z * 0.1;
vec2 deltaTexCoords = P / numLayers;
// get initial values
vec2 currentTexCoords = texCoords;
float currentDepthMapValue = texture(depthMap, currentTexCoords).r;
while(currentLayerDepth < currentDepthMapValue)
{
// shift texture coordinates along direction of P
currentTexCoords -= deltaTexCoords;
// get depthmap value at current texture coordinates
currentDepthMapValue = texture(depthMap, currentTexCoords).r;
// get depth of next layer
currentLayerDepth += layerDepth;
}
return currentTexCoords;
}
void main()
{
// DO NORMAL MAPPING
if (mode == 0) {
vec3 T = normalize(vT);
vec3 N = normalize(vN);
vec3 bi = cross(T, N);
mat4 changeOfCoord = mat4(vec4(T, 0), vec4(bi, 0), vec4(N, 0), vec4(0, 0, 0, 1));
vec3 L = normalize(light_position - position).xyz;
vec3 E = normalize(-position).xyz;
vec4 text = vec4(texture(normalMap, ftexCoord) * 2.0 - 1.0);
vec4 eye = changeOfCoord * text;
vec4 amb = texture(colorMap, ftexCoord) * ambient_light;
vec4 diff = max(0.0, dot(L, eye.xyz)) * light_color * texture(colorMap, ftexCoord);
fColor = amb + diff;
} else if (mode == 1) { // DO PARALLAX MAPPING
// offset texture coordinates with Parallax Mapping
vec3 viewDir = normalize(TangentViewPos - TangentFragPos);
vec2 texCoords = ftexCoord;
texCoords = ParallaxMapping(ftexCoord, viewDir);
// discard samples outside of the default texture coordinate space
if(texCoords.x > 1.0 || texCoords.y > 1.0 || texCoords.x < 0.0 || texCoords.y < 0.0)
discard;
// obtain normal from normal map
vec3 normal = texture(normalMap, texCoords).rgb;
//values stored in normal texture is [0,1] range, we need [-1, 1] range
normal = normalize(normal * 2.0 - 1.0);
// get diffuse color
vec3 color = texture(colorMap, texCoords).rgb;
// ambient
vec3 ambient = 0.1 * color;
// diffuse
vec3 lightDir = normalize(TangentLightPos - TangentFragPos);
float diff = max(dot(lightDir, normal), 0.0);
vec3 diffuse = diff * color;
// specular
vec3 reflectDir = reflect(lightDir, normal);
vec3 halfwayDir = normalize(lightDir + viewDir);
float spec = pow(max(dot(normal, halfwayDir), 0.0), 32.0);
vec3 specular = vec3(0.2) * spec;
fColor = vec4(ambient + diffuse + specular, 1.0); // was 'ambient + diffuse + 0.0', dropping the computed specular term
}
}
Layers at acute viewing angles are a common artifact of steep parallax mapping. To improve the result you have to increase the number of samples or implement Parallax Occlusion Mapping (as described in the bottom part of the tutorial):
// STEEP PARALLAX MAPPING
vec2 ParallaxMapping(vec2 texCoords, vec3 viewDir)
{
// number of depth layers
const float minLayers = 8.0;
const float maxLayers = 32.0;
float numLayers = mix(maxLayers, minLayers, abs(dot(vec3(0.0, 0.0, 1.0), viewDir)));
// calculate the size of each layer
float layerDepth = 1.0 / numLayers;
// depth of current layer
float currentLayerDepth = 0.0;
// the amount to shift the texture coordinates per layer (from vector P)
vec2 P = viewDir.xy / viewDir.z * 0.1;
vec2 deltaTexCoords = P / numLayers;
// get initial values
vec2 currentTexCoords = texCoords;
float currentDepthMapValue = texture(depthMap, currentTexCoords).r;
while(currentLayerDepth < currentDepthMapValue)
{
// shift texture coordinates along direction of P
currentTexCoords -= deltaTexCoords;
// get depthmap value at current texture coordinates
currentDepthMapValue = texture(depthMap, currentTexCoords).r;
// get depth of next layer
currentLayerDepth += layerDepth;
}
// get texture coordinates before collision (reverse operations)
vec2 prevTexCoords = currentTexCoords + deltaTexCoords;
// get depth after and before collision for linear interpolation
float afterDepth = currentDepthMapValue - currentLayerDepth;
float beforeDepth = texture(depthMap, prevTexCoords).r - currentLayerDepth + layerDepth;
// interpolation of texture coordinates
float weight = afterDepth / (afterDepth - beforeDepth);
vec2 finalTexCoords = prevTexCoords * weight + currentTexCoords * (1.0 - weight);
return finalTexCoords;
}
By the way, the bitangent vector seems to be inverted. Commonly the bitangent is the cross product of the normal and the tangent in a right-handed system, but that depends on the displacement texture:
vec3 bi = cross(vT, vN); // original
vec3 bi = cross(vN, vT); // suggested
See further:
Bump Mapping with javascript and glsl
Normal, Parallax and Relief mapping
Demo
Edit:
In hindsight those images may be correct, since they just show the vector differences. So, assuming they are correct, the issue is actually somewhere in the BRDF code. I've added the full shader code and I'm attaching a new screenshot showing the artifacts I'm seeing. It seems to be oversaturated at certain angles.
The issue is potentially in the distribution. I tried a Beckmann distribution model as well, and it showed the same type of issue.
See here as the light source moves down over the terrain from .. It's oversaturating on the right-hand side:
light at horizon
light just above horizon
I'm having some issues calculating directions in the vertex shader; the direction is skewed towards one corner (the origin).
I create the terrain using instancing, but the same issue happens if I just use a static plane.
My vertex shader looks like this (using Ogre3D):
#version 330 compatibility
#define MAP_HEIGHT_FACTOR 50000
#define MAP_SCALE_FACTOR 100
// attributes
in vec4 blendIndices;
in vec4 uv0;
in vec4 uv1;
in vec4 uv2;
in vec4 position;
in vec2 vtx_texcoord0;
uniform mat4 viewProjMatrix;
uniform mat4 modelMatrix;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 worldMatrix;
uniform vec3 cameraPosition;
uniform vec3 sunPosition;
out vec4 vtxPosWorld;
out vec3 lightDirection;
out vec3 viewVector;
out vec2 l_texcoord0; // added: written in main() but never declared
uniform sampler2D heightmap;
uniform mat4 worldViewProjMatrix;
void main()
{
vtxPosWorld = vec4((gl_Vertex.x * MAP_SCALE_FACTOR) + uv0.w, // was 'vec4 vtxPosWorld = ...', which shadowed the 'out' variable above
(gl_Vertex.y * MAP_SCALE_FACTOR) + uv1.w,
(gl_Vertex.z * MAP_SCALE_FACTOR) + uv2.w,
1.0 ) * worldMatrix;
l_texcoord0 = vec2((vtxPosWorld.x)/(8192*MAP_SCALE_FACTOR), (vtxPosWorld.z)/(8192*MAP_SCALE_FACTOR));
vec4 hmt = texture(heightmap, l_texcoord0);
float height = (hmt.x * MAP_HEIGHT_FACTOR); // 'height' was never declared
// take the height from the heightmap
vtxPosWorld = vec4(vtxPosWorld.x, height, vtxPosWorld.z, vtxPosWorld.w);
lightDirection = vec4(normalize(vec4(sunPosition,1.0)) * viewMatrix).xyz;
viewVector = normalize((vec4(cameraPosition,1.0)*viewMatrix).xyz-(vtxPosWorld*viewMatrix).xyz);
gl_Position = worldViewProjMatrix * vtxPosWorld; // 'l_Position' was presumably meant to be gl_Position
}
Fragment shader:
#version 330 compatibility
#define TERRAIN_SIZE 8192.0
#define HEIGHT_SCALE_FACTOR 50000
#define MAP_SCALE_FACTOR 100
#define M_PI 3.1415926535897932384626433832795
in vec2 l_texcoord0;
in vec4 vtxPosWorld;
in vec3 viewVector;
in vec3 lightDirection; // added: used below but never declared
uniform vec3 sunPosition;
uniform vec3 cameraPosition;
uniform sampler2D heightmap;
float G1V(float dotP, float k)
{
return 1.0f/(dotP*(1.0f-k)+k);
}
float calcBRDF(vec3 normal, float fresnel, float MFD, vec3 sunColor) {
float F = fresnel;
vec3 Nn = normalize(normal.xyz);
vec3 Vn = viewVector;
vec3 Ln = lightDirection;
vec3 Hn = normalize(viewVector + lightDirection);
float NdotV = max(dot(Nn,Vn),0.0);
float NdotL = max(dot(Nn,Ln),0.0);
float NdotH = max(dot(Nn,Hn),0.1);
float VdotH = max(dot(Vn,Hn),0.0);
float LdotH = max(dot(Ln,Hn),0.0);
// Microfacet Distribution
float denom, alpha, beckmannD, GGXD;
float NdotHSqr = NdotH * NdotH;
float alphaSqr = MFD*MFD;
// GGX distribution (better performance)
denom = NdotHSqr * ( alphaSqr-1.0 ) + 1.0f;
GGXD = alphaSqr/(M_PI * pow(denom,2));
float k = MFD/2.0f;
float GGX = G1V(NdotL,k) * G1V(NdotV,k);
return F * GGXD * GGX; // 'GGXSpecular' was never declared
}
float calcFresnel(float R) {
vec3 Hn = normalize(viewVector + lightDirection);
vec3 Vn = viewVector;
vec3 Ln = lightDirection;
float VdotH = dot(Vn,Hn);
float fresnel = R + (1.0-R)*pow((1.0-VdotH),5.0); // Schlick's approximation
return fresnel;
}
vec3 calcNormal(sampler2D heightmap, vec2 texcoord) {
const vec2 size = vec2(MAP_SCALE_FACTOR,0.0);
vec3 off = vec3(1.0, 0.0, 1.0)/TERRAIN_SIZE;
float hL = texture2D(heightmap, texcoord - off.xy).x*HEIGHT_SCALE_FACTOR;
float hR = texture2D(heightmap, texcoord + off.xy).x*HEIGHT_SCALE_FACTOR;
float hD = texture2D(heightmap, texcoord - off.yz).x*HEIGHT_SCALE_FACTOR;
float hU = texture2D(heightmap, texcoord + off.yz).x*HEIGHT_SCALE_FACTOR;
vec3 va = normalize(vec3(size.xy,(hL-hR)));
vec3 vb = normalize(vec3(size.yx,(hD-hU)));
// debug return left in: it forces every normal to (1,1,1), so the line below never runs
return vec3(1.0,1.0,1.0);
return normalize(cross(va,vb)/2 + 0.5);
}
void main()
{
vec3 normal = calcNormal(heightmap, l_texcoord0);
float N = 1.69;
float microFacetDistribution = 1.5;
vec3 sunColor = vec3(1.0,1.0,1.0);
float Rfactor = calcFresnelReflectance(N);
float fresnel = calcFresnel(Rfactor);
float brdf = calcBRDF(normal,fresnel,microFacetDistribution,sunColor);
float conservedBrdf = clamp(brdf,0.0,fresnel);
gl_FragColor = vec4(vec3(0.5) * conservedBrdf, 1.0); // was assigning a vec4 to .rgb
}
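The shader also calls calcFresnelReflectance, which isn't shown in the post. A minimal sketch of what it presumably computes, Schlick's base reflectance F0 from the index of refraction (my reconstruction, assuming air with n1 = 1.0 as the outer medium):
// Hypothetical reconstruction of the missing helper; not from the original post.
float calcFresnelReflectance(float n)
{
// F0 = ((n1 - n2) / (n1 + n2))^2 with n1 = 1.0 (air)
float r = (1.0 - n) / (1.0 + n);
return r * r;
}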
I've tried using view space, world space, etc. It seems like a simple/silly problem, but I can't figure it out.
Any suggestions appreciated.
Of course the answer was something silly.
First of all, the normals were incorrect. That caused a skew in the light direction, which made the light appear to come from one direction only.
Secondly, the light direction itself needed to be negated.
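A hedged sketch of what those two fixes might look like in the shaders above (my reconstruction, keeping the post's vector-times-matrix convention, not the poster's actual code):
// in calcNormal(): return the real surface normal; drop the debug return and
// the /2 + 0.5 packing, which is for storing normals in textures, not lighting
return normalize(cross(va, vb));
// in the vertex shader: treat the sun as a direction (w = 0.0) and negate it
lightDirection = -normalize((vec4(sunPosition, 0.0) * viewMatrix).xyz);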
I am writing a simple deferred rendering 'engine' for fun and encountered strange behaviour with GLSL: calling a function caused a slight malfunction, while simply pasting the body of the function in its place solved the problem (details below).
Am I doing something terribly wrong, or is it possible that I hit a limitation or a GLSL compiler bug (14.12 AMD Catalyst Omega drivers)?
Original call (in a for loop over shadow):
ColorOut.rgb += phong(position.xyz, normal, color, Shadows[shadow].position.xyz, Shadows[shadow].color.rgb);
Using this, the colors of my shadow-casting lights are all the same: Shadows[shadow].color.rgb always seems to be equal to the first element of the array (in the phong function at least).
My 'solution' was to replace the call with the body of the function:
vec3 L = normalize(Shadows[shadow].position.xyz - position.xyz);
float dNL = dot(normal, L);
float diffuseFactor = max(dNL, minDiffuse);
vec3 V = normalize(cameraPosition - position.xyz);
vec3 R = normalize(reflect(-L, normal));
float specularFactor = pow(max(dot(R, V), 0.f), 8.0);
ColorOut.rgb += diffuseFactor * color * Shadows[shadow].color.rgb + specularFactor * Shadows[shadow].color.rgb;
With this, everything works fine.
Other related code snippets. The shadow-casting lights' data structure:
layout(std140, binding = 2) uniform ShadowBlock
{
vec4 position;
vec4 color;
mat4 depthMVP;
} Shadows[8];
The phong function:
vec3 phong(vec3 p, vec3 N, vec3 diffuse, vec3 lp, vec3 lc)
{
vec3 L = normalize(lp - p);
float dNL = dot(N, L);
float diffuseFactor = max(dNL, minDiffuse);
vec3 V = normalize(cameraPosition - p);
vec3 R = normalize(reflect(-L, N));
float specularFactor = pow(max(dot(R, V), 0.f), 8.0);
return diffuseFactor * diffuse * lc + specularFactor * lc;
}
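This behaviour is consistent with the driver mishandling members of a non-constantly-indexed uniform block array when they are passed as function arguments. A hedged workaround that keeps the function call (untested, just a sketch): copy the members into locals first, so the compiler never has to pass a block member directly.
// Hedged workaround sketch: copy out of the UBO array before the call
vec3 lightPos = Shadows[shadow].position.xyz;
vec3 lightColor = Shadows[shadow].color.rgb;
ColorOut.rgb += phong(position.xyz, normal, color, lightPos, lightColor);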
I came across this FXAA shader that does anti-aliasing and seems to be working quite well.
But somehow I could not understand the logic. Can someone explain?
[[FX]]
// Samplers
sampler2D buf0 = sampler_state {
Address = Clamp;
Filter = None;
};
context FXAA {
VertexShader = compile GLSL VS_FSQUAD;
PixelShader = compile GLSL FS_FXAA;
}
[[VS_FSQUAD]]
uniform mat4 projMat;
attribute vec3 vertPos;
varying vec2 texCoords;
void main(void) {
texCoords = vertPos.xy;
gl_Position = projMat * vec4( vertPos, 1 );
}
[[FS_FXAA]]
uniform sampler2D buf0;
uniform vec2 frameBufSize;
varying vec2 texCoords;
void main( void ) {
//gl_FragColor.xyz = texture2D(buf0,texCoords).xyz;
//return;
float FXAA_SPAN_MAX = 8.0;
float FXAA_REDUCE_MUL = 1.0/8.0;
float FXAA_REDUCE_MIN = 1.0/128.0;
vec3 rgbNW=texture2D(buf0,texCoords+(vec2(-1.0,-1.0)/frameBufSize)).xyz;
vec3 rgbNE=texture2D(buf0,texCoords+(vec2(1.0,-1.0)/frameBufSize)).xyz;
vec3 rgbSW=texture2D(buf0,texCoords+(vec2(-1.0,1.0)/frameBufSize)).xyz;
vec3 rgbSE=texture2D(buf0,texCoords+(vec2(1.0,1.0)/frameBufSize)).xyz;
vec3 rgbM=texture2D(buf0,texCoords).xyz;
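// convert the five RGB samples to perceived luminance (Rec. 601 weights)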
vec3 luma=vec3(0.299, 0.587, 0.114);
float lumaNW = dot(rgbNW, luma);
float lumaNE = dot(rgbNE, luma);
float lumaSW = dot(rgbSW, luma);
float lumaSE = dot(rgbSE, luma);
float lumaM = dot(rgbM, luma);
float lumaMin = min(lumaM, min(min(lumaNW, lumaNE), min(lumaSW, lumaSE)));
float lumaMax = max(lumaM, max(max(lumaNW, lumaNE), max(lumaSW, lumaSE)));
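// luma differences across rows and columns estimate the local edge direction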
vec2 dir;
dir.x = -((lumaNW + lumaNE) - (lumaSW + lumaSE));
dir.y = ((lumaNW + lumaSW) - (lumaNE + lumaSE));
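// scale the step by the local contrast and clamp it to FXAA_SPAN_MAX texels,
// so flat, low-contrast areas receive little or no blur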
float dirReduce = max(
(lumaNW + lumaNE + lumaSW + lumaSE) * (0.25 * FXAA_REDUCE_MUL),
FXAA_REDUCE_MIN);
float rcpDirMin = 1.0/(min(abs(dir.x), abs(dir.y)) + dirReduce);
dir = min(vec2( FXAA_SPAN_MAX, FXAA_SPAN_MAX),
max(vec2(-FXAA_SPAN_MAX, -FXAA_SPAN_MAX),
dir * rcpDirMin)) / frameBufSize;
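// rgbA: average of two samples a third of the way along the edge direction;
// rgbB: rgbA blended with two more samples at the far ends of the span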
vec3 rgbA = (1.0/2.0) * (
texture2D(buf0, texCoords.xy + dir * (1.0/3.0 - 0.5)).xyz +
texture2D(buf0, texCoords.xy + dir * (2.0/3.0 - 0.5)).xyz);
vec3 rgbB = rgbA * (1.0/2.0) + (1.0/4.0) * (
texture2D(buf0, texCoords.xy + dir * (0.0/3.0 - 0.5)).xyz +
texture2D(buf0, texCoords.xy + dir * (3.0/3.0 - 0.5)).xyz);
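// if the far samples overshot the edge (their luma leaves the local min/max
// range), fall back to the short two-tap average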
float lumaB = dot(rgbB, luma);
if((lumaB < lumaMin) || (lumaB > lumaMax)){
gl_FragColor.xyz=rgbA;
}else{
gl_FragColor.xyz=rgbB;
}
}
FXAA is a filter algorithm that performs anti-aliasing on images. In contrast to other AA techniques, it is applied to the pixels of a finished image rather than while drawing its primitives. In 3D applications like games it is applied as a post-processing step on top of the rendered scene.
The basic idea is: find edges by looking at local luminance contrast, estimate the edge direction, and blend samples taken along that direction, falling back to a shorter blur when the samples overshoot the edge.
Here's a good description and the original paper on the topic.