Strange Voxel Cone Tracing Results - c++

I'm currently in the process of writing a Voxel Cone Tracing rendering engine with C++ and OpenGL. Everything is going rather well, except that I'm getting rather strange results for wider cone angles.
Right now, for testing purposes, all I am doing is shooting a single cone along the fragment normal. I am only calculating 'indirect light'. For reference, here is the rather simple fragment shader I'm using:
#version 450 core
out vec4 FragColor;
in vec3 pos_fs;
in vec3 nrm_fs;
uniform sampler3D tex3D;
vec3 indirectDiffuse();
vec3 voxelTraceCone(const vec3 from, vec3 direction);
void main()
{
FragColor = vec4(0, 0, 0, 1);
FragColor.rgb += indirectDiffuse();
}
vec3 indirectDiffuse(){
// singular cone in direction of the normal
vec3 ret = voxelTraceCone(pos_fs, nrm_fs);
return ret;
}
vec3 voxelTraceCone(const vec3 origin, vec3 dir) {
float max_dist = 1.0f;
dir = normalize(dir);
float current_dist = 0.01f;
float apperture_angle = 0.01f; //Angle in Radians.
vec3 color = vec3(0.0f);
float occlusion = 0.0f;
float vox_size = 128.0f; //voxel map size
while(current_dist < max_dist && occlusion < 1) {
//Get cone diameter (tan = opposite / adjacent)
float current_coneDiameter = 2.0f * current_dist * tan(apperture_angle * 0.5f);
//Get mipmap level which should be sampled according to the cone diameter
float vlevel = log2(current_coneDiameter * vox_size);
vec3 pos_worldspace = origin + dir * current_dist;
vec3 pos_texturespace = (pos_worldspace + vec3(1.0f)) * 0.5f; //[-1,1] Coordinates to [0,1]
vec4 voxel = textureLod(tex3D, pos_texturespace, vlevel); //get voxel
vec3 color_read = voxel.rgb;
float occlusion_read = voxel.a;
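//Front-to-back compositing: weight the new sample by the remaining transmittance (1 - occlusion)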
color = occlusion*color + (1 - occlusion) * occlusion_read * color_read;
occlusion = occlusion + (1 - occlusion) * occlusion_read;
float dist_factor = 0.3f; //Lower = better results but higher performance hit
current_dist += current_coneDiameter * dist_factor;
}
return color;
}
The tex3D uniform is the voxel 3D texture.
Under a regular Phong shader (under which the voxel values are calculated) the scene looks like this:
For reference, this is what the voxel map (tex3D) (128x128x128) looks like when visualized:
Now we get to the actual problem I'm having. If I apply the shader above to the scene, I get following results:
For very small cone angles (apperture_angle=0.01) I get roughly what you might expect: the voxelized scene is essentially 'reflected' perpendicularly on each surface:
Now if I increase the aperture angle to, for example, 30 degrees (apperture_angle=0.52), I get this really strange 'wavy'-looking result:
I would have expected a result much more similar to the earlier one, just less specular. Instead I mostly get the outline of each object reflected in a specular manner, with some occasional pixels inside the outline. Considering this is meant to be the 'indirect lighting' in the scene, it won't look good even once I add the direct light.
I have tried different values for max_dist, current_dist, etc., as well as shooting several cones instead of just one. The result remains similar, if not worse.
Does anyone know what I'm doing wrong here, and how to get even remotely realistic indirect light?
I suspect that the textureLod function somehow yields the wrong result for any LOD levels above 0, but I haven't been able to confirm this.

The mipmaps of the 3D texture were not being generated correctly.
In addition, there was no hard cap on vlevel, so every textureLod call that accessed a mipmap level above 1 returned #000000.
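For anyone running into the same thing: after filling the voxel texture, the mip chain of the 3D texture has to be generated (for example with glGenerateMipmap(GL_TEXTURE_3D), or by downsampling it yourself), and the sampled LOD should be clamped to the levels that actually exist. A minimal sketch of such a clamp inside voxelTraceCone, assuming the 128^3 voxel map from above (so the highest mip level is log2(128) = 7):
float max_level = log2(vox_size); //7.0 for a 128^3 voxel map
float vlevel = clamp(log2(current_coneDiameter * vox_size), 0.0f, max_level);
vec4 voxel = textureLod(tex3D, pos_texturespace, vlevel); //never samples a level that doesn't exist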

Related

How to handle lighting (ambient, diffuse, specular) for point spheres in OpenGL

Initial situation
I want to visualize simulation data in OpenGL.
My data consists of particle positions (x, y, z) where each particle has some properties (like density, temperature, ...) which will be used for coloring. Those (SPH) particles (100k to several million), grouped together, actually represent planets, in case you were wondering. I want to render those particles as small 3D spheres and add ambient, diffuse and specular lighting.
Status quo and questions
In MY case: In which coordinate frame do I do the lighting calculations? Which way is the "best" to pass the various components through the pipeline?
I saw that it is common to do it in view space, which is also very intuitive. However: the normals at the different fragment positions are calculated in the fragment shader in clip space coordinates (see the appended fragment shader). Can I actually convert them "back" into view space to do the lighting calculations in view space for all the fragments? Would there be any advantage compared to doing it in clip space?
It would be easier to get the normals in view space if I used meshes for each sphere, but I think with several million particles this would decrease performance drastically, so better to do it with sphere intersection, would you agree?
PS: I don't need a model matrix since all the particles are already in place.
//VERTEX SHADER
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 2) in float density;
uniform float radius;
uniform vec3 lightPos;
uniform vec3 viewPos;
out vec4 lightDir;
out vec4 viewDir;
out vec4 viewPosition;
out vec4 posClip;
out float vertexColor;
// transformation matrices
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
lightDir = projection * view * vec4(lightPos - position, 1.0f);
viewDir = projection * view * vec4(viewPos - position, 1.0f);
viewPosition = projection * view * vec4(lightPos, 1.0f);
posClip = projection * view * vec4(position, 1.0f);
gl_Position = posClip;
gl_PointSize = radius;
vertexColor = density;
}
I know that perspective division happens for the gl_Position variable, but does that actually happen to ALL vec4's which are passed from the vertex to the fragment shader? If not, maybe the calculations in the fragment shader would be wrong?
And the fragment shader, where the normals and the diffuse/specular lighting calculations are done in clip space:
//FRAGMENT SHADER
#version 330 core
in float vertexColor;
in vec4 lightDir;
in vec4 viewDir;
in vec4 posClip;
in vec4 viewPosition;
uniform vec3 lightColor;
vec4 colormap(float x); // returns vec4(r, g, b, a)
out vec4 vFragColor;
void main(void)
{
// AMBIENT LIGHT
float ambientStrength = 0.0;
vec3 ambient = ambientStrength * lightColor;
// Normal calculation done in clip space (first from texture (gl_PointCoord 0 to 1) coord to NDC( -1 to 1))
vec3 normal;
normal.xy = gl_PointCoord * 2.0 - vec2(1.0); // transform from 0->1 point primitive coords to NDC -1->1
float mag = dot(normal.xy, normal.xy); // squared length; comparing it against 1 avoids a sqrt
if (mag > 1.0) // discard fragments outside sphere
discard;
normal.z = sqrt(1.0 - mag); // because x^2 + y^2 + z^2 = 1
// DIFFUSE LIGHT
float diff = max(0.0, dot(vec3(lightDir), normal));
vec3 diffuse = diff * lightColor;
// SPECULAR LIGHT
float specularStrength = 0.1;
vec3 viewDir = normalize(vec3(viewPosition) - vec3(posClip));
vec3 reflectDir = reflect(-vec3(lightDir), normal);
float shininess = 64;
float spec = pow(max(dot(vec3(viewDir), vec3(reflectDir)), 0.0), shininess);
vec3 specular = specularStrength * spec * lightColor;
vFragColor = colormap(vertexColor / 8) * vec4(ambient + diffuse + specular, 1);
}
Now this actually "kind of" works but i have the feeling that also the sides of the sphere which do NOT face the light source are being illuminated, which shouldn't happen. How can I fix this?
Some weird effect: In this moment the light source is actually BEHIND the left planet (it just peaks out a little bit at the top left), bit still there are diffuse and specular effects going on. This side should be actually pretty dark! =(
Also at this moment I get some glError: 1282 error in the fragment shader and I don't know where it comes from since the shader program actually compiles and runs, any suggestions? :)
The things that you are drawing aren't actually spheres. They just look like them from afar. This is absolutely ok if you are fine with that. If you need geometrically correct spheres (with correct sizes and with a correct projection), you need to do proper raycasting. This seems to be a comprehensive guide on this topic.
1. What coordinate system?
In the end, it is up to you. The coordinate system just needs to fulfill some requirements. It must be angle-preserving (because lighting is all about angles). And if you need distance-based attenuation, it should also be distance-preserving. The world and the view coordinate systems usually fulfill these requirements. Clip space is not suited for lighting calculations as neither angles nor distances are preserved. Furthermore, gl_PointCoord is in none of the usual coordinate systems. It is its own coordinate system and you should only use it together with other coordinate systems if you know their relation.
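As an illustration only (the names below are placeholders, not taken from the question's shaders): a minimal sketch of a diffuse term evaluated entirely in view space, assuming the vertex shader forwards view-space quantities and the light position is transformed into view space on the CPU.
//Hypothetical view-space inputs, not from the original shaders
in vec3 fragPos_vs; //view-space fragment position, e.g. (view * vec4(position, 1.0)).xyz
in vec3 normal_vs; //view-space normal, e.g. mat3(view) * normal (fine without non-uniform scaling)
uniform vec3 lightPos_vs; //light position already transformed into view space
uniform vec3 lightColor;
vec3 diffuseViewSpace()
{
vec3 N = normalize(normal_vs);
vec3 L = normalize(lightPos_vs - fragPos_vs);
return max(dot(N, L), 0.0) * lightColor; //angles (and distances) are preserved in view space
}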
2. Meshes or what?
Meshes are absolutely not suited to render spheres. As mentioned above, raycasting or some screen-space approximation are better choices. Here is an example shader that I used in my projects:
#version 330
out vec4 result;
in fData
{
vec4 toPixel; //fragment coordinate in particle coordinates
vec4 cam; //camera position in particle coordinates
vec4 color; //sphere color
float radius; //sphere radius
} frag;
uniform mat4 p; //projection matrix
void main(void)
{
vec3 v = frag.toPixel.xyz - frag.cam.xyz;
vec3 e = frag.cam.xyz;
float ev = dot(e, v);
float vv = dot(v, v);
float ee = dot(e, e);
float rr = frag.radius * frag.radius;
float radicand = ev * ev - vv * (ee - rr);
if(radicand < 0)
discard;
float rt = sqrt(radicand);
float lambda = max(0, (-ev - rt) / vv); //first intersection on the ray
float lambda2 = (-ev + rt) / vv; //second intersection on the ray
if(lambda2 < lambda) //if the first intersection is behind the camera
discard;
vec3 hit = lambda * v; //intersection point
vec3 normal = (frag.cam.xyz + hit) / frag.radius;
vec4 proj = p * vec4(hit, 1); //intersection point in clip space
gl_FragDepth = ((gl_DepthRange.diff * proj.z / proj.w) + gl_DepthRange.near + gl_DepthRange.far) / 2.0;
vec3 vNormalized = -normalize(v);
float nDotL = dot(vNormalized, normal);
vec3 c = frag.color.rgb * nDotL + vec3(0.5, 0.5, 0.5) * pow(nDotL, 120);
result = vec4(c, frag.color.a);
}
3. Perspective division
Perspective division is not applied to your attributes. The GPU does perspective division on the data that you pass via gl_Position on the way to transforming it to screen space. But you will never actually see this perspective-divided position unless you do the division yourself.
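To make that concrete, a small sketch: a clip-space position passed as a varying (like posClip in the question) arrives in the fragment shader with its w component untouched, and you perform the division yourself if you want NDC.
vec3 toNdc(vec4 clipPos) //e.g. called as toNdc(posClip)
{
return clipPos.xyz / clipPos.w; //the division the GPU only performs on gl_Position
}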
4. Light in the dark
This might be the result of you mixing different coordinate systems or doing lighting calculations in clip space. Btw, the specular part is usually not multiplied by the material color. This is light that gets reflected directly at the surface. It does not penetrate the surface (which would absorb some colors depending on the material). That's why those highlights are usually white (or whatever light color you have), even on black objects.
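For that last point, a hedged sketch of how the question's final line could be rearranged so the highlight keeps the light color instead of being tinted by the material:
vec4 albedo = colormap(vertexColor / 8.0);
vFragColor = vec4(albedo.rgb * (ambient + diffuse) + specular, albedo.a);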

OpenGL Computing Normals and TBN Matrix from Depth Buffer (SSAO implementation)

I'm implementing SSAO in OpenGL, following this tutorial: Jhon Chapman SSAO
Basically, the technique described uses a hemispheric kernel which is oriented along the fragment's normal. The view-space z position of each sample is then compared to its screen-space depth buffer value.
If the value in the depth buffer is higher, it means the sample ended up inside geometry, so this fragment should be occluded.
The goal of this technique is to get rid of the classic implementation artifact where objects' flat faces are greyed out.
I have the same implementation with two small differences:
I'm not using a Noise texture to rotate my kernel, so I have banding artifacts, that's fine for now
I don't have access to a buffer with Per-pixel normals, so I have to compute my normal and TBN matrix only using the depth buffer.
The algorithm seems to be working fine, I can see the fragments being occluded, BUT I still have my faces greyed out...
IMO it's coming from the way I'm calculating my TBN matrix. The normals look OK, but something must be wrong, as my kernel doesn't seem to be properly aligned, causing samples to end up inside the faces.
Screenshots are with a kernel of 8 samples and a radius of 0.1; the first is the result of the SSAO pass only, and the second is a debug render of the generated normals.
Here is the code for the function that computes the Normal and TBN Matrix
mat3 computeTBNMatrixFromDepth(in sampler2D depthTex, in vec2 uv)
{
// Compute the normal and TBN matrix
float ld = -getLinearDepth(depthTex, uv);
vec3 x = vec3(uv.x, 0., ld);
vec3 y = vec3(0., uv.y, ld);
x = dFdx(x);
y = dFdy(y);
x = normalize(x);
y = normalize(y);
vec3 normal = normalize(cross(x, y));
return mat3(x, y, normal);
}
And the SSAO shader
#include "helper.glsl"
in vec2 vertTexcoord;
uniform sampler2D depthTex;
const int MAX_KERNEL_SIZE = 8;
uniform vec4 gKernel[MAX_KERNEL_SIZE];
// Kernel Radius in view space (meters)
const float KERNEL_RADIUS = .1;
uniform mat4 cameraProjectionMatrix;
uniform mat4 cameraProjectionMatrixInverse;
out vec4 FragColor;
void main()
{
// Get the current depth of the current pixel from the depth buffer (stored in the red channel)
float originDepth = texture(depthTex, vertTexcoord).r;
// Debug linear depth. Depth buffer is in the range [0,1];
float oLinearDepth = getLinearDepth(depthTex, vertTexcoord);
// Compute the view space position of this point from its depth value
vec4 viewport = vec4(0,0,1,1);
vec3 originPosition = getViewSpaceFromWindow(cameraProjectionMatrix, cameraProjectionMatrixInverse, viewport, vertTexcoord, originDepth);
mat3 lookAt = computeTBNMatrixFromDepth(depthTex, vertTexcoord);
vec3 normal = lookAt[2];
float occlusion = 0.;
for (int i=0; i<MAX_KERNEL_SIZE; i++)
{
// We align the Kernel Hemisphere on the fragment normal by multiplying all samples by the TBN
vec3 samplePosition = lookAt * gKernel[i].xyz;
// We want the sample position in View Space and we scale it with the kernel radius
samplePosition = originPosition + samplePosition * KERNEL_RADIUS;
// Now we need to get sample position in screen space
vec4 sampleOffset = vec4(samplePosition.xyz, 1.0);
sampleOffset = cameraProjectionMatrix * sampleOffset;
sampleOffset.xyz /= sampleOffset.w;
// Now to get the depth buffer value at the projected sample position
sampleOffset.xyz = sampleOffset.xyz * 0.5 + 0.5;
// Now can get the linear depth of the sample
float sampleOffsetLinearDepth = -getLinearDepth(depthTex, sampleOffset.xy);
// Now we need to do a range check to make sure that objects
// outside of the kernel radius are not taken into account
float rangeCheck = abs(originPosition.z - sampleOffsetLinearDepth) < KERNEL_RADIUS ? 1.0 : 0.0;
// If the stored depth is in front of the sample, the sample is occluded
occlusion += (sampleOffsetLinearDepth >= samplePosition.z ? 1.0 : 0.0) * rangeCheck;
}
occlusion = 1.0 - (occlusion / MAX_KERNEL_SIZE);
FragColor = vec4(vec3(occlusion), 1.0);
}
Update 1
This variation of the TBN calculation function gives the same results
mat3 computeTBNMatrixFromDepth(in sampler2D depthTex, in vec2 uv)
{
// Compute the normal and TBN matrix
float ld = -getLinearDepth(depthTex, uv);
vec3 a = vec3(uv, ld);
vec3 x = vec3(uv.x + dFdx(uv.x), uv.y, ld + dFdx(ld));
vec3 y = vec3(uv.x, uv.y + dFdy(uv.y), ld + dFdy(ld));
//x = dFdx(x);
//y = dFdy(y);
//x = normalize(x);
//y = normalize(y);
vec3 normal = normalize(cross(x - a, y - a));
vec3 first_axis = cross(normal, vec3(1.0f, 0.0f, 0.0f));
vec3 second_axis = cross(first_axis, normal);
return mat3(normalize(first_axis), normalize(second_axis), normal);
}
I think the problem is probably that you are mixing coordinate systems. You are using texture coordinates in combination with the linear depth. Imagine two vertical surfaces facing slightly to the left of the screen. Both have the same angle from the vertical plane and should thus have the same normal, right?
But let's then imagine that one of these surfaces is much further from the camera. Since the dFdx/dFdy functions basically tell you the difference from the neighboring pixel, the surface far away from the camera will have a greater linear depth difference over one pixel than the surface close to the camera. But the uv.x / uv.y derivatives will have the same value. That means that you will get different normals depending on the distance from the camera.
The solution is to calculate the view coordinate and use the derivative of that to calculate the normal.
vec3 viewFromDepth(in sampler2D depthTex, in vec2 uv, in vec3 view)
{
float ld = -getLinearDepth(depthTex, uv);
/// I assume ld is negative for fragments in front of the camera
/// not sure how getLinearDepth is implemented
vec3 z_scaled_view = (view / view.z) * ld;
return z_scaled_view;
}
mat3 computeTBNMatrixFromDepth(in sampler2D depthTex, in vec2 uv, in vec3 view)
{
vec3 viewPos = viewFromDepth(depthTex, uv, view); //use a new name so the 'view' parameter isn't shadowed
vec3 view_normal = normalize(cross(dFdx(viewPos), dFdy(viewPos)));
vec3 first_axis = cross(view_normal, vec3(1.0f, 0.0f, 0.0f));
vec3 second_axis = cross(first_axis, view_normal);
return mat3(normalize(first_axis), normalize(second_axis), view_normal); //normal in the last column so lookAt[2] still returns it
}
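A hypothetical call site in the SSAO pass, assuming the view-space position reconstructed from depth (originPosition in the question) is reused as the per-pixel view ray, and that the normal sits in the last column of the returned matrix as in the Update 1 variant:
mat3 lookAt = computeTBNMatrixFromDepth(depthTex, vertTexcoord, originPosition);
vec3 normal = lookAt[2]; //normal in the last column, so the kernel's z axis stays aligned with it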

Triplanar texturing in glsl

I followed a paper called "GPU Based Algorithms for Terrain Texturing" and it says the following:
The main algorithm to apply triplanar texturing is fairly simple.
First, we check whether the slope is relatively large in the same way
that we do with slope based texturing. These regions with high slope
will be the only regions affected by the algorithm. We then check what
the larger component of the normal is, out of x and z. If x is the
larger component, we use the geometry z coordinate as the texture
coordinate s, and the geometry y coordinate as the texture coordinate
t. If z is the larger component, we use the geometry x coordinate as
the texture coordinate s, and the geometry y coordinate as the texture
coordinate t.
So I tried to implement it. This is my heightmap:
Note that I added white lines at the borders just for the experiment, so now I have maximum-height walls surrounding my map.
Now, following the article, here's the implementation in the vertex shader:
#version 430
uniform mat4 ProjectionMatrix;
uniform mat4 CameraMatrix;
uniform vec3 scale;
layout(location = 0) in vec3 vertex;
layout(location = 1) in vec3 normal;
out vec3 fsVertex;
out vec3 fsNormal;
out vec2 fsUvs;
void main()
{
fsVertex = vertex;
fsNormal = normalize(normal);
if(fsNormal.y < 0.75) {
if(fsNormal.x > fsNormal.z)
fsUvs = vertex.zy * scale.zy;
else
fsUvs = vertex.xy * scale.xy;
}
else
fsUvs = vertex.xz * scale.xz;
gl_Position = ProjectionMatrix * CameraMatrix * vec4(vertex * scale, 1.0);
}
Here's the fragment shader, if it helps.
This is what I get:
Here's a further look, for proportion.
The top and left walls (of the heightmap) are rendered OK, but the bottom and right walls still suffer from stretching. I also get these weird stretched spots next to the beginning of the walls.
What could be the cause of this?
If you want to check whether the normal's x or z coordinate is larger, you should use the abs function:
if(abs(fsNormal.x) > abs(fsNormal.z))
Furthermore, the y > 0.75 check seems like a coarse approximation, which is probably good enough in most cases. Actually, the maximum of abs(x), abs(y), abs(z) gives you the correct plane.
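A sketch of that selection in the question's vertex shader (same variable names), picking the projection plane from the largest absolute component instead of the y > 0.75 threshold:
vec3 a = abs(fsNormal);
if(a.y >= a.x && a.y >= a.z)
fsUvs = vertex.xz * scale.xz; //top/bottom faces project onto XZ
else if(a.x >= a.z)
fsUvs = vertex.zy * scale.zy; //X-facing walls project onto ZY
else
fsUvs = vertex.xy * scale.xy; //Z-facing walls project onto XY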
Here is a DX11/HLSL implementation I used. A GLSL conversion should be easy.
With the exponent value you can tune the blending speed at the borders. I used something like 3.
float3 SampleTriplanarTexture(Texture2D<float4> tex1, Texture2D<float4> tex2, Texture2D<float4> tex3, float3 normal, float3 pos, float exponent)
{
//triplanar projection
float mXY = pow(abs(normal.z), exponent);
float mXZ = pow(abs(normal.y), exponent);
float mYZ = pow(abs(normal.x), exponent);
float total = 1.0f / (mXY + mXZ + mYZ);
mXY *= total;
mXZ *= total;
mYZ *= total;
return tex1.SampleLevel(linearSampler2, pos.xz, 0) * mXZ +
tex2.SampleLevel(linearSampler2, pos.xy, 0) * mXY +
tex3.SampleLevel(linearSampler2, pos.yz, 0) * mYZ;
}
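A rough GLSL equivalent of the function above (the sampler and function names here are made up, and a regular texture() call with mipmapping would work just as well as the explicit LOD):
vec3 sampleTriplanarTexture(sampler2D texTop, sampler2D texFront, sampler2D texSide, vec3 normal, vec3 pos, float exponent)
{
//triplanar projection
float mXY = pow(abs(normal.z), exponent);
float mXZ = pow(abs(normal.y), exponent);
float mYZ = pow(abs(normal.x), exponent);
float total = 1.0 / (mXY + mXZ + mYZ);
mXY *= total;
mXZ *= total;
mYZ *= total;
return textureLod(texTop, pos.xz, 0.0).rgb * mXZ +
textureLod(texFront, pos.xy, 0.0).rgb * mXY +
textureLod(texSide, pos.yz, 0.0).rgb * mYZ;
}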

Parallax mapping - only works in one direction

I'm working on parallax mapping (from this tutorial: http://sunandblackcat.com/tipFullView.php?topicid=28) and I seem to only get good results when I move along one axis (e.g. left-to-right) while looking at a parallaxed quad. The image below illustrates this:
You can see it clearly at the left and right steep edges. If I'm moving to the right, the right steep edge should have less width than the left one (which looks correct in the left image) [the camera is at the right side of the cube]. However, if I move along a different axis (instead of west to east I now move top to bottom), you can see that this time the steep edges are incorrect [the camera is again on the right side of the cube].
I'm using the most simple form of parallax mapping and even that has the same problems. The fragment shader looks like this:
void main()
{
vec2 texCoords = fs_in.TexCoords;
vec3 viewDir = normalize(viewPos - fs_in.FragPos);
vec3 V = normalize(fs_in.TBN * viewDir);
vec3 L = normalize(fs_in.TBN * lightDir);
float height = texture(texture_height, texCoords).r;
float scale = 0.2;
vec2 texCoordsOffset = scale * V.xy * height;
texCoords += texCoordsOffset;
// calculate diffuse lighting
vec3 N = texture(texture_normal, texCoords).rgb * 2.0 - 1.0;
N = normalize(N); // normal already in tangent-space
vec3 ambient = vec3(0.2f);
float diff = clamp(dot(N, L), 0, 1);
vec3 diffuse = texture(texture_diffuse, texCoords).rgb * diff;
vec3 R = reflect(L, N);
float spec = pow(max(dot(R, V), 0.0), 32);
vec3 specular = vec3(spec);
fragColor = vec4(ambient + diffuse + specular, 1.0);
}
TBN matrix is created as follows in the vertex shader:
vs_out.TBN = transpose(mat3(normalize(tangent), normalize(bitangent), normalize(vs_out.Normal)));
I use the transpose of the TBN to transform all relevant vectors to tangent space. Without offsetting the TexCoords, the lighting looks solid with the normal-mapped texture, so my guess is that it's not the TBN matrix that's causing the issues. What could be causing it to only work in one direction?
edit
Interestingly, if I invert the y coordinate of the TexCoords input variable, parallax mapping seems to work. I have no idea why this works, though, and I need it to work without the inversion.
vec2 texCoords = vec2(fs_in.TexCoords.x, 1.0 - fs_in.TexCoords.y);

How to improve the quality of my shadows?

First, a screenshot:
As you can see, the tops of the shadows look OK (if you look at the dirt where the tops of the shrubs are projected, it looks more or less correct), but the base of the shadows is way off.
The bottom left corner of the image shows the shadow map I computed. It's a depth-map from the POV of the light, which is also where my character is standing.
Here's another shot, from a different angle:
Any ideas what might be causing it to come out like this? Is the depth of the shrub face too similar to the depth of the ground directly behind it, perhaps? If so, how do I get around that?
I'll post the fragment shader below, leave a comment if there's anything else you need to see.
Fragment Shader
#version 330
in vec2 TexCoord0;
in vec3 Tint0;
in vec4 WorldPos;
in vec4 LightPos;
out vec4 FragColor;
uniform sampler2D TexSampler;
uniform sampler2D ShadowSampler;
uniform bool Blend;
const int MAX_LIGHTS = 16;
uniform int NumLights;
uniform vec3 Lights[MAX_LIGHTS];
const float lightRadius = 100;
float distSq(vec3 v1, vec3 v2) {
vec3 d = v1-v2;
return dot(d,d);
}
float CalcShadowFactor(vec4 LightSpacePos)
{
vec3 ProjCoords = LightSpacePos.xyz / LightSpacePos.w;
vec2 UVCoords;
UVCoords.x = 0.5 * ProjCoords.x + 0.5;
UVCoords.y = 0.5 * ProjCoords.y + 0.5;
float Depth = texture(ShadowSampler, UVCoords).x;
if (Depth < (ProjCoords.z + 0.0001))
return 0.5;
else
return 1.0;
}
void main()
{
float scale;
FragColor = texture2D(TexSampler, TexCoord0.xy);
// transparency
if(!Blend && FragColor.a < 0.5) discard;
// biome blending
FragColor *= vec4(Tint0, 1.0f);
// fog
float depth = gl_FragCoord.z / gl_FragCoord.w;
if(depth>20) {
scale = clamp(1.2-15/(depth-19),0,1);
vec3 destColor = vec3(0.671,0.792,1.00);
vec3 colorDist = destColor - FragColor.xyz;
FragColor.xyz += colorDist*scale;
}
// lighting
scale = 0.30;
for(int i=0; i<NumLights; ++i) {
float dist = distSq(WorldPos.xyz, Lights[i]);
if(dist < lightRadius) {
scale += (lightRadius-dist)/lightRadius;
}
}
scale *= CalcShadowFactor(LightPos);
FragColor.xyz *= clamp(scale,0,1.5);
}
I'm fairly certain this is an offset problem. My shadows look to be about 1 block off, but I can't figure out how to shift them, nor what's causing them to be off.
Looks like "depth map bias" actually:
Not exactly sure how to set this... do I just call glPolygonOffset before rendering the scene? Will try it...
Setting glPolygonOffset to 100,100 amplifies the problem:
I set this just before rendering the shadow map:
GL.Enable(EnableCap.PolygonOffsetFill);
GL.PolygonOffset(100f, 100.0f);
And then disabled it again. I'm not sure if that's how I'm supposed to do it. Increasing the values amplifies the problem... decreasing them to below 1 doesn't seem to improve it, though.
Notice also how the shadow map in the lower left changed.
Vertex Shader
#version 330
layout(location = 0) in vec3 Position;
layout(location = 1) in vec2 TexCoord;
layout(location = 2) in mat4 Transform;
layout(location = 6) in vec4 TexSrc; // x=x, y=y, z=width, w=height
layout(location = 7) in vec3 Tint; // x=R, y=G, z=B
uniform mat4 ProjectionMatrix;
uniform mat4 LightMatrix;
out vec2 TexCoord0;
out vec3 Tint0;
out vec4 WorldPos;
out vec4 LightPos;
void main()
{
WorldPos = Transform * vec4(Position, 1.0);
gl_Position = ProjectionMatrix * WorldPos;
LightPos = LightMatrix * WorldPos;
TexCoord0 = vec2(TexSrc.x+TexCoord.x*TexSrc.z, TexSrc.y+TexCoord.y*TexSrc.w);
Tint0 = Tint;
}
While world-aligned cascaded shadow maps are great and used in most new games out there, they're not related to why your shadows have a strange offset in your current implementation.
Actually, it looks like you're sampling from the correct texels of the shadow map (the shadows are appearing exactly where you'd expect them to be); however, your comparison is off.
I've added some comments to your code:
vec3 ProjCoords = LightSpacePos.xyz / LightSpacePos.w; // So far so good...
vec2 UVCoords;
UVCoords.x = 0.5 * ProjCoords.x + 0.5; // Right, you're converting X and Y from clip
UVCoords.y = 0.5 * ProjCoords.y + 0.5; // space to texel space...
float Depth = texture(ShadowSampler, UVCoords).x; // I expect we sample a value in [0,1]
if (Depth < (ProjCoords.z + 0.0001)) // Uhoh, we never converted Z's clip space to [0,1]
return 0.5;
else
return 1.0;
So, I suspect you want to compare to ProjCoords.z * 0.5 + 0.5:
if (Depth < (ProjCoords.z * 0.5 + 0.5 + 0.0001))
return 0.5;
else
return 1.0;
Also, that bias factor makes me nervous. Better yet, just take it out for now and deal with it once you get the shadows appearing in the right spots:
const float bias = 0.0;
if (Depth < (ProjCoords.z * 0.5 + 0.5 + bias))
return 0.5;
else
return 1.0;
I might not be entirely right about how to transform ProjCoords.z to match the sampled value; however, this is likely the issue. Also, if you do move to cascaded shadow maps (I recommend world-aligned), I'd strongly recommend drawing frustums representing what each shadow map is viewing -- it makes debugging a whole lot easier.
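Putting the two suggestions together, CalcShadowFactor from the question might then look roughly like this (a sketch, with the bias held at zero for now):
float CalcShadowFactor(vec4 LightSpacePos)
{
vec3 ProjCoords = LightSpacePos.xyz / LightSpacePos.w;
vec3 UVZ = ProjCoords * 0.5 + 0.5; //map x, y AND z from [-1,1] to [0,1]
float Depth = texture(ShadowSampler, UVZ.xy).x;
const float bias = 0.0;
if (Depth < UVZ.z + bias)
return 0.5;
else
return 1.0;
}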
This is called the "deer in headlights" effect of buffer mapped shadows. There are a several ways to minimize this effect. Look for "light space shadow mapping".
The NVIDIA OpenGL SDK has a "cascaded shadow maps" example. You might want to check it out (I haven't used it myself, though).
How to improve the quality of my shadows?
The problem could be caused by using an incorrect matrix while rendering the shadows. Your example doesn't demonstrate how the light matrices are set. By Murphy's law I'll have to assume that the bug lies in this missing piece of code: since you decided that this part isn't important, it probably causes the problem. If the matrix used while testing the shadow is different from the matrix used to render the shadow map, you'll get exactly this problem.
I suggest forgetting about the whole Minecraft thing for a moment and playing around with shadows in a simple application. Make a standalone application with a floor plane and a rotating cube (or teapot or whatever you want), and debug shadow maps there until you get the hang of it. Since you're willing to throw a +100 bounty onto the question, you might as well post the complete code of your standalone sample here, if you still get the problem in the sample. Trying to stick technology you aren't familiar with into the middle of a working(?) engine isn't a good idea anyway. Take it slow, get used to the technique/technology/effect, then integrate it.