Stable Shadow Mapping - C++

I'm trying to stabilise the shadows in my 3D renderer. I'm using cascaded shadow maps (CSMs).
Here's the code I've got, making no attempt at stabilisation. At least the size of the projection in world space is constant:
void SkyLight::update() {
    // direction is the direction that the light is facing
    vec3 tangent = sq::make_tangent(direction);
    for (int i = 0; i < 4; i++) {
        // .first is the far plane; .second is a struct of 8 vec3s making a world-space camera frustum
        const std::pair<float, sq::Frustum>& csm = camera->csmArr[i];
        // calculates the bounding-box centre of the frustum
        vec3 frusCentre = sq::calc_frusCentre(csm.second);
        mat4 viewMat = glm::lookAt(frusCentre - direction, frusCentre, tangent);
        mat4 projMat = glm::ortho(-csm.first, csm.first, -csm.first, csm.first, -csm.first, csm.first);
        matArr[i] = projMat * viewMat;
    }
}
That works, but the shadows flicker and swim like mad. So here's an attempt at stabilising, by trying to snap them to texel-sized increments, as recommended everywhere but seemingly never explained:
void SkyLight::update() {
    vec3 tangent = sq::make_tangent(direction);
    for (int i = 0; i < 4; i++) {
        const std::pair<float, sq::Frustum>& csm = camera->csmArr[i];
        vec3 frusCentre = sq::calc_frusCentre(csm.second);
        double qStep = csm.first / 1024.0; // shadow texture size
        frusCentre.x = std::round(frusCentre.x / qStep) * qStep;
        frusCentre.y = std::round(frusCentre.y / qStep) * qStep;
        frusCentre.z = std::round(frusCentre.z / qStep) * qStep;
        mat4 viewMat = glm::lookAt(frusCentre - direction, frusCentre, tangent);
        mat4 projMat = glm::ortho(-csm.first, csm.first, -csm.first, csm.first, -csm.first, csm.first);
        matArr[i] = projMat * viewMat;
    }
}
This makes a difference: the shadows now swim slowly, rather than bouncing too fast to notice any pattern. However, I'm pretty sure that's just by chance, and that I'm not snapping to the right increment, or even snapping the right thing.

To fix this, I need to do the snapping in light space, not world space. Snap the translation column of the view matrix, where split is the cascade's half-extent (csm.first above) and texSize is the shadow map resolution:
viewMat[3][0] -= glm::mod(viewMat[3][0], 2.f * split / texSize);
viewMat[3][1] -= glm::mod(viewMat[3][1], 2.f * split / texSize);
viewMat[3][2] -= glm::mod(viewMat[3][2], 2.f * split / texSize);
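Putting that together, here's a minimal sketch of how the stabilised update() might look, assuming the same members as above and a 1024×1024 texture per cascade:
void SkyLight::update() {
    vec3 tangent = sq::make_tangent(direction);
    for (int i = 0; i < 4; i++) {
        const std::pair<float, sq::Frustum>& csm = camera->csmArr[i];
        vec3 frusCentre = sq::calc_frusCentre(csm.second);
        mat4 viewMat = glm::lookAt(frusCentre - direction, frusCentre, tangent);
        // one texel spans 2 * split / texSize world units in light space,
        // because the ortho projection maps [-split, split] onto texSize texels
        const float texelSize = 2.f * csm.first / 1024.f;
        // snap the light-space translation, not the world-space frustum centre
        viewMat[3][0] -= glm::mod(viewMat[3][0], texelSize);
        viewMat[3][1] -= glm::mod(viewMat[3][1], texelSize);
        viewMat[3][2] -= glm::mod(viewMat[3][2], texelSize);
        mat4 projMat = glm::ortho(-csm.first, csm.first, -csm.first, csm.first, -csm.first, csm.first);
        matArr[i] = projMat * viewMat;
    }
}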
Old (Wrong) answer:
So, I revisited this today and managed to solve it in about 10 minutes :D
Just round frusCentre like this:
frusCentre -= glm::mod(frusCentre, 2.f * csm.first / 1024.f);
or, more generally:
frusCentre -= glm::mod(frusCentre, 2.f * split / texSize);
EDIT: No, that's no good... I'll keep trying.

Related

What is the best OpenGL self-shadowing technique?

I chose the quite old but sufficient method of shadow mapping, which is OK overall, but I quickly discovered some self-shadowing problems:
It seems this problem appears because of the bias offset, which is necessary to eliminate shadow acne artifacts.
After some googling, it seems that there is no easy solution to this, so I tried some shader tricks, which worked, but not very well.
My first idea was to compute the dot product of the light direction vector and the normal vector. If the result is below 0, the angle between the vectors is greater than 90 degrees, so the surface faces away from the light source and hence is not illuminated. This works well, except the shadows may appear too sharp and hard:
Since I was not satisfied with the results, I tried another trick: multiplying the shadow value by the absolute value of the dot product of the light direction and the normal vector (based on the normal map). It did work (the hard shadows from the previous image got a smooth transition from shadow to regular diffuse color), except it created another artifact in situations where the normal-map normal points somewhat towards the sun but the face normal does not. It also made self-shadows much brighter (but that is fixable):
Can I do something about it, or should I just choose the lesser evil?
Shadow shader code for example 1:
vec4 fragPosViewSpace = view * vec4(FragPos, 1.0);
float depthValue = abs(fragPosViewSpace.z);
vec4 fragPosLightSpace = lightSpaceMatrix * vec4(FragPos, 1.0);
vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;
// transform to [0,1] range
projCoords = projCoords * 0.5 + 0.5;
// get depth of current fragment from light's perspective
float currentDepth = projCoords.z;
// keep the shadow at 0.0 when outside the far_plane region of the light's frustum
if (currentDepth > 1.0)
{
    return 0.0;
}
// calculate bias (based on depth map resolution and slope)
float bias = max(0.005 * (1.0 - dot(normal, lightDir)), 0.0005);
vec2 texelSize = 1.0 / vec2(textureSize(material.texture_shadow, 0));
// PCF over a (2 * sampleRadius + 1)^2 neighbourhood
// (shadow, ambientShadow and layer are assumed to be declared elsewhere in the shader)
const int sampleRadius = 2;
const float sampleRadiusCount = pow(sampleRadius * 2 + 1, 2);
for (int x = -sampleRadius; x <= sampleRadius; ++x)
{
    for (int y = -sampleRadius; y <= sampleRadius; ++y)
    {
        float pcfDepth = texture(material.texture_shadow, vec3(projCoords.xy + vec2(x, y) * texelSize, layer)).r;
        shadow += (currentDepth - bias) > pcfDepth ? ambientShadow : 0.0;
    }
}
shadow /= sampleRadiusCount;
Hard self-shadows trick code:
float shadow = 0.0f;
float ambientShadow = 0.9f;
// "Normal" is the face normal vector, "normal" is calculated based on the normal map.
// I know there is a naming problem with that))
float faceNormalDot = dot(Normal, lightDir);
float vectorNormalDot = dot(normal, lightDir);
if (faceNormalDot <= 0 || vectorNormalDot <= 0)
{
    shadow = max(abs(vectorNormalDot), ambientShadow);
}
else
{
    vec4 fragPosViewSpace = view * vec4(FragPos, 1.0);
    float depthValue = abs(fragPosViewSpace.z);
    ...
}
Dot product multiplication trick code:
float shadow = 0.0f;
float ambientShadow = 0.9f;
float faceNormalDot = dot(Normal, lightDir);
float vectorNormalDot = dot(normal, lightDir);
if (faceNormalDot <= 0 || vectorNormalDot <= 0)
{
    shadow = ambientShadow * abs(vectorNormalDot);
}
else
{
    vec4 fragPosViewSpace = view * vec4(FragPos, 1.0);
    float depthValue = abs(fragPosViewSpace.z);
    ...

Adaptive depth bias for texture sampling

I have complex 3D scenes where the values in my depth buffer range from several centimeters for close shots to several kilometers.
For various effects I use a depth bias, an offset to circumvent some artifacts (SSAO, shadows). Even during depth peeling, comparing depths between the current peel and the previous one can cause issues.
I have fixed those issues for close-up shots, but when the fragment is far enough away, the bias becomes inadequate.
I am wondering how to tackle the bias for such scenes. Something like a bias depending on the current world depth of the current pixel, or maybe completely disabling the effect beyond a given depth?
Are there good practices regarding these issues, and how can I address them?
It seems I found a way. I found this link about shadow bias:
https://digitalrune.github.io/DigitalRune-Documentation/html/3f4d959e-9c98-4a97-8d85-7a73c26145d7.htm
Depth bias and normal offset values are specified in shadow map texels. For example, depth bias = 3 means that the pixel is moved the length of 3 shadow map texels closer to the light. By keeping the bias proportional to the projected shadow map texels, the same settings work at all distances.
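As a rough illustration of what a bias in texels works out to for a directional light's orthographic projection (the names here are hypothetical, not from the link):
// Sketch: convert a bias expressed in shadow-map texels into world units.
// The ortho projection maps [-halfExtent, halfExtent] onto the shadow map,
// so one texel spans (2 * halfExtent / resolution) world units.
float texelWorldSize(float halfExtent, int resolution) {
    return 2.0f * halfExtent / float(resolution);
}
float depthBiasWorld(float biasInTexels, float halfExtent, int resolution) {
    return biasInTexels * texelWorldSize(halfExtent, resolution);
}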
I use the difference in world space between the current pixel and a neighboring pixel with the same depth component. The bias becomes something close to "the average distance between two neighboring pixels". The further away the pixel is, the larger the bias will be (from a few millimeters close to the near plane to meters at the far plane).
So for each of my sampling points, I offset its position by some pixels in the x direction (3 pixels gives me good results on various scenes).
I compute the world-space difference between the currentPoint and this new offsetedPoint.
I use this difference as a bias for all my depth testing.
Code:
float compute_depth_offset() {
    mat4 inv_mvp = inverse(mvp);
    vec2 currentPixel = vec2(gl_FragCoord.xy) / dim;
    vec2 nextPixel = vec2(gl_FragCoord.xy + vec2(depth_transparency_bias, 0.0)) / dim;
    vec4 currentNDC;
    vec4 nextNDC;
    currentNDC.xy = currentPixel * 2.0 - 1.0;
    currentNDC.z = (2.0 * gl_FragCoord.z - depth_range.near - depth_range.far) / (depth_range.far - depth_range.near);
    currentNDC.w = 1.0;
    nextNDC.xy = nextPixel * 2.0 - 1.0;
    nextNDC.z = currentNDC.z;
    nextNDC.w = currentNDC.w;
    // unproject both points to world space
    vec4 world = inv_mvp * currentNDC;
    world.xyz = world.xyz / world.w;
    vec4 nextWorld = inv_mvp * nextNDC;
    nextWorld.xyz = nextWorld.xyz / nextWorld.w;
    // world-space distance between the two pixels becomes the bias
    return length(nextWorld.xyz - world.xyz);
}
More recently, I used only the world-space derivative of the current pixel's position:
float compute_depth_offset(float zNear, float zFar)
{
    mat4 mvp = projection * modelView;
    mat4 inv_mvp = inverse(mvp);
    vec2 currentPixel = vec2(gl_FragCoord.xy) / dim;
    vec4 currentNDC;
    currentNDC.xy = currentPixel * 2.0 - 1.0;
    currentNDC.z = (2.0 * gl_FragCoord.z - 0.0 - 1.0) / (1.0 - 0.0); // assumes a [0, 1] depth range
    currentNDC.w = 1.0;
    // unproject the current fragment to world space
    vec4 world = inv_mvp * currentNDC;
    world.xyz = world.xyz / world.w;
    // screen-space derivatives give the world-space footprint of one pixel
    vec3 depth = max(abs(dFdx(world.xyz)), abs(dFdy(world.xyz)));
    return depth.x + depth.y + depth.z;
}

GLSL strange behavior with frag shader and player movement

I'm creating a 2D top-down game in SFML where I would like the player to only be able to see things in their FOV of 45 degrees. Currently my fragment shader looks as follows:
uniform sampler2D texture;
uniform vec2 pos;
uniform vec2 screenSize;
uniform float in_angle;

void main()
{
    vec2 fc = gl_FragCoord.xy / screenSize;
    vec2 ndcCoords = vec2(0.0);
    float fov = radians(45.0);
    ndcCoords = (pos + (screenSize / 2.0)) / screenSize;
    ndcCoords.y = abs(ndcCoords.y - 1.0);
    float angle = radians(-in_angle + 90.0 + 45.0);
    float coT;
    float siT;
    vec2 adj = vec2(0.0);
    coT = cos(angle);
    siT = sin(angle);
    adj.x = coT * ndcCoords.x - siT * ndcCoords.y;
    adj.y = siT * ndcCoords.x + coT * ndcCoords.y;
    vec2 diff = normalize(ndcCoords - fc);
    float dist = acos(dot(diff, normalize(adj)));
    vec3 color = vec3(0.0);
    if (dist < fov / 2.0)
    {
        color = vec3(1.0);
    }
    gl_FragColor = vec4(color, 1.0) * texture2D(texture, gl_TexCoord[0].xy);
}
What this does is adjust the playerPos vec2 and rotate it, so I can determine which fragcoords are within the player's FOV. However, when I move down without moving the mouse from directly above the player, the FOV shifts to the left/right without the player rotating at all. I've tried every solution I can think of, but I can't seem to stop it, nor can I find a solution to this online. Any suggestions would be appreciated.
A solution has arisen. Instead of trying to rotate the object to get its normalised vector2 direction, a simpler method is to calculate the angle between a frag and the player position, then take the difference by subtracting the player rotation in radians, as shown below. This can then be adjusted for the coordinate space and compared to the player's FOV:
void main()
{
    float fov = radians(45.0);
    float pi = radians(180.0);
    vec3 color = vec3(0.2);
    vec2 st = gl_FragCoord.xy / screenSize.xy;
    // work on a copy, since a uniform cannot be assigned to
    vec2 playerPos = (pos + (screenSize / 2.0)) / screenSize;
    // the two-argument arctangent is spelled atan(y, x) in GLSL
    float angleToObj = atan(st.y - playerPos.y, st.x - playerPos.x);
    float angleDiff = angleToObj - radians(-in_angle);
    // wrap the difference into [-pi, pi]
    if (angleDiff > pi)
        angleDiff -= 2.0 * pi;
    if (angleDiff < -pi)
        angleDiff += 2.0 * pi;
    if (abs(angleDiff) < fov / 2.0)
        color = vec3(1.0);
    gl_FragColor = vec4(color, 1.0) * texture2D(texture, gl_TexCoord[0].xy);
}

How to generate camera rays for ray casting

I am trying to make a simple voxel engine with OpenGL and C++. My first step is to send out rays from the camera and detect if a ray intersects with something (for testing purposes it's just two planes). I got it working without the camera rotating by creating a full-screen quad and programming the fragment shader to send out a ray for every fragment (for now I'm just assuming a fragment is a pixel) in the direction (texCoord.x, texCoord.y, -1). Now I am trying to implement camera rotation.
I have tried to generate a rotation matrix on the CPU and send it to the shader, which multiplies it with every ray. However, when I rotate the camera, the planes start to stretch in a way which I can only describe with this video:
https://www.youtube.com/watch?v=6NScMwnPe8c
Here is the code that creates the matrix and is run every frame:
float pi = 3.141592;
// camRotX and camRotY are defined elsewhere and can be controlled from the keyboard during runtime
glm::vec3 camEulerAngles = glm::vec3(camRotX, camRotY, 0);
std::cout << "X: " << camEulerAngles.x << " Y: " << camEulerAngles.y << "\n";
// Convert to radians
camEulerAngles.x = camEulerAngles.x * pi / 180;
camEulerAngles.y = camEulerAngles.y * pi / 180;
camEulerAngles.z = camEulerAngles.z * pi / 180;
// Generate quaternion
glm::quat camRotation;
camRotation = glm::quat(camEulerAngles);
// Generate rotation matrix from quaternion
glm::mat4 camToWorldMatrix = glm::toMat4(camRotation);
// No translation matrix is created because the rays should be relative to (0, 0, 0)
// Send the rotation matrix to the shader
int camTransformMatrixID = glGetUniformLocation(shader, "cameraTransformationMatrix");
glUniformMatrix4fv(camTransformMatrixID, 1, GL_FALSE, glm::value_ptr(camToWorldMatrix));
And the fragment shader:
#version 330 core

in vec4 texCoord;
layout(location = 0) out vec4 color;

uniform vec3 cameraPosition;
uniform vec3 cameraTR;
uniform vec3 cameraTL;
uniform vec3 cameraBR;
uniform vec3 cameraBL;
uniform mat4 cameraTransformationMatrix;
uniform float fov;
uniform float aspectRatio;

float pi = 3.141592;

int RayHitCell(vec3 origin, vec3 direction, vec3 cellPosition, float cellSize)
{
    if (direction.z != 0)
    {
        float multiplicationFactorFront = cellPosition.z - origin.z;
        if (multiplicationFactorFront > 0)
        {
            vec2 interceptFront = vec2(direction.x * multiplicationFactorFront + origin.x,
                                       direction.y * multiplicationFactorFront + origin.y);
            if (interceptFront.x > cellPosition.x && interceptFront.x < cellPosition.x + cellSize &&
                interceptFront.y > cellPosition.y && interceptFront.y < cellPosition.y + cellSize)
            {
                return 1;
            }
        }
        float multiplicationFactorBack = cellPosition.z + cellSize - origin.z;
        if (multiplicationFactorBack > 0)
        {
            vec2 interceptBack = vec2(direction.x * multiplicationFactorBack + origin.x,
                                      direction.y * multiplicationFactorBack + origin.y);
            if (interceptBack.x > cellPosition.x && interceptBack.x < cellPosition.x + cellSize &&
                interceptBack.y > cellPosition.y && interceptBack.y < cellPosition.y + cellSize)
            {
                return 2;
            }
        }
    }
    return 0;
}

void main()
{
    // For now I'm not accounting for FOV and aspect ratio because I want to get the rotation working first
    vec4 beforeRotateRayDirection = vec4(texCoord.x, texCoord.y, -1, 0);
    // Apply the rotation matrix that was generated on the CPU
    vec3 rayDirection = vec3(cameraTransformationMatrix * beforeRotateRayDirection);
    int t = RayHitCell(cameraPosition, rayDirection, vec3(0, 0, 5), 1);
    if (t == 1)
    {
        // Hit front plane
        color = vec4(0, 0, 1, 0);
    }
    else if (t == 2)
    {
        // Hit back plane
        color = vec4(0, 0, 0.5, 0);
    }
    else
    {
        // Background color
        color = vec4(0, 1, 0, 0);
    }
}
Okay. It's really hard to know what is wrong, but I will try nonetheless.
Here are a few tips and notes:
1) You can debug directions by mapping them to RGB color. Keep in mind you should normalize the vectors and map from (-1,1) to (0,1); just do the dir * 0.5 + 0.5 type of thing. Example:
color = vec4(normalize(rayDirection) * 0.5 + 0.5, 1.0);
2) You can get the rotation matrix in a more straightforward manner. Build the quaternion starting from the identity rotation: first rotate around the Y axis (horizontal look) and then, and only then, around the X axis (vertical look). Keep in mind that the rotation order is implementation-dependent if you initialize from Euler angles. Use mat4_cast to avoid the experimental GLM extension (gtx) whenever possible. Example:
// Define rotation quaternion, starting from the identity rotation
glm::quat camRotation = glm::quat(glm::vec3(0, 0, 0));
camRotation = glm::rotate(camRotation, glm::radians(camRotY), glm::vec3(0, 1, 0));
camRotation = glm::rotate(camRotation, glm::radians(camRotX), glm::vec3(1, 0, 0));
glm::mat4 camToWorldMatrix = glm::mat4_cast(camRotation);
3) Your beforeRotateRayDirection is a vector that (probably) points anywhere from (-1,-1,-1) to (1,1,-1), which is not normalized; the length of (1,1,1) is √3 ≈ 1.732. Be sure you have taken that into account in your collision math, or just normalize the vector.
My partial answer so far...
Your collision test is a bit weird. It appears you want to cast the ray onto the Z plane of the given cell position (but twice: once for the front and once for the back). I have reviewed your code logic and it makes some sense, but without the vertex program, and thus without knowing the range of the texCoord values, it is not possible to be sure. You might want to rethink your logic to something like this:
int RayHitCell(vec3 origin, vec3 direction, vec3 cellPosition, float cellSize)
{
    // Get triangle side vectors
    vec3 tu = vec3(cellSize, 0, 0); // Triangle U component
    vec3 tv = vec3(0, cellSize, 0); // Triangle V component
    // Determinant for inverse matrix
    vec3 q = cross(direction, tv);
    float det = dot(tu, q);
    //if (abs(det) < 0.0000001) // If too close to zero
    //    return 0;
    float invdet = 1.0 / det;
    // Solve component parameters
    vec3 s = origin - cellPosition;
    float u = dot(s, q) * invdet;
    if (u < 0.0 || u > 1.0)
        return 0;
    vec3 r = cross(s, tu);
    float v = dot(direction, r) * invdet;
    if (v < 0.0 || v > 1.0)
        return 0;
    float t = dot(tv, r) * invdet;
    if (t <= 0.0)
        return 0;
    return 1;
}
void main()
{
    // For now I'm not accounting for FOV and aspect ratio because I want to get the
    // rotation working first
    vec4 beforeRotateRayDirection = vec4(texCoord.x, texCoord.y, -1, 0);
    // Apply the rotation matrix that was generated on the CPU
    vec3 rayDirection = vec3(cameraTransformationMatrix * beforeRotateRayDirection);
    int t = RayHitCell(cameraPosition, normalize(rayDirection), vec3(0, 0, 5), 1);
    if (t == 1)
    {
        // Hit front plane
        color = vec4(0, 0, 1, 0);
    }
    else
    {
        // Background color
        color = vec4(0, 1, 0, 0);
    }
}
This should give you a plane; let me know if it works. A cube will be very easy to do.
PS: u and v can be used for texture mapping.

How to get a smooth result with RSM (Reflective Shadow Mapping)?

I'm trying to implement a Reflective Shadow Mapping program with Vulkan.
The problem is that I get a bad result:
As you can see, the result is not smooth.
Here I render, in a first pass, the position, normal and flux from the light's point of view into 3 textures with a resolution of 512 × 512.
In a second pass, I compute the indirect illumination from the first-pass textures according to this paper (http://www.klayge.org/material/3_12/GI/rsm.pdf):
for (int i = 0; i < 151; i++)
{
    vec4 rsmProjCoords = projCoords + vec4(rsmDiskSampling[i] * 0.09, 0.0, 0.0);
    vec3 indirectLightPos = texture(rsmPosition, rsmProjCoords.xy).rgb;
    vec3 indirectLightNorm = texture(rsmNormal, rsmProjCoords.xy).rgb;
    vec3 indirectLightFlux = texture(rsmFlux, rsmProjCoords.xy).rgb;
    vec3 r = worldPos - indirectLightPos;
    float distP2 = dot(r, r);
    vec3 emission = indirectLightFlux * (max(0.0, dot(indirectLightNorm, r)) * max(0.0, dot(N, -r)));
    emission *= rsmDiskSampling[i].x * rsmDiskSampling[i].x / (distP2 * distP2);
    indirectRSM += emission;
}
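For reference, the per-sample contribution this loop evaluates matches the irradiance formula from the paper, as I read it: a pixel light at position x_p with normal n_p and flux Φ_p contributes to a surface point x with normal n

E_p(x, n) = Φ_p · max(0, ⟨n_p | x − x_p⟩) · max(0, ⟨n | x_p − x⟩) / ‖x − x_p‖⁴

where r = x − x_p in the code and distP2 * distP2 is ‖x − x_p‖⁴. The rsmDiskSampling[i].x * rsmDiskSampling[i].x factor appears to be the per-sample weight compensating for the non-uniform density of the disk sampling described in the paper.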
The problem is fixed.
The main problem was the sampling: I was using linear sampling instead of nearest sampling:
samplerInfo.magFilter = VK_FILTER_NEAREST;
samplerInfo.minFilter = VK_FILTER_NEAREST;
Other problems were the number of VPLs used and the distance between them.