This is a vertex shader I'm currently working with:
attribute vec3 v_pos;
attribute vec4 v_color;
attribute vec2 v_uv;
attribute vec3 v_rotation; // [angle, x, y]
uniform mat4 modelview_mat;
uniform mat4 projection_mat;
varying vec4 frag_color;
varying vec2 uv_vec;
varying mat4 v_rotationMatrix;
void main (void) {
float cos = cos(v_rotation[0]);
float sin = sin(v_rotation[0]);
mat2 trans_rotate = mat2(
cos, -sin,
sin, cos
);
vec2 rotated = trans_rotate * vec2(v_pos[0] - v_rotation[1], v_pos[1] - v_rotation[2]);
gl_Position = projection_mat * modelview_mat * vec4(rotated[0] + v_rotation[1], rotated[1] + v_rotation[2], 1.0, 1.0);
gl_Position[2] = 1.0 - v_pos[2] / 100.0; // Arbitrary maximum depth for this shader.
frag_color = vec4(gl_Position[2], 0.0, 1.0, 1.0); // <----------- !!
uv_vec = v_uv;
}
and the fragment:
varying vec4 frag_color;
varying vec2 uv_vec;
uniform sampler2D tex;
void main (void){
vec4 color = texture2D(tex, uv_vec) * frag_color;
gl_FragColor = color;
}
Notice how I'm manually setting the Z component of gl_Position to a bounded value in the range 0.0 -> 1.0 (the upper bound is enforced in code; safe to say, no vertex has a z value < 0 or > 100).
It works... mostly. The problem is that when I render it, I get this:
That's not the correct depth sorting for these elements, which have z values of 15, 50 and 80 respectively, as you can see from the red value of each sprite.
The correct order to render in would be blue on top, purple in the middle, and pink at the bottom; but instead these sprites are simply appearing in draw order.
i.e. they are being drawn via:
glDrawArrays() <--- Pink, first geometry batch
glDrawArrays() <--- Blue, second geometry batch
glDrawArrays() <--- Purple, third geometry batch
What's going on?
Surely it's irrelevant how many times I call the GL draw functions before flushing; the depth testing should sort this all out, right?
Do you have to manually invoke depth testing inside the fragment shader somehow?
You say you're normalizing the output Z value into the range 0.0 to 1.0?
It should really be in the range -W to +W. Given an orthographic projection, this means the clip-space Z should range from -1.0 to +1.0. You are only using half of your depth range, which reduces the resolving capability of the depth buffer significantly.
To make matters worse (and I am pretty sure this is where your actual problem comes from) it looks like you are inverting your depth buffer by giving the farthest points a value of 0.0 and the nearest points 1.0. In actuality, -1.0 corresponds to the near plane and 1.0 corresponds to the far plane in OpenGL.
gl_Position[2] = 1.0 - v_pos[2] / 100.0;
~~~~~~~~~~~~~~
// This needs to be changed, your depth is completely backwards.
gl_Position.z = 2.0 * (v_pos.z / 100.0) - 1.0;
~~~~~~~~~~~~~
// This should fix both the direction, and use the full depth range.
However, it is worth mentioning that the value of gl_Position[2] (or gl_Position.z) now ranges from -1.0 to 1.0, which means it cannot be used as a visible color without some scaling and biasing:
frag_color = vec4 (gl_Position.z * 0.5 + 0.5, 0.0, 1.0, 1.0); // <----------- !!
On a final note, I have been discussing Normalized Device Coordinates this entire time, not window coordinates. In window coordinates the default depth range is 0.0 = near, 1.0 = far; this may have been the source of some confusion. Understand that window coordinates (gl_FragCoord) are not pertinent to the calculations in the vertex shader.
You can use this in your fragment shader to test if your depth range is setup correctly:
vec4 color = texture2D(tex, uv_vec) * vec4 (gl_FragCoord.z, frag_color.yzw);
It should produce the same results as:
vec4 color = texture2D(tex, uv_vec) * frag_color;
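Putting it together, the relevant part of the vertex shader might look something like this (a sketch based on the corrections above; 100.0 is still your arbitrary maximum depth):
void main (void) {
    // ... rotation code unchanged ...
    gl_Position = projection_mat * modelview_mat * vec4(rotated[0] + v_rotation[1], rotated[1] + v_rotation[2], 1.0, 1.0);
    // near maps to -1.0, far maps to +1.0, using the full clip-space depth range
    gl_Position.z = 2.0 * (v_pos.z / 100.0) - 1.0;
    // scale and bias back into [0.0, 1.0] so the depth is usable as a color
    frag_color = vec4(gl_Position.z * 0.5 + 0.5, 0.0, 1.0, 1.0);
    uv_vec = v_uv;
}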
I want to move from basic shadow mapping on to adaptive biased shadow mapping.
I found a paper which describes how to do it, but I am not sure how to achieve a certain step in the process:
The idea is to have a plane P (defined by the normal of the current fragment's surface, available in the fragment shader stage) and the world-space position of the fragment (F1 in the picture above).
In order to calculate the correct bias (to fight shadow acne) I need to find the world-space position of F2, which I can get by shooting a ray from the light source through the center of the shadow map texel. This ray eventually hits the plane P, which gives the needed point F2.
With F1 and F2 now known, I then can calculate the distance between F1 and F2 along the light ray (I guess) and thus get the ideal bias to fight shadow acne.
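For reference, the ray-plane intersection itself boils down to a couple of lines (a sketch with placeholder names: N is the surface normal, F1 the fragment's world-space position, lightPos the light's world-space position, rayDir the normalized direction through the texel center):
// plane P: dot(N, X) == dot(N, F1);  ray: X = lightPos + t * rayDir
float t = dot(N, F1 - lightPos) / dot(N, rayDir);
vec3 F2 = lightPos + t * rayDir;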
Right now my basic shader code looks like this:
Vertex shader:
in vec3 aLocalObjectPos;
out vec4 vShadowCoord;
out vec3 vF1;

uniform mat4 viewProjShadowMap; // light's view-projection matrix
uniform mat4 modelMatrix;

// to shift the coordinates from [-1;1] to [0;1]
const mat4 biasMatrix = mat4(
    0.5, 0.0, 0.0, 0.0,
    0.0, 0.5, 0.0, 0.0,
    0.0, 0.0, 0.5, 0.0,
    0.5, 0.5, 0.5, 1.0
);

void main()
{
    // get the vertex position in the light's view space:
    vShadowCoord = (biasMatrix * viewProjShadowMap * modelMatrix) * vec4(aLocalObjectPos, 1.0);
    vF1 = (modelMatrix * vec4(aLocalObjectPos, 1.0)).xyz;
}
Helper method in fragment shader:
uniform sampler2DShadow uTextureShadowMap;
in vec4 vShadowCoord;

float calculateShadow(float bias)
{
    // work on a local copy; the 'in' varying itself cannot be written to
    vec4 shadowCoord = vShadowCoord;
    shadowCoord.z -= bias;
    return textureProjOffset(uTextureShadowMap, shadowCoord, ivec2(0, 0));
}
My problem now is:
How do I get the light ray that goes from the light source through the shadow map's texel center?
I already found this topic: Adaptive Depth Bias for Shadow Maps Ray Casting
Unfortunately there is no answer and I don't quite get all the things the author is talking about :-/
So, I think I have figured it out myself. I followed the directions in this paper:
http://cwyman.org/papers/i3d14_adaptiveBias.pdf
Vertex Shader (not much going on there):
const mat4 biasMatrix = mat4(
    0.5, 0.0, 0.0, 0.0,
    0.0, 0.5, 0.0, 0.0,
    0.0, 0.0, 0.5, 0.0,
    0.5, 0.5, 0.5, 1.0
);
in vec4 aPosition; // vertex in model's local space (not modified in any way)
uniform mat4 uVPShadowMap; // light's view-projection matrix
uniform mat4 uModelMatrix; // model matrix of the rendered object
out vec4 vShadowCoord;
void main()
{
// ...
vShadowCoord = (biasMatrix * uVPShadowMap * uModelMatrix) * aPosition;
// ...
}
Fragment Shader:
#version 450
in vec3 vFragmentWorldSpace; // fragment position in World space
in vec4 vShadowCoord; // texture coordinates for shadow map lookup (see vertex shader)
uniform sampler2DShadow uTextureShadowMap;
uniform vec4 uLightPosition; // Light's position in world space
uniform vec2 uLightNearFar; // Light's zNear and zFar values
uniform float uK; // variable offset factor to tweak the computed bias a little bit
uniform mat4 uVPShadowMap; // light's view-projection matrix
const vec4 corners[2] = vec4[]( // frustum diagonal points in light's view space normalized [-1;+1]
    vec4(-1.0, -1.0, -1.0, 1.0), // left bottom near
    vec4( 1.0,  1.0,  1.0, 1.0)  // right top far
);
float calculateShadowIntensity(vec3 fragmentNormal)
{
// get fragment's position in light space:
vec4 fragmentLightSpace = uVPShadowMap * vec4(vFragmentWorldSpace, 1.0);
vec3 fragmentLightSpaceNormalized = fragmentLightSpace.xyz / fragmentLightSpace.w; // range [-1;+1]
vec3 fragmentLightSpaceNormalizedUV = fragmentLightSpaceNormalized * 0.5 + vec3(0.5, 0.5, 0.5); // range [ 0; 1]
// get shadow map's texture size:
ivec2 textureDimensions = textureSize(uTextureShadowMap, 0);
vec2 delta = vec2(textureDimensions.x, textureDimensions.y);
// get width of every texel:
vec2 textureStep = vec2(1.0 / textureDimensions.x, 1.0 / textureDimensions.y);
// get the UV coordinates of the texel center:
vec2 fragmentLightSpaceUVScaled = fragmentLightSpaceNormalizedUV.xy * delta;
vec2 texelCenterUV = floor(fragmentLightSpaceUVScaled) * textureStep + textureStep / 2;
// convert range for texel center in light space in range [-1;+1]:
vec2 texelCenterLightSpaceNormalized = 2.0 * texelCenterUV - vec2(1.0, 1.0);
// recreate light ray in world space:
vec4 recreatedVec4 = vec4(texelCenterLightSpaceNormalized.x, texelCenterLightSpaceNormalized.y, -uLightNearFar.x, 1.0);
mat4 vpShadowMapInversed = inverse(uVPShadowMap);
vec4 texelCenterWorldSpace = vpShadowMapInversed * recreatedVec4;
vec3 lightRayNormalized = normalize(texelCenterWorldSpace.xyz - uLightPosition.xyz);
// compute scene scale for epsilon computation:
vec4 frustum1 = vpShadowMapInversed * corners[0];
frustum1 = frustum1 / frustum1.w;
vec4 frustum2 = vpShadowMapInversed * corners[1];
frustum2 = frustum2 / frustum2.w;
float ln = uLightNearFar.x;
float lf = uLightNearFar.y;
// compute light ray intersection with fragment plane:
float dotLightRayfragmentNormal = dot(fragmentNormal, lightRayNormalized);
float d = dot(fragmentNormal, vFragmentWorldSpace);
float x = (d - dot(fragmentNormal, uLightPosition.xyz)) / dotLightRayfragmentNormal;
vec4 intersectionWorldSpace = vec4(uLightPosition.xyz + lightRayNormalized * x, 1.0);
// compute bias:
vec4 texelInLightSpace = uVPShadowMap * intersectionWorldSpace;
float intersectionDepthTexelCenterUV = (texelInLightSpace.z / texelInLightSpace.w) / 2.0 + 0.5;
float fragmentDepthLightSpaceUV = fragmentLightSpaceNormalizedUV.z;
float bias = intersectionDepthTexelCenterUV - fragmentDepthLightSpaceUV;
float depthCompressionResult = pow(lf - fragmentLightSpaceNormalizedUV.z * (lf - ln), 2.0) / (lf * ln * (lf - ln));
float epsilon = depthCompressionResult * length(frustum1.xyz - frustum2.xyz) * uK;
bias += epsilon;
vec4 shadowCoord = vShadowCoord;
shadowCoord.z -= bias;
float shadowValue = textureProj(uTextureShadowMap, shadowCoord);
return max(shadowValue, 0.0);
}
Please note that this is a very verbose method (you could optimise several steps, I know) to better explain what I did to make it work.
All my shadow casting lights use perspective projection.
I tested the results on the CPU side in a separate project (only c# with the math structs from the OpenTK package) and they seem reasonable. I used several light positions, texture sizes, etc. The bias values looked ok in all my tests. Of course, this is no proof, but I have a good feeling about this.
In the end:
The benefits were very small. The visual results are good (especially for shadow maps with >= 2048 samples per dimension), but I still had to tweak the offset value (uniform float uK in the fragment shader) for each of my scenes. I found values from 0.01 to 0.03 to deliver usable results.
I lost about 10% performance (fps-wise) compared to my previous approach (slope-scaled bias) and gained maybe 1% of visual fidelity when it comes to shadows (acne, peter panning). The 1% is not measured - only felt by me :-)
I wanted this approach to be the "one-solution-to-all-problems", but I guess there is no "fire-and-forget" solution when it comes to shadow mapping ;-/
I am currently trying to draw a 2D grid on a single quad using only shaders. I am using SFML as the graphics library and sf::View to control the camera. So far I have been able to draw an anti-aliased, multi-level grid. The first level (blue) outlines a chunk and the second level (grey) outlines the tiles within a chunk.
I would now like to fade grid levels based on the distance from the camera. For example, the chunk grid should fade in as the camera zooms in. The same should be done for the tile grid after the chunk grid has been completely faded in.
I am not sure how this could be implemented as I am still new to OpenGL and GLSL. If anybody has any pointers on how this functionality can be implemented, please let me know.
Vertex Shader
#version 130
out vec2 texCoords;
void main() {
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
texCoords = (gl_TextureMatrix[0] * gl_MultiTexCoord0).xy;
}
Fragment Shader
#version 130
uniform vec2 chunkSize = vec2(64.0, 64.0);
uniform vec2 tileSize = vec2(16.0, 16.0);
uniform vec3 chunkBorderColor = vec3(0.0, 0.0, 1.0);
uniform vec3 tileBorderColor = vec3(0.5, 0.5, 0.5);
uniform bool drawGrid = true;
in vec2 texCoords;
void main() {
    vec2 uv = texCoords.xy * chunkSize;
    vec3 color = vec3(1.0, 1.0, 1.0);
    if(drawGrid) {
        float aa = length(fwidth(uv));
        vec2 halfChunkSize = chunkSize / 2.0;
        vec2 halfTileSize = tileSize / 2.0;
        vec2 a = abs(mod(uv - halfChunkSize, chunkSize) - halfChunkSize);
        vec2 b = abs(mod(uv - halfTileSize, tileSize) - halfTileSize);
        color = mix(
            color,
            tileBorderColor,
            smoothstep(aa, .0, min(b.x, b.y))
        );
        color = mix(
            color,
            chunkBorderColor,
            smoothstep(aa, .0, min(a.x, a.y))
        );
    }
    gl_FragColor.rgb = color;
    gl_FragColor.a = 1.0;
}
You need to split the multiplication in your vertex shader into two parts:
// have a variable to be interpolated per fragment
out vec2 vertex_coordinate;
...
{
    // this will store the coordinates of the vertex
    // before it's projected (i.e. its "world" coordinates)
    vec4 world_coordinate = gl_ModelViewMatrix * gl_Vertex;
    vertex_coordinate = world_coordinate.xy;
    // get your projected vertex position as before
    gl_Position = gl_ProjectionMatrix * world_coordinate;
    ...
}
Then in the fragment shader you change the color based on the world vertex coordinate and the camera position:
in vec2 vertex_coordinate;
// has to be updated every time your camera changes its position
uniform vec2 camera_world_position = vec2(64.0, 64.0);
...
{
    ...
    // calculate the distance from the fragment in world coordinates to the camera
    float fade_factor = length(camera_world_position - vertex_coordinate);
    // make it 1 near the camera and 0 if it's more than 100 units away
    fade_factor = clamp(1.0 - fade_factor / 100.0, 0.0, 1.0);
    // update your final color with this factor
    gl_FragColor.rgb = color * fade_factor;
    ...
}
The second way to do it is to use the projected coordinate's w; I personally prefer to calculate the distance in world-space units. I did not test this code, so it might have some trivial syntax errors, but if you understand the idea you can apply it in any other way.
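If you do want to try the w-based variant, it could look roughly like this (an untested sketch; projected_w would be a hypothetical varying carrying gl_Position.w from the vertex shader, and it only varies with distance under a perspective projection):
in float projected_w; // hypothetical varying, set to gl_Position.w in the vertex shader
...
float fade_factor = clamp(1.0 - projected_w / 100.0, 0.0, 1.0);
gl_FragColor.rgb = color * fade_factor;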
I am currently trying to apply a normal map in my shader but the shading in the final image is way off.
Surfaces that should be shaded are completely bright, surfaces that should be bright are completely shaded, and the top surface, which should have the same shade regardless of rotation about the y-axis, alternates between bright and dark.
After some trial and error I found out that I can get the correct shading by changing this
vec3 normal_viewspace = normal_matrix * normalize((normal_color.xyz * 2.0) - 1.0);
to this
vec3 normal_viewspace = normal_matrix * normalize(vec3(0.0, 0.0, 1.0));
Diffuse and specular lighting are now working correctly,
but obviously without the normal map applied. I honestly have no idea where exactly the error is originating. I am quite new to shader programming and was following this tutorial. Below are the shader sources, with all irrelevant parts cut.
Vertex shader:
#version 450
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec3 tangent;
layout(location = 3) in vec3 bitangent;
layout(location = 4) in vec2 texture_coordinates;
layout(location = 0) out mat3 normal_matrix;
layout(location = 3) out vec2 texture_coordinates_out;
layout(location = 4) out vec4 vertex_position_viewspace;
layout(set = 0, binding = 0) uniform Matrices {
mat4 world;
mat4 view;
mat4 projection;
} uniforms;
void main() {
mat4 worldview = uniforms.view * uniforms.world;
normal_matrix = mat3(worldview) * mat3(normalize(tangent), normalize(bitangent), normalize(normal));
vec4 vertex_position_worldspace = uniforms.world * vec4(position, 1.0);
vertex_position_viewspace = uniforms.view * vertex_position_worldspace;
gl_Position = uniforms.projection * vertex_position_viewspace;
texture_coordinates_out = texture_coordinates;
}
Fragment shader:
#version 450
layout(location = 0) in mat3 normal_matrix;
layout(location = 3) in vec2 texture_coordinates;
layout(location = 4) in vec4 vertex_position_viewspace;
layout(location = 0) out vec4 fragment_color;
layout(set = 0, binding = 0) uniform Matrices {
mat4 world;
mat4 view;
mat4 projection;
} uniforms;
// ...
layout (set = 0, binding = 2) uniform sampler2D normal_map;
// ...
const vec4 LIGHT = vec4(1.25, 3.0, 3.0, 1.0);
void main() {
// ...
vec4 normal_color = texture(normal_map, texture_coordinates);
// ...
vec3 normal_viewspace = normal_matrix * normalize((normal_color.xyz * 2.0) - 1.0);
vec4 light_position_viewspace = uniforms.view * LIGHT;
vec3 light_direction_viewspace = normalize((light_position_viewspace - vertex_position_viewspace).xyz);
vec3 view_direction_viewspace = normalize(vertex_position_viewspace.xyz);
vec3 light_color_intensity = vec3(1.0, 1.0, 1.0) * 7.0;
float distance_from_light = distance(vertex_position_viewspace, light_position_viewspace);
float diffuse_strength = clamp(dot(normal_viewspace, light_direction_viewspace), 0.0, 1.0);
vec3 diffuse_light = (light_color_intensity * diffuse_strength) / (distance_from_light * distance_from_light);
// ...
fragment_color.rgb = (diffuse_color.rgb * diffuse_light);
fragment_color.a = diffuse_color.a;
}
There are some things I am a bit uncertain about. For example, I noticed that in the tutorial the light is called lightPosition_worldSpace, making me think I need to multiply the light by the world matrix first, but doing so only makes my light rotate with the cube and still doesn't fix my lighting issue.
Any help or ideas on what I could be doing wrong would be greatly appreciated.
I'm the one who created the tutorial site you're referencing.
If possible, could you share a link to your normal map as well? You say that when you change the line where the fragment's normal is calculated from the normal map, from this
vec3 normal_viewspace = normal_matrix * normalize((normal_color.xyz * 2.0) - 1.0);
to one where you hardcode a value like this
vec3 normal_viewspace = normal_matrix * normalize(vec3(0.0, 0.0, 1.0));
and the rendering issue goes away, which seems to indicate an issue with the normal map itself.
One way to verify this is to set your entire normal map image to the RGB value (128, 128, 255), which corresponds to the vec3(0.0, 0.0, 1.0) value you were using in your changed line. If this results in the object rendering correctly, the same as when you were using the hardcoded value, that means you were using a bad normal map.
The normal map is just a texture/image that stores the directions of the normals of your object in "tangent-space" (think of it as if you had to flatten out your entire object into a 2D surface, and the normal for each point of that surface is plotted on the map). For each pixel, the red channel represents the X-axis, the green channel represents the Y-axis, and the blue channel represents the Z-axis.
In terms of colors, the range in a normal map goes from (0, 0, 128) to (255, 255, 255) (for images where each color channel uses 8 bits/1 byte), which in GLSL corresponds to a range from (0.0, 0.0, 0.5) to (1.0, 1.0, 1.0). Let's just work with the GLSL range for the sake of simplicity.
The actual possible values for normals, however, range from (-1.0, -1.0, 0.0) to (1.0, 1.0, 1.0), because a normal direction can point either forwards or backwards along the X-axis or Y-axis.
So when we have a color value of (0.0, 0.0, 0.5), we're actually talking about a normal direction vector (-1.0, -1.0, 0.0). Similarly, a color value of (0.5, 0.5, 0.5) means the normal direction vector (0.0, 0.0, 0.0), and a color value of (1.0, 1.0, 1.0) means a normal value of (1.0, 1.0, 1.0).
So the goal now becomes transforming the value from the normal map from the color value range ((0.0, 0.0, 0.5) to (1.0, 1.0, 1.0)) to the actual range for normals ((-1.0, -1.0, 0.0) to (1.0, 1.0, 1.0)).
If you multiply a value from a normal map by 2.0, you change the possible range of the value from (0.0, 0.0, 0.5) - (1.0, 1.0, 1.0) to (0.0, 0.0, 1.0) - (2.0, 2.0, 2.0). And then if you subtract 1.0 from the result, the range now changes from (0.0, 0.0, 1.0) - (2.0, 2.0, 2.0) to (-1.0, -1.0, 0.0) - (1.0, 1.0, 1.0), which is exactly the possible range of the normals of an object.
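As a concrete example of that remapping (just illustrating the math above):
vec3 color  = vec3(0.5, 0.5, 1.0);  // a "flat" normal-map texel, i.e. RGB (128, 128, 255)
vec3 normal = color * 2.0 - 1.0;    // becomes (0.0, 0.0, 1.0): pointing straight out of the surface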
So you have to make sure that when you're creating your normal map, the range of the RGB color values is between (0, 0, 128) - (255, 255, 255).
Side note: As for why the range of the blue channel (Z-axis) in the normal map can only be between 128 and 255: a value less than 128 means a negative value on the Z-axis, i.e. the normal of the fragment is pointing into the surface rather than out of it. Since a normal map is supposed to represent the normals when the surface of the object is flattened and facing towards you, a normal with a negative Z value would mean that at that point the surface is actually facing away from you, which doesn't really make sense, hence why negative values are not allowed.
You could still try having the blue channel be a value less than 128 and see what interesting results pop out.
Also, with regard to the doubt you mentioned at the end and in the comments:
What does lightPosition_worldSpace mean?
lightPosition_worldSpace represents the position of the light relative to the center of the world (that is, relative to the entire scene you're rendering), hence the world-space suffix. You just need to multiply this position by your view matrix if you wish to know the position of the light in view-space (relative to your camera).
If you have a coordinate that is relative to the center of the object you're rendering, then you should multiply it by your model matrix (uniforms.world) to transform it from a coordinate relative to the center of your model into one relative to the center of the world. Since lightPosition_worldSpace is already relative to the center of the world, you don't need to multiply it by the model matrix. This is why you saw the light moving with the cube when you did try to do so (the light was moved because its coordinates were treated as being relative to the cube itself).
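In other words (a minimal sketch using the uniform names from your shader; lightPosition_objectSpace is hypothetical):
vec4 light_viewspace  = uniforms.view  * LIGHT;                       // LIGHT is already in world space, so only the view matrix is applied
vec4 light_worldspace = uniforms.world * lightPosition_objectSpace;  // only needed if the position were stored relative to the model itself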
Your comment regarding confusion with the line vec3 view_direction_viewspace = normalize(vertex_position_viewspace.xyz - vec3(0.0, 0.0, 0.0));
This is bad on my part for not representing what vec3(0.0, 0.0, 0.0) is with a variable. This is supposed to represent the position of the camera in view-space. Since in view-space the camera is at the center, its coordinate is vec3(0.0, 0.0, 0.0).
As for why I'm doing
vec3 view_direction_viewspace = normalize(vertex_position_viewspace.xyz - vec3(0.0, 0.0, 0.0));
when
vec3 view_direction_viewspace = normalize(vertex_position_viewspace.xyz);
is simpler and basically the same thing: I had written it that way to make it more obvious what was happening (which it appears I failed to do).
Typically, when you have two coordinates and you want to find the direction from a source coordinate to a destination coordinate, you subtract the two coordinates to get their direction + magnitude. By normalizing that difference you then get just the directional component, with the magnitude removed. So the equation for finding a direction from a source coordinate to a destination coordinate becomes:
direction = normalize(destination coordinate - source coordinate)
view_direction_viewspace is supposed to represent the direction from the camera towards the fragment. To calculate this, we can just subtract the position of the camera (vec3(0.0, 0.0, 0.0)) from the position of the fragment (vertex_position_viewspace.xyz) and then run normalize(...) on the difference to get that result.
I've generally tried to maintain this consistency where when I'm calculating a direction using two coordinates I always have a destination and source coordinate explicitly written out, hence why you see the line vec3 view_direction_viewspace = normalize(vertex_position_viewspace.xyz - vec3(0.0, 0.0, 0.0)); in the fragment shader code.
I've updated the code by setting vec3(0.0, 0.0, 0.0) to a variable cameraPosition_viewSpace and using that to better clarify this intention.
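Roughly like this (a sketch of the clarified version):
const vec3 cameraPosition_viewSpace = vec3(0.0, 0.0, 0.0); // the camera sits at the origin in view-space
vec3 view_direction_viewspace = normalize(vertex_position_viewspace.xyz - cameraPosition_viewSpace);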
Feel free to reach out through GitHub issues if you want to ask anything else or help improve the tutorial.
I haven't updated this post in a while because I have completely shifted away from using normal mapping (for now), but I still wanted to post an answer in case someone else runs into the same problem. I still can't be 100% sure, but I am fairly certain that this behavior was caused by the library I was using to load the normal map. Special thanks to sabarnac, who has been a huge help to me in solving this.
I'm implementing directional shadow mapping in deferred shading.
First, I render a depth map from light view (orthogonal projection).
Result:
I intend to do VSM, so the buffer above is R32G32, storing depth and depth * depth.
Then, for a full-screen shading pass for shadows (after a lighting pass), I write the following pixel shader:
#version 330
in vec2 texCoord; // screen coordinate
out vec3 fragColor; // output color on the screen
uniform mat4 lightViewProjMat; // lightView * lightProjection (ortho)
uniform sampler2D sceneTexture; // lit scene with one directional light
uniform sampler2D shadowMapTexture;
uniform sampler2D scenePosTexture; // store fragment's 3D position
void main() {
vec3 fragPos = texture(scenePosTexture, texCoord).xyz; // get 3D position of pixel
vec4 fragPosLightSpace = lightViewProjMat * vec4(fragPos, 1.0); // project it to light-space view: lightView * lightProjection
// projective texture mapping
vec3 coord = fragPosLightSpace.xyz / fragPosLightSpace.w;
coord = coord * 0.5 + 0.5;
float lightViewDepth; // depth value in the depth buffer - the maximum depth that light can see
float currentDepth; // depth of screen pixel, maybe not visible to the light, that's how shadow mapping works
vec2 moments; // depth and depth * depth for later variance shadow mapping
moments = texture(shadowMapTexture, coord.xy).xy;
lightViewDepth = moments.x;
currentDepth = fragPosLightSpace.z;
float lit_factor = 0;
if (currentDepth <= lightViewDepth)
    lit_factor = 1; // pixel is visible to the light
else
    lit_factor = 0; // the light doesn't see this pixel
// I don't do VSM yet, just want to see black or full-color pixels
fragColor = texture(sceneTexture, texCoord).rgb * lit_factor;
}
The rendered result is a black screen, but if I hard-code the lit_factor to be 1, the result is:
Basically, that's what the sceneTexture looks like.
So I think either my depth value is wrong, which is unlikely, or my projection (the light-space projection / projective texture mapping in the above shader) is wrong. Could you validate it for me?
My shadow map generation code is:
// vertex shader
#version 330 compatibility
uniform mat4 lightViewMat; // lightView
uniform mat4 lightViewProjMat; // lightView * lightProj
in vec3 in_vertex;
out float depth;
void main() {
vec4 vert = vec4(in_vertex, 1.0);
depth = (lightViewMat * vert).z / (500 * 0.2); // 500 is far value, this line tunes the depth precision
gl_Position = lightViewProjMat * vert;
}
// pixel shader
#version 330
in float depth;
out vec2 out_depth;
void main() {
out_depth = vec2(depth, depth * depth);
}
The z component of the fragment shader built-in variable gl_FragCoord contains the depth value in the range [0.0, 1.0]. This is the value which you should store in the depth map:
out_depth = vec2(gl_FragCoord.z, depth * depth);
After the calculation
vec3 fragPos = texture(scenePosTexture, texCoord).xyz; // get 3D position of pixel
vec4 fragPosLightSpace = lightViewProjMat * vec4(fragPos, 1.0); // project it to light-space view: lightView * lightProjection
vec3 ndc_coord = fragPosLightSpace.xyz / fragPosLightSpace.w;
the variable ndc_coord contains a normalized device coordinate, where all components are in range [-1.0, 1.0].
The z component of the normalized device coordinate can be converted to a depth value (if the depth range is [0.0, 1.0]) by
float currentDepth = ndc_coord.z * 0.5 + 0.5;
This value can be compared to the value from the depth map, because currentDepth and lightViewDepth are calculated using the same view matrix and projection matrix:
moments = texture(shadowMapTexture, coord.xy).xy;
lightViewDepth = moments.x;
if (currentDepth <= lightViewDepth)
    lit_factor = 1; // pixel is visible to the light
else
    lit_factor = 0; // the light doesn't see this pixel
This is the depth you store in the shadow map:
depth = (lightViewMat * vert).z / (500 * 0.2);
This is the depth you compare the read back value to:
vec4 fragPosLightSpace = lightViewProjMat * vec4(fragPos, 1.0);
currentDepth = fragPosLightSpace.z;
If fragPos is in world space, then for the same point vert == fragPos, so (lightViewMat * vert).z is the light-view-space depth. You are compressing that depth by dividing by 500 * 0.2, but the value you compare it against, fragPosLightSpace.z, is not computed the same way, so the two do not match.
Hint: Write out the value of currentDepth in one channel and the value from the shadow map in another channel; you can then compare them visually, or in RenderDoc or a similar tool.
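For example, something along these lines in the shading pass (a quick sketch; any mismatch then shows up as a difference between the red and green channels):
fragColor = vec3(currentDepth, lightViewDepth, 0.0);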
I'm following the tutorial by John Chapman (http://john-chapman-graphics.blogspot.nl/2013/01/ssao-tutorial.html) to implement SSAO in a deferred renderer. The input buffers to the SSAO shaders are:
World-space positions with linearized depth as w-component.
World-space normal vectors
Noise 4x4 texture
I'll first list the complete shader and then briefly walk through the steps:
#version 330 core
in VS_OUT {
vec2 TexCoords;
} fs_in;
uniform sampler2D texPosDepth;
uniform sampler2D texNormalSpec;
uniform sampler2D texNoise;
uniform vec3 samples[64];
uniform mat4 projection;
uniform mat4 view;
uniform mat3 viewNormal; // transpose(inverse(mat3(view)))
const vec2 noiseScale = vec2(800.0f/4.0f, 600.0f/4.0f);
const float radius = 5.0;
void main( void )
{
float linearDepth = texture(texPosDepth, fs_in.TexCoords).w;
// Fragment's view space position and normal
vec3 fragPos_World = texture(texPosDepth, fs_in.TexCoords).xyz;
vec3 origin = vec3(view * vec4(fragPos_World, 1.0));
vec3 normal = texture(texNormalSpec, fs_in.TexCoords).xyz;
normal = normalize(normal * 2.0 - 1.0);
normal = normalize(viewNormal * normal); // Normal from world to view-space
// Use change-of-basis matrix to reorient sample kernel around origin's normal
vec3 rvec = texture(texNoise, fs_in.TexCoords * noiseScale).xyz;
vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
vec3 bitangent = cross(normal, tangent);
mat3 tbn = mat3(tangent, bitangent, normal);
// Loop through the sample kernel
float occlusion = 0.0;
for(int i = 0; i < 64; ++i)
{
    // get sample position
    vec3 sample = tbn * samples[i]; // From tangent to view-space
    sample = sample * radius + origin;
    // project sample position (to sample texture) (to get position on screen/texture)
    vec4 offset = vec4(sample, 1.0);
    offset = projection * offset;
    offset.xy /= offset.w;
    offset.xy = offset.xy * 0.5 + 0.5;
    // get sample depth
    float sampleDepth = texture(texPosDepth, offset.xy).w;
    // range check & accumulate
    // float rangeCheck = abs(origin.z - sampleDepth) < radius ? 1.0 : 0.0;
    occlusion += (sampleDepth <= sample.z ? 1.0 : 0.0);
}
occlusion = 1.0 - (occlusion / 64.0f);
gl_FragColor = vec4(vec3(occlusion), 1.0);
}
The result, however, is not pleasing. The occlusion buffer is mostly all white and doesn't show any occlusion. If I move really close to an object I can see some weird noise-like results, as you can see below:
This is obviously not correct. I've done a fair share of debugging and believe all the relevant variables are correctly passed around (they all visualize as colors). I do the calculations in view-space.
I'll briefly walk through the steps (and choices) I've taken in case any of you figure something goes wrong in one of the steps.
view-space positions/normals
John Chapman retrieves the view-space position using a view ray and a linearized depth value. Since I use a deferred renderer that already has the world-space positions per fragment I simply take those and multiply them with the view matrix to get them to view-space.
I take a similar approach for the normal vectors. I take the world-space normal vectors from a buffer texture, transform them to the [-1,1] range and multiply them with transpose(inverse(mat3(..))) of the view matrix.
The view-space position and normals are visualized as below:
This looks correct to me.
Orient hemisphere around normal
The steps to create the tbn matrix are the same as described in John Chapman's tutorial. I create the noise texture as follows:
std::vector<glm::vec3> ssaoNoise;
for (GLuint i = 0; i < noise_size; i++)
{
    glm::vec3 noise(randomFloats(generator) * 2.0 - 1.0, randomFloats(generator) * 2.0 - 1.0, 0.0f);
    noise = glm::normalize(noise);
    ssaoNoise.push_back(noise);
}
...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, 4, 4, 0, GL_RGB, GL_FLOAT, &ssaoNoise[0]);
I can visualize the noise in the fragment shader so that seems to work.
sample depths
I transform all samples from tangent to view-space (the samples are random between [-1,1] on the xy axes and [0,1] on the z-axis) and translate them to the fragment's current view-space position (origin).
I then sample from linearized depth buffer (which I visualize below when looking close to an object):
and finally compare the sampled depth values to the current fragment's depth value and accumulate the occlusion. Note that I do not perform a range check, since I don't believe that is the cause of this behavior and I'd rather keep it as minimal as possible for now.
I don't know what is causing this behavior. I believe the issue is somewhere in sampling the depth values. As far as I can tell I am working in the right coordinate system, the linearized depth values are in view-space as well, and all variables are set somewhat properly.