SSAO not displaying correct results, mostly no visible occlusion - c++

I'm following the tutorial by John Chapman (http://john-chapman-graphics.blogspot.nl/2013/01/ssao-tutorial.html) to implement SSAO in a deferred renderer. The input buffers to the SSAO shaders are:
World-space positions with linearized depth as w-component.
World-space normal vectors
Noise 4x4 texture
I'll first list the complete shader and then briefly walk through the steps:
#version 330 core
in VS_OUT {
    vec2 TexCoords;
} fs_in;

uniform sampler2D texPosDepth;
uniform sampler2D texNormalSpec;
uniform sampler2D texNoise;

uniform vec3 samples[64];
uniform mat4 projection;
uniform mat4 view;
uniform mat3 viewNormal; // transpose(inverse(mat3(view)))

const vec2 noiseScale = vec2(800.0f/4.0f, 600.0f/4.0f);
const float radius = 5.0;

void main( void )
{
    float linearDepth = texture(texPosDepth, fs_in.TexCoords).w;

    // Fragment's view space position and normal
    vec3 fragPos_World = texture(texPosDepth, fs_in.TexCoords).xyz;
    vec3 origin = vec3(view * vec4(fragPos_World, 1.0));
    vec3 normal = texture(texNormalSpec, fs_in.TexCoords).xyz;
    normal = normalize(normal * 2.0 - 1.0);
    normal = normalize(viewNormal * normal); // Normal from world to view-space

    // Use change-of-basis matrix to reorient sample kernel around origin's normal
    vec3 rvec = texture(texNoise, fs_in.TexCoords * noiseScale).xyz;
    vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
    vec3 bitangent = cross(normal, tangent);
    mat3 tbn = mat3(tangent, bitangent, normal);

    // Loop through the sample kernel
    float occlusion = 0.0;
    for(int i = 0; i < 64; ++i)
    {
        // get sample position
        vec3 sample = tbn * samples[i]; // From tangent to view-space
        sample = sample * radius + origin;

        // project sample position (to get its position on screen/texture)
        vec4 offset = vec4(sample, 1.0);
        offset = projection * offset;
        offset.xy /= offset.w;
        offset.xy = offset.xy * 0.5 + 0.5;

        // get sample depth
        float sampleDepth = texture(texPosDepth, offset.xy).w;

        // range check & accumulate
        // float rangeCheck = abs(origin.z - sampleDepth) < radius ? 1.0 : 0.0;
        occlusion += (sampleDepth <= sample.z ? 1.0 : 0.0);
    }
    occlusion = 1.0 - (occlusion / 64.0f);

    gl_FragColor = vec4(vec3(occlusion), 1.0);
}
The result is, however, not what I expected. The occlusion buffer is mostly all white and doesn't show any occlusion. But if I move really close to an object I can see some weird noise-like results, as you can see below:
This is obviously not correct. I've done a fair share of debugging and believe all the relevant variables are correctly passed around (they all visualize as colors). I do the calculations in view-space.
I'll briefly walk through the steps (and choices) I've taken in case any of you figure something goes wrong in one of the steps.
view-space positions/normals
John Chapman retrieves the view-space position using a view ray and a linearized depth value. Since I use a deferred renderer that already has the world-space positions per fragment I simply take those and multiply them with the view matrix to get them to view-space.
I take a similar approach for the normal vectors. I take the world-space normal vectors from a buffer texture, transform them to the [-1,1] range, and multiply them by transpose(inverse(mat3(view))) to bring them into view space.
The view-space position and normals are visualized as below:
This looks correct to me.
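For reference, the host-side equivalent of those two transforms looks roughly like this with glm (a sketch; the function names are illustrative and not from the original code):

#include <glm/glm.hpp>

// Sketch of the two transforms described above, on the CPU with glm.
// `view` is the camera's view matrix.
glm::vec3 worldToViewPosition(const glm::mat4& view, const glm::vec3& posWorld)
{
    return glm::vec3(view * glm::vec4(posWorld, 1.0f));
}

glm::vec3 worldToViewNormal(const glm::mat4& view, const glm::vec3& normalWorld)
{
    // Same matrix that is uploaded as the `viewNormal` uniform.
    glm::mat3 normalMatrix = glm::transpose(glm::inverse(glm::mat3(view)));
    return glm::normalize(normalMatrix * normalWorld);
}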
Orient hemisphere around normal
The steps to create the tbn matrix are the same as described in John Chapman's tutorial. I create the noise texture as follows:
std::vector<glm::vec3> ssaoNoise;
for (GLuint i = 0; i < noise_size; i++)
{
    glm::vec3 noise(randomFloats(generator) * 2.0 - 1.0, randomFloats(generator) * 2.0 - 1.0, 0.0f);
    noise = glm::normalize(noise);
    ssaoNoise.push_back(noise);
}
...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, 4, 4, 0, GL_RGB, GL_FLOAT, &ssaoNoise[0]);
I can visualize the noise in the fragment shader so that seems to work.
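One assumption worth making explicit, since the texture setup beyond the upload isn't shown: the tiled lookup at fs_in.TexCoords * noiseScale only repeats the 4x4 pattern across the screen if the noise texture uses GL_REPEAT wrapping. A minimal setup sketch (noiseTexture is a hypothetical handle created elsewhere):

#include <GL/glew.h>

// Assumed parameters for the 4x4 noise texture so the tiled lookup wraps.
void configureNoiseTexture(GLuint noiseTexture)
{
    glBindTexture(GL_TEXTURE_2D, noiseTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
}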
sample depths
I transform all samples from tangent to view-space (the samples are random in [-1,1] on the x and y axes and in [0,1] on the z-axis) and translate them to the fragment's current view-space position (origin).
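The kernel generation itself isn't shown above; based on that description it presumably looks something like the following sketch (names are illustrative; the weighting of samples towards the origin follows the tutorial's suggestion):

#include <glm/glm.hpp>
#include <random>
#include <vector>

// Sketch: 64 hemisphere samples with xy in [-1, 1] and z in [0, 1],
// scaled so that more samples lie close to the fragment.
std::vector<glm::vec3> generateSampleKernel()
{
    std::uniform_real_distribution<float> randomFloats(0.0f, 1.0f);
    std::default_random_engine generator;
    std::vector<glm::vec3> kernel;
    for (int i = 0; i < 64; ++i)
    {
        glm::vec3 sample(randomFloats(generator) * 2.0f - 1.0f,
                         randomFloats(generator) * 2.0f - 1.0f,
                         randomFloats(generator));
        sample = glm::normalize(sample) * randomFloats(generator);

        // Weight samples towards the centre of the hemisphere.
        float scale = float(i) / 64.0f;
        sample *= 0.1f + 0.9f * scale * scale;
        kernel.push_back(sample);
    }
    return kernel;
}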
I then sample from the linearized depth buffer (which I visualize below when looking close to an object):
and finally compare the sampled depth values to the current fragment's depth value and accumulate occlusion. Note that I do not perform a range check, since I don't believe that is the cause of this behavior and I'd rather keep it as minimal as possible for now.
I don't know what is causing this behavior. I believe it is somewhere in sampling the depth values. As far as I can tell I am working in the right coordinate system, linearized depth values are in view-space as well and all variables are set somewhat properly.
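For what it's worth, the projection step inside the loop (view space to clip space to NDC to [0,1] texture coordinates) can be reproduced on the CPU with glm to check a single sample by hand. This is only a sketch; the projection parameters are placeholders:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main()
{
    // Placeholder projection matching an 800x600 viewport.
    glm::mat4 projection = glm::perspective(glm::radians(45.0f), 800.0f / 600.0f, 0.1f, 100.0f);

    glm::vec3 sampleView(1.0f, 0.5f, -10.0f);   // a sample position in view space

    // Same math as in the shader loop: project, perspective-divide, remap.
    glm::vec4 offset = projection * glm::vec4(sampleView, 1.0f);
    glm::vec2 texCoord = (glm::vec2(offset) / offset.w) * 0.5f + 0.5f;

    // texCoord should land in [0, 1] for samples that are on screen.
    std::printf("%f %f\n", texCoord.x, texCoord.y);
}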

Related

Mirrors with deferred rendering and ambient occlusion

As you can tell from the title, I'm trying to create a mirror reflection while using deferred rendering and ambient occlusion. For ambient occlusion I'm specifically using the SSAO algorithm.
To create the mirror I use the basic idea of reflecting all the models to the other side of the mirror and then rendering only the parts visible through the mirror.
Using deferred rendering I decided to do this during the creation of the gBuffer. In order to achieve correct lighting of the reflected objects, I made sure that the positions and normals of the reflected objects in the gBuffer are the same with their 'non reflected' version. That way, both the actual models and their images will receive the same lighting.
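Although the mirror pass itself isn't shown, reflecting all the models to the other side of the mirror, as described above, is typically done with a reflection matrix about the mirror plane; a sketch with glm (the plane representation and names are illustrative, not from the original code):

#include <glm/glm.hpp>

// Sketch: reflection matrix about the plane dot(n, x) + d = 0 (n unit length).
// Rendering the gBuffer pass again with `reflection * model` as the model
// matrix mirrors every object to the other side of the mirror.
glm::mat4 reflectionMatrix(const glm::vec3& n, float d)
{
    glm::mat4 m(1.0f);
    for (int c = 0; c < 3; ++c)
        for (int r = 0; r < 3; ++r)
            m[c][r] = (c == r ? 1.0f : 0.0f) - 2.0f * n[r] * n[c];  // I - 2 n n^T
    m[3][0] = -2.0f * n.x * d;   // translation part of the reflection
    m[3][1] = -2.0f * n.y * d;
    m[3][2] = -2.0f * n.z * d;
    return m;
}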
My problem is now with the ssao algorithm. It seems that the reflected objects are calculated to be highly occluded and this results in black areas which you can see in the mirror:
I've noticed that these black areas appear only in places that are not in my view. Things that I can see without the mirror have no unexpected black spots on them.
Note that the data in the gBuffer are all in view space. So there must be a connection there. Maybe the random samples used during ssao or their normals are not calculated correctly.
So, this is the fragment shader for the ambient occlusion:
void main()
{
    vec3 fragPos = texture(gPosition, TexCoords).xyz;
    vec3 normal = texture(gNormal, TexCoords).rgb;
    vec3 randomVec = texture(texNoise, TexCoords * noiseScale).xyz;

    vec3 tangent = normalize(randomVec - normal * dot(randomVec, normal));
    vec3 bitangent = cross(normal, tangent);
    mat3 TBN = mat3(tangent, bitangent, normal);

    float occlusion = 0.0;
    float kernelSize = 64;
    for(int i = 0; i < kernelSize; ++i)
    {
        // get sample position
        vec3 sample = TBN * samples[i]; // From tangent to view-space
        sample = fragPos + sample * radius;

        vec4 offset = vec4(sample, 1.0);
        offset = projection * offset;        // from view to clip-space
        offset.xyz /= offset.w;              // perspective divide
        offset.xyz = offset.xyz * 0.5 + 0.5;

        float sampleDepth = texture(gPosition, offset.xy).z;
        float rangeCheck = smoothstep(0.0, 1.0, radius / abs(fragPos.z - sampleDepth));
        occlusion += (sampleDepth >= sample.z + bias ? 1.0 : 0.0) * rangeCheck;
    }
    occlusion = 1.0 - (occlusion / kernelSize);

    //FragColor = vec4(1,1,1,1);
    occl = vec4(occlusion, occlusion, occlusion, 1);
}
Any ideas as to why these black areas appear or suggestions to correct them?
I could just ignore the ambient occlusion in the reflection but I'm not happy with that.
Maybe if the ambient occlusion shader used the positions and normals of the reflected objects there would be no problem, but then I'd have to store more data in the gBuffer, so I gave up on that idea for now.

Vertex shader doesn't work well with cloned objects

I'm using OpenGL to create a sphere (approximation):
I'm "inflating" the triangle to create an eight of a sphere:
I'm then drawing that octant four times, and each time rotating the model transoformation by 90°, to achieve a hemisphere:
Code related to drawing calls:
for (int i = 0; i < 4; i++) {
    model_trans = glm::rotate(model_trans, glm::radians(i * 90.0f), glm::vec3(0.0f, 0.0f, 1.0f));
    glUniformMatrix4fv(uniform_model, 1, GL_FALSE, glm::value_ptr(model_trans));
    glDrawElementsBaseVertex(GL_TRIANGLES, sizeof(sphere_indices) / sizeof(sphere_indices[0]),
                             GL_UNSIGNED_INT, 0, (sizeof(grid_vertices)) / (ATTR_COUNT * sizeof(GLfloat)));
}
My goal is to color each vertex based on the angle of its projection onto the XY-plane. Since I'm normalizing the values, the resulting projection should behave like a trigonometric circle, with the x value being the cosine of the angle to the positive x-axis. And because cosine is a continuous function, my sphere should have continuous color too. However, it is not so:
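For reference, the intended mapping can be checked in isolation with a few lines of C++ and glm (a quick sketch; the vertex values and the function name are made up for illustration):

#include <glm/glm.hpp>
#include <cstdio>

// Intended colour: project onto the XY plane, normalize, and map
// the x component (cosine of the angle to +X) from [-1, 1] to [0, 1].
float angleColour(const glm::vec3& pos)
{
    glm::vec2 proj = glm::normalize(glm::vec2(pos.x, pos.y));
    return (proj.x + 1.0f) / 2.0f;
}

int main()
{
    // Two vertices at the same angle but different heights
    // should receive the same colour if the mapping is continuous.
    std::printf("%f\n", angleColour(glm::vec3(0.5f, 0.5f, 0.2f)));
    std::printf("%f\n", angleColour(glm::vec3(0.5f, 0.5f, 0.8f)));
}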
Is this issue caused by cloning the object? That's the only thing I can think of, but it shouldn't matter, since the vertex shader only receives individual vertices. Speaking of which, here is my vertex shader:
#version 150 core

in vec3 position;

/* flat : the color will be sourced from the provoking vertex. */
flat out vec3 Color;

/* transformation matrices */
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main() {
    gl_Position = projection * view * model * vec4(position, 1.0);

    vec3 vector_proj = vec3(position.x, position.y, 0.0);
    normalize(vector_proj);

    /* Addition and division only for mapping range [-1, +1] to [0, 1] */
    float cosine = (vector_proj.x + 1) / 2;
    Color = vec3(cosine);
}
You want to calculate the color associated with the vertex from the world coordinates of the vertex position.
position is a model-space coordinate, not a world-space coordinate. You have to apply the model matrix to position to transform it from model space to world space before calculating vector_proj:
vec4 world_pos = model * vec4(position, 1.0);
vec3 vector_proj = vec3( world_pos.xy, 0.0 );
The parameter to normalize is not an in-out parameter; it is an input parameter, and the normalized result is returned by the function:
vector_proj = normalize(vector_proj);
You can simplify the code as follows:
void main()
{
    vec4 world_pos = model * vec4(position, 1.0);
    gl_Position = projection * view * world_pos;

    vec2 vector_proj = normalize(world_pos.xy);

    /* Addition and division only for mapping range [-1, +1] to [0, 1] */
    float cosine = (vector_proj.x + 1) / 2;
    Color = vec3(cosine);
}

OpenGL Computing Normals and TBN Matrix from Depth Buffer (SSAO implementation)

I'm implementing SSAO in OpenGL, following this tutorial: John Chapman SSAO
Basically the technique uses a hemispheric kernel oriented along the fragment's normal. The view-space z position of each sample is then compared to its screen-space depth buffer value.
If the value in the depth buffer is higher, it means the sample ended up inside geometry, so this fragment should be occluded.
The goal of this technique is to get rid of the classic implementation artifact where objects' flat faces are greyed out.
I have the same implementation with two small differences:
I'm not using a noise texture to rotate my kernel, so I have banding artifacts; that's fine for now.
I don't have access to a buffer with per-pixel normals, so I have to compute my normal and TBN matrix using only the depth buffer.
The algorithm seems to be working fine, I can see the fragments being occluded, BUT I still have my faces greyed out...
IMO it's coming from the way I'm calculating my TBN matrix. The normals look OK, but something must be wrong, as my kernel doesn't seem to be properly aligned, causing samples to end up in the faces.
Screenshots are with a kernel of 8 samples and a radius of 0.1. The first is only the result of the SSAO pass and the second one is the debug render of the generated normals.
Here is the code for the function that computes the normal and TBN matrix:
mat3 computeTBNMatrixFromDepth(in sampler2D depthTex, in vec2 uv)
{
    // Compute the normal and TBN matrix
    float ld = -getLinearDepth(depthTex, uv);
    vec3 x = vec3(uv.x, 0., ld);
    vec3 y = vec3(0., uv.y, ld);
    x = dFdx(x);
    y = dFdy(y);
    x = normalize(x);
    y = normalize(y);
    vec3 normal = normalize(cross(x, y));
    return mat3(x, y, normal);
}
And the SSAO shader
#include "helper.glsl"
in vec2 vertTexcoord;
uniform sampler2D depthTex;
const int MAX_KERNEL_SIZE = 8;
uniform vec4 gKernel[MAX_KERNEL_SIZE];
// Kernel Radius in view space (meters)
const float KERNEL_RADIUS = .1;
uniform mat4 cameraProjectionMatrix;
uniform mat4 cameraProjectionMatrixInverse;
out vec4 FragColor;
void main()
{
// Get the current depth of the current pixel from the depth buffer (stored in the red channel)
float originDepth = texture(depthTex, vertTexcoord).r;
// Debug linear depth. Depth buffer is in the range [1.0];
float oLinearDepth = getLinearDepth(depthTex, vertTexcoord);
// Compute the view space position of this point from its depth value
vec4 viewport = vec4(0,0,1,1);
vec3 originPosition = getViewSpaceFromWindow(cameraProjectionMatrix, cameraProjectionMatrixInverse, viewport, vertTexcoord, originDepth);
mat3 lookAt = computeTBNMatrixFromDepth(depthTex, vertTexcoord);
vec3 normal = lookAt[2];
float occlusion = 0.;
for (int i=0; i<MAX_KERNEL_SIZE; i++)
{
// We align the Kernel Hemisphere on the fragment normal by multiplying all samples by the TBN
vec3 samplePosition = lookAt * gKernel[i].xyz;
// We want the sample position in View Space and we scale it with the kernel radius
samplePosition = originPosition + samplePosition * KERNEL_RADIUS;
// Now we need to get sample position in screen space
vec4 sampleOffset = vec4(samplePosition.xyz, 1.0);
sampleOffset = cameraProjectionMatrix * sampleOffset;
sampleOffset.xyz /= sampleOffset.w;
// Now to get the depth buffer value at the projected sample position
sampleOffset.xyz = sampleOffset.xyz * 0.5 + 0.5;
// Now can get the linear depth of the sample
float sampleOffsetLinearDepth = -getLinearDepth(depthTex, sampleOffset.xy);
// Now we need to do a range check to make sure that object
// outside of the kernel radius are not taken into account
float rangeCheck = abs(originPosition.z - sampleOffsetLinearDepth) < KERNEL_RADIUS ? 1.0 : 0.0;
// If the fragment depth is in front so it's occluding
occlusion += (sampleOffsetLinearDepth >= samplePosition.z ? 1.0 : 0.0) * rangeCheck;
}
occlusion = 1.0 - (occlusion / MAX_KERNEL_SIZE);
FragColor = vec4(vec3(occlusion), 1.0);
}
Update 1
This variation of the TBN calculation function gives the same results
mat3 computeTBNMatrixFromDepth(in sampler2D depthTex, in vec2 uv)
{
    // Compute the normal and TBN matrix
    float ld = -getLinearDepth(depthTex, uv);
    vec3 a = vec3(uv, ld);
    vec3 x = vec3(uv.x + dFdx(uv.x), uv.y, ld + dFdx(ld));
    vec3 y = vec3(uv.x, uv.y + dFdy(uv.y), ld + dFdy(ld));
    //x = dFdx(x);
    //y = dFdy(y);
    //x = normalize(x);
    //y = normalize(y);
    vec3 normal = normalize(cross(x - a, y - a));
    vec3 first_axis = cross(normal, vec3(1.0f, 0.0f, 0.0f));
    vec3 second_axis = cross(first_axis, normal);
    return mat3(normalize(first_axis), normalize(second_axis), normal);
}
I think the problem is probably that you are mixing coordinate systems: you are using texture coordinates in combination with the linear depth. Imagine two vertical surfaces facing slightly to the left of the screen. Both have the same angle from the vertical plane and should thus have the same normal, right?
But now imagine that one of these surfaces is much further from the camera. Since the dFdx/dFdy functions basically tell you the difference from the neighbouring pixel, the surface far away from the camera will have a greater linear-depth difference over one pixel than the surface close to the camera, while the uv.x / uv.y derivatives will have the same value. That means that you will get different normals depending on the distance from the camera.
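To make that concrete, here is a small numeric sketch (the numbers are purely illustrative): with tangents built from (uv, linearDepth) as in the original function, the same surface orientation produces different tangent directions, and therefore different normals, depending on distance.

#include <glm/glm.hpp>
#include <cstdio>

int main()
{
    // One pixel step in texture coordinates (e.g. 1/800 horizontally).
    const float du = 1.0f / 800.0f;

    // Per-pixel change in linear depth for the same surface orientation:
    // roughly proportional to distance from the camera (illustrative values).
    const float dLdNear = 0.01f;  // surface close to the camera
    const float dLdFar  = 0.10f;  // same orientation, much further away

    // Tangents as built in the original computeTBNMatrixFromDepth:
    // they mix uv units with depth units, so their direction changes with distance.
    glm::vec3 xNear = glm::normalize(glm::vec3(du, 0.0f, dLdNear));
    glm::vec3 xFar  = glm::normalize(glm::vec3(du, 0.0f, dLdFar));

    std::printf("near tangent: %f %f %f\n", xNear.x, xNear.y, xNear.z);
    std::printf("far  tangent: %f %f %f\n", xFar.x, xFar.y, xFar.z);
    // The two directions differ, so the derived normals differ with distance,
    // which is exactly the problem described above.
}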
The solution is to calculate the view coordinate and use the derivative of that to calculate the normal.
vec3 viewFromDepth(in sampler2D depthTex, in vec2 uv, in vec3 view)
{
    float ld = -getLinearDepth(depthTex, uv);

    /// I assume ld is negative for fragments in front of the camera
    /// not sure how getLinearDepth is implemented
    vec3 z_scaled_view = (view / view.z) * ld;

    return z_scaled_view;
}

mat3 computeTBNMatrixFromDepth(in sampler2D depthTex, in vec2 uv, in vec3 view)
{
    // Reconstruct the view-space position and build the basis from its derivatives
    vec3 viewPos = viewFromDepth(depthTex, uv, view);

    vec3 view_normal = normalize(cross(dFdx(viewPos), dFdy(viewPos)));
    vec3 first_axis = cross(view_normal, vec3(1.0f, 0.0f, 0.0f));
    vec3 second_axis = cross(first_axis, view_normal);

    return mat3(view_normal, normalize(first_axis), normalize(second_axis));
}

Getting World Position from Depth Buffer Value

I've been working on a deferred renderer to do lighting with, and it works quite well, albeit using a position buffer in my G-buffer. Lighting is done in world space.
I have tried to implement an algorithm to recreate the world-space positions from the depth buffer and the texture coordinates, but with no luck.
My vertex shader is nothing particularly special, but this is the part of my fragment shader in which I (attempt to) calculate the world space position:
// Inverse projection matrix
uniform mat4 projMatrixInv;

// Inverse view matrix
uniform mat4 viewMatrixInv;

// texture position from vertex shader
in vec2 TexCoord;

... other uniforms ...

void main() {
    // Recalculate the fragment position from the depth buffer
    float Depth = texture(gDepth, TexCoord).x;
    vec3 FragWorldPos = WorldPosFromDepth(Depth);

    ... fun lighting code ...
}

// Linearizes a Z buffer value
float CalcLinearZ(float depth) {
    const float zFar = 100.0;
    const float zNear = 0.1;

    // bias it from [0, 1] to [-1, 1]
    float linear = zNear / (zFar - depth * (zFar - zNear)) * zFar;

    return (linear * 2.0) - 1.0;
}

// this is supposed to get the world position from the depth buffer
vec3 WorldPosFromDepth(float depth) {
    float ViewZ = CalcLinearZ(depth);

    // Get clip space
    vec4 clipSpacePosition = vec4(TexCoord * 2.0 - 1.0, ViewZ, 1);

    // Clip space -> View space
    vec4 viewSpacePosition = projMatrixInv * clipSpacePosition;

    // Perspective division
    viewSpacePosition /= viewSpacePosition.w;

    // View space -> World space
    vec4 worldSpacePosition = viewMatrixInv * viewSpacePosition;

    return worldSpacePosition.xyz;
}
I still have my position buffer, and I sample it to compare against the calculated position later, so everything should come out black:
vec3 actualPosition = texture(gPosition, TexCoord).rgb;
vec3 difference = abs(FragWorldPos - actualPosition);
FragColour = vec4(difference, 0.0);
However, what I get is nowhere near the expected result, and of course, lighting doesn't work:
(Try to ignore the blur around the boxes, I was messing around with something else at the time.)
What could cause these issues, and how could I get the position reconstruction from depth working successfully? Thanks.
You are on the right track, but you have not applied the transformations in the correct order.
A quick recap of what you need to accomplish here might help:
1. Given texture coordinates [0,1] and depth [0,1], calculate the clip-space position
   - Do not linearize the depth buffer
   - Output: w = 1.0 and x,y,z = [-w,w]
2. Transform from clip-space to view-space (reverse projection)
   - Use the inverse projection matrix
   - Perform the perspective divide
3. Transform from view-space to world-space (reverse viewing transform)
   - Use the inverse view matrix
The following changes should accomplish that:
// this is supposed to get the world position from the depth buffer
vec3 WorldPosFromDepth(float depth) {
    float z = depth * 2.0 - 1.0;

    vec4 clipSpacePosition = vec4(TexCoord * 2.0 - 1.0, z, 1.0);
    vec4 viewSpacePosition = projMatrixInv * clipSpacePosition;

    // Perspective division
    viewSpacePosition /= viewSpacePosition.w;

    vec4 worldSpacePosition = viewMatrixInv * viewSpacePosition;

    return worldSpacePosition.xyz;
}
I would consider changing the name of CalcViewZ (...) though, as that name is very misleading. Consider calling it something more appropriate like CalcLinearZ (...).
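As a quick sanity check of the reconstruction above, here is a CPU-side round trip with glm (a sketch; the camera parameters are made up and the default [0,1] depth range is assumed):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main()
{
    glm::mat4 proj = glm::perspective(glm::radians(60.0f), 800.0f / 600.0f, 0.1f, 100.0f);
    glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 2.0f, 5.0f), glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));

    glm::vec3 worldPos(1.0f, 0.5f, -3.0f);

    // Forward: world -> clip -> NDC -> depth/texcoords as the rasterizer stores them.
    glm::vec4 clip = proj * view * glm::vec4(worldPos, 1.0f);
    glm::vec3 ndc  = glm::vec3(clip) / clip.w;
    glm::vec2 texCoord = glm::vec2(ndc) * 0.5f + 0.5f;
    float depth = ndc.z * 0.5f + 0.5f;            // value in the depth buffer

    // Backward: exactly the steps in WorldPosFromDepth.
    glm::vec4 clipPos(texCoord * 2.0f - 1.0f, depth * 2.0f - 1.0f, 1.0f);
    glm::vec4 viewPos = glm::inverse(proj) * clipPos;
    viewPos /= viewPos.w;
    glm::vec4 world = glm::inverse(view) * viewPos;

    std::printf("%f %f %f\n", world.x, world.y, world.z); // ~ (1.0, 0.5, -3.0)
}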

Phong Illumination in OpenGL website

I was reading through the following Phong illumination shader, which is on opengl.org:
Phong Illumination in Opengl.org
The vertex and fragment shaders were as follows:
vertex shader:
varying vec3 N;
varying vec3 v;

void main(void)
{
    v = vec3(gl_ModelViewMatrix * gl_Vertex);
    N = normalize(gl_NormalMatrix * gl_Normal);
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
fragment shader:
varying vec3 N;
varying vec3 v;

void main (void)
{
    vec3 L = normalize(gl_LightSource[0].position.xyz - v);
    vec3 E = normalize(-v); // we are in Eye Coordinates, so EyePos is (0,0,0)
    vec3 R = normalize(-reflect(L,N));

    //calculate Ambient Term:
    vec4 Iamb = gl_FrontLightProduct[0].ambient;

    //calculate Diffuse Term:
    vec4 Idiff = gl_FrontLightProduct[0].diffuse * max(dot(N,L), 0.0);
    Idiff = clamp(Idiff, 0.0, 1.0);

    // calculate Specular Term:
    vec4 Ispec = gl_FrontLightProduct[0].specular
                 * pow(max(dot(R,E),0.0), 0.3*gl_FrontMaterial.shininess);
    Ispec = clamp(Ispec, 0.0, 1.0);

    // write Total Color:
    gl_FragColor = gl_FrontLightModelProduct.sceneColor + Iamb + Idiff + Ispec;
}
I was wondering about the way he calculates the viewer vector, v. By multiplying the vertex position by gl_ModelViewMatrix, the result is in view (eye) space, and view coordinates are usually rotated compared to world coordinates.
So we cannot simply subtract the light position from v to calculate the L vector, because they are not in the same coordinate system. Also, the result of the dot product between L and N won't be correct, because their coordinate systems are not the same. Am I right about this?
"So, we cannot simply subtract the light position from v to calculate the L vector, because they are not in the same coordinate system. Also, the result of the dot product between L and N won't be correct because their coordinates are not the same. Am I right about it?"
No.
The gl_LightSource[0].position.xyz is not the value you set GL_POSITION to. The GL will automatically multiply the position by the current GL_MODELVIEW matrix at the time of the glLight() call. Lighting calculations are done completely in eye space in fixed-function GL. So both V and N have to be transformed to eye space, and gl_LightSource[].position will already be transformed to eye-space, so the code is correct and is actually not mixing different coordinate spaces.
The code you are using relies on deprecated functionality, using lots of the old fixed-function features of the GL, including that particular convention. In modern GL, those built-in uniforms and attributes do not exist, and you have to define your own - and you can interpret them however you like.
You could of course also ignore that convention and still use a different coordinate space for the lighting calculation with the built-ins, and interpret gl_LightSource[].position differently by simply choosing some other matrix when setting a position (typically, the light's world-space position is set while the GL_MODELVIEW matrix contains only the view transformation, so that the eye-space light position for a world-stationary light source emerges, but you can do whatever you like). However, the code as presented is meant to work as a "drop-in" replacement for the fixed-function pipeline, so it interprets those built-in uniforms and attributes in the same way the fixed-function pipeline did.
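For context, this is roughly what that fixed-function convention looks like on the application side (a sketch using the deprecated API; the matrix source and values are placeholders): the light's world-space position is specified while GL_MODELVIEW holds only the view transform, so the GL stores it in eye space.

#include <GL/gl.h>

// Sketch: positioning a world-stationary light under the fixed-function
// convention described above. `viewMatrix` is assumed to be a column-major
// 4x4 view matrix computed elsewhere.
void setupLight(const float* viewMatrix)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(viewMatrix);          // MODELVIEW = view only (no model transform)

    // World-space light position; w = 1 makes it a positional light.
    const GLfloat lightPosWorld[4] = { 10.0f, 5.0f, 0.0f, 1.0f };

    // The GL multiplies this by the current MODELVIEW matrix immediately,
    // so gl_LightSource[0].position ends up in eye space.
    glLightfv(GL_LIGHT0, GL_POSITION, lightPosWorld);
}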