OpenGL GLSL SSAO Implementation

I'm trying to implement Screen Space Ambient Occlusion (SSAO) based on the R5 demo found here: http://blog.nextrevision.com/?p=76
In fact, I'm trying to adapt their SSAO-Linear shader to fit into my own little engine.
1) I calculate view-space surface normals and linear depth values.
I store them in an RGBA texture using the following shader:
Vertex:
varNormalVS = normalize(vec3(vmtInvTranspMatrix * vertexNormal));
depth = (modelViewMatrix * vertexPosition).z;          // eye-space z (negative in front of the camera)
depth = (-depth - nearPlane) / (farPlane - nearPlane); // remap [near, far] to [0,1] linearly
gl_Position = pvmtMatrix * vertexPosition;
Fragment:
gl_FragColor = vec4(varNormalVS.x, varNormalVS.y, varNormalVS.z, depth);
For my linear depth calculation I referred to: http://www.gamerendering.com/2008/09/28/linear-depth-texture/
Is it correct?
The texture seems to be correct, but maybe it is not?
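As a sanity check, the stored value can be mapped back to an eye-space distance by inverting that formula; a minimal sketch, assuming the same nearPlane/farPlane uniforms and the texture bound as texSampler0:
float storedDepth = texture2D(texSampler0, texCoord).a;               // linear depth in [0,1]
float eyeDistance = storedDepth * (farPlane - nearPlane) + nearPlane; // back to eye-space units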
2) The actual SSAO Implementation:
As mentioned above the original can be found here: http://blog.nextrevision.com/?p=76
or faster: on pastebin http://pastebin.com/KaGEYexK
In contrast to the original, I only use two input textures, since one of my textures stores both the normals as RGB and the linear depth as alpha.
My second texture, the random normal texture, looks like this:
http://www.gamerendering.com/wp-content/uploads/noise.png
I use almost exactly the same implementation but my results are wrong.
Before going into detail, I want to clear up some questions first:
1) The SSAO shader uses projectionMatrix and its inverse.
Since it is a post-processing effect rendered onto a screen-aligned quad via orthographic projection, the projectionMatrix is the orthographic matrix. Correct or wrong?
2) Having a combined normal and depth texture instead of two separate ones.
In my opinion this is the biggest difference between the R5 implementation and my attempt. I think this should not be a big problem; however, since the depth textures differ, this is the most likely cause of problems.
Please note that R5_clipRange looks like this:
vec4 R5_clipRange = vec4(nearPlane, farPlane, nearPlane * farPlane, farPlane - nearPlane);
Original:
float GetDistance (in vec2 texCoord)
{
    //return texture2D(R5_texture0, texCoord).r * R5_clipRange.w;
    const vec4 bitSh = vec4(1.0 / 16777216.0, 1.0 / 65535.0, 1.0 / 256.0, 1.0);
    return dot(texture2D(R5_texture0, texCoord), bitSh) * R5_clipRange.w;
}
I have to admit I do not fully understand the snippet; it seems to unpack a depth value that R5 packed across the texture's RGBA channels. My depth is stored as a single float in the alpha of my texture, so I thought it should be enough to just do this:
return texture2D(texSampler0, texCoord).a * R5_clipRange.w;
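Wrapped as a drop-in replacement for the R5 function, that would be (my sketch, using the combined texture):
float GetDistance (in vec2 texCoord)
{
    // depth is already a single linear float in the alpha channel, so no unpacking
    return texture2D(texSampler0, texCoord).a * R5_clipRange.w;
}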
Correct or Wrong?

Your normal texture seems wrong. My guess is that your vmtInvTranspMatrix is a model-view matrix; however, it should be the model-view-projection matrix (note that you need screen-space normals, not view-space normals). The depth calculation is correct.
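For illustration, the change I mean would look something like this (a sketch, reusing your pvmtMatrix since it already contains the full model-view-projection transform):
// same pattern as your vertex shader, but with the combined MVP matrix
varNormalVS = normalize(vec3(pvmtMatrix * vertexNormal));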
I've implemented SSAO once, and my normal texture looked like this (note there was no blue in it).
1) The SSAO shader uses projectionMatrix and its inverse.
Since it is a post-processing effect rendered onto a screen-aligned quad via orthographic projection, the projectionMatrix is the orthographic matrix. Correct or wrong?
If you mean the second pass, where you render a quad to compute the actual SSAO: yes. You can avoid the multiplication by the orthographic projection matrix altogether. If you render a screen quad with [x, y] coordinates ranging from -1 to 1, you can use a really simple vertex shader:
attribute vec2 in_Position;   // quad corners in [-1, 1]
varying vec2 texcoord;
const vec2 madd = vec2(0.5, 0.5);
void main(void)
{
    gl_Position = vec4(in_Position, -1.0, 1.0);
    texcoord = in_Position.xy * madd + madd;   // map [-1,1] to [0,1]
}
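The matching fragment shader then just samples with that coordinate; as a trivial example (assuming your combined normal/depth texture is bound as texSampler0):
varying vec2 texcoord;
uniform sampler2D texSampler0;
void main(void)
{
    // debug view: .rgb holds the stored normal, .a the linear depth
    gl_FragColor = texture2D(texSampler0, texcoord);
}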
2) Having a combined normal and depth texture instead of two separate ones.
Nah, that won't cause problems. It's a common practice to do so.

Related

How to get homogeneous screen space coordinates in openGL

I'm studying OpenGL and I've got a little 3D scene with some objects. In the GLSL vertex shader I multiply the vertices by matrices like this:
vertexPos = viewMatrix * worldMatrix * modelMatrix * gl_Vertex;
gl_Position = vertexPos;
vertexPos is a vec4 varying variable, and I pass it to the fragment shader.
Here is how the scene renders normally:
(image: normal render)
But then I wanted to do a debug render, so in the fragment shader I write:
gl_FragColor = vec4(vertexPos.x, vertexPos.x, vertexPos.x, 1.0);
vertexPos is multiplied by all the matrices, including the perspective matrix, so I assumed I would get a smooth gradient from the center of the screen to the right edge, because the coordinates are mapped into the -1 to 1 square. But it looks like they are in screen space without the perspective deformation applied. Here is what I see:
(Don't look at the red line and the light source; they use a different shader.)
(image: debug render)
If I divide it by about 15, it looks like this:
gl_FragColor = vec4(vertexPos.x, vertexPos.x, vertexPos.x, 1.0) / 15.0;
(image: divided by 15)
Can someone please explain to me why the coordinates aren't homogeneous, while the scene still renders correctly with perspective distortion?
P.S. If I try to use gl_Position in the fragment shader instead of vertexPos, it doesn't work.
A so-called perspective division is applied to gl_Position after it's computed in a vertex shader:
gl_Position.xyz /= gl_Position.w;
But it doesn't happen to your varyings unless you do it manually. Thus, you need to add
vertexPos.xyz /= vertexPos.w;
at the end of your vertex shader. Make sure to do it after you copy the value to gl_Position; you don't want the division to happen twice.
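In context, the whole vertex shader would look something like this (a sketch using the same matrix names as in the question):
uniform mat4 viewMatrix, worldMatrix, modelMatrix;
varying vec4 vertexPos;
void main(void)
{
    vertexPos = viewMatrix * worldMatrix * modelMatrix * gl_Vertex;
    gl_Position = vertexPos;       // copy first, so gl_Position keeps its w
    vertexPos.xyz /= vertexPos.w;  // then divide only the varying
}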

How to make radial gradient on each face using shader in OpenGL

Using simple shaders, I've found a way to create gradients.
Here's the result of my work:
http://goo.gl/A7pY01 (A little updated after OpenGL ES 2.0 Shader - 2D Radial Gradient in Polygon question)
It's nice, but I still need to display this gradient pattern on each face of my meshes, or on a billboard face, just as if it were a texture.
The GLSL built-in variable gl_FragCoord holds window-relative coordinates. Could someone explain how to translate these into face-related coordinates so I can draw my pattern there?
Okay. A little surfing of Stack Overflow gave me this topic: OpenGL: How to render perfect rectangular gradient?
Here is the key line: gl_FragColor = mix(color0, color1, uv.u + uv.v - 2 * uv.u * uv.v);
Of course we cannot translate window-space coordinates into something "face-related", but we can use the UV coordinates of a face. So I decided: what if we have a square face with UV coordinates covering the full texture (0,0; 0,1; 1,0; 1,1)? Then the center of the face is (0.5, 0.5), and that can be the center of my radial gradient.
So my fragment shader code is:
vec2 u_c = vec2(0.5, 0.5);
float distanceFromLight = length(uv - u_c);
gl_FragColor = mix(vec4(1.0, 0.5, 1.0, 1.0), vec4(0.0, 0.0, 0.0, 1.0), distanceFromLight * 2.0);
Vertex shader:
gl_Position = _mvProj * vec4(vertex, 1.0);
uv = uv1;
Of course, we need to supply correct UV coordinates, but the point should be clear.
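Putting both stages together, a minimal sketch (assuming vertex and uv1 are the position attribute and per-vertex UV attribute, as in the snippets above):
Vertex shader:
attribute vec3 vertex;
attribute vec2 uv1;
uniform mat4 _mvProj;
varying vec2 uv;
void main(void)
{
    gl_Position = _mvProj * vec4(vertex, 1.0);
    uv = uv1;
}
Fragment shader:
varying vec2 uv;
void main(void)
{
    vec2 u_c = vec2(0.5, 0.5); // gradient center in UV space
    gl_FragColor = mix(vec4(1.0, 0.5, 1.0, 1.0), vec4(0.0, 0.0, 0.0, 1.0), length(uv - u_c) * 2.0);
}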
Here's an example:
http://goo.gl/A7pY01

reconstructed world position from depth is wrong

I'm trying to implement deferred shading/lighting. In order to reduce the number and size of the buffers, I wanted to use the depth texture to reconstruct the world position later on.
I do this by multiplying the pixel's coordinates by the inverse of the projection matrix and the inverse of the camera matrix. This sort of works, but the position is a bit off (I verified this by taking the absolute difference against a sampled world-position texture).
For reference, this is the code I use in the second pass fragment shader:
vec2 screenPosition_texture = vec2(gl_FragCoord.x / WIDTH, gl_FragCoord.y / HEIGHT);
float pixelDepth = texture2D(depth, screenPosition_texture).x;
vec4 worldPosition = pMatInverse * vec4(VertexIn.position, pixelDepth, 1.0);
worldPosition = vec4(worldPosition.xyz / worldPosition.w, 1.0);
//worldPosition /= 1.85;
worldPosition = cMatInverse * worldPosition;
If I uncomment worldPosition /= 1.85, the position is reconstructed a lot better (for my geometry and range of depth values). I got this value just by experimenting and comparing my output with the correct positions (stored in a third texture).
I'm using 0.1 for the near plane and 100.0 for the far plane, and my geometry is up to about 15 units away.
I know there may be precision errors, but this seems too big an error so close to the camera.
Did I miss anything here?
As mentioned in a comment:
I didn't convert the depth value from window space [0, 1] to NDC [-1, 1].
I should have added this line:
pixelDepth = pixelDepth * 2.0 - 1.0;
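With that line added, the corrected reconstruction reads (same names as in the question):
vec2 screenPosition_texture = vec2(gl_FragCoord.x / WIDTH, gl_FragCoord.y / HEIGHT);
float pixelDepth = texture2D(depth, screenPosition_texture).x;
pixelDepth = pixelDepth * 2.0 - 1.0;  // window-space [0,1] -> NDC [-1,1]
vec4 worldPosition = pMatInverse * vec4(VertexIn.position, pixelDepth, 1.0);
worldPosition = vec4(worldPosition.xyz / worldPosition.w, 1.0);
worldPosition = cMatInverse * worldPosition;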

Is a position buffer required for deferred rendering?

I'm trying to avoid the use of a position buffer by projecting screen-space points back into view space for lighting. I have tried multiplying by the inverse projection matrix, but this does not give back the view-space point. Is it worth adding the matrix multiplication to avoid the position buffer?
Final-pass Shader:
vec3 ScreenSpace = vec3(0.0, 0.0, 0.0);
ScreenSpace.xy = (texcoord.xy * 2.0) - 1.0;
ScreenSpace.z = texture2D(depthtex, texcoord.xy).x;
vec4 ViewSpace = InvProjectionMatrix * vec4(ScreenSpace, 1.0);
ViewSpace.xyz /= ViewSpace.w;
Most of your answer can be found in this answer, which is far too long and involved to repost. However, part of your problem is that you're using texcoord and not gl_FragCoord.
You want to use gl_FragCoord, because OpenGL guarantees it to be the right value (assuming your deferred pass and your lighting pass use images of the same size), no matter what. It also saves you from having to pass a value from the vertex shader to the fragment shader.
The downside is that you need the size of the output screen to interpret it. But that's easy enough, assuming again that the two passes use images of the same size:
ivec2 size = textureSize(depthtex, 0);
You can use size as the viewport size to convert gl_FragCoord.xy into texture coordinates and window-space positions, as in the sketch below.
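A sketch of that conversion (assuming the same depthtex sampler as above):
ivec2 size = textureSize(depthtex, 0);              // both passes render at this size
vec2 texcoord = gl_FragCoord.xy / vec2(size);       // window coords -> [0,1] texture coords
float pixelDepth = texture2D(depthtex, texcoord).x; // same lookup, now resolution-independent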

Apply custom projectionmatrix (to texturecoordinate) in GLSL

I want to view a flat fullscreen texture as if it were spherical, by transforming it in a post-process shader.
I figure I have to apply a projection matrix to the texture coordinates in the shader.
I found this website: http://www.songho.ca/opengl/gl_projectionmatrix.html which taught me a lot about the inner workings of the projection matrix.
But how do I apply it? I thought I would have to multiply the texture coordinate (with a calculated z value added to make it spherical) by the third row of the projection matrix. My efforts don't show any result, though.
EDIT: I see the same issue here: http://lists.openscenegraph.org/pipermail/osg-users-openscenegraph.org/2008-April/009765.html
I think that after you multiply the texture coords by the projection matrix, you have to do a perspective division and move from 3D to 2D (since the texture is 2D). This is the same as with shadow mapping.
// in the fragment shader:
uniform mat4 uniformModelViewProjMatrix;
uniform sampler2D sampler;
varying vec4 tex_coords;
void main(void)
{
    vec4 proj = uniformModelViewProjMatrix * tex_coords;
    proj.xyz /= proj.w;                     // perspective division
    proj.xyz = proj.xyz * 0.5 + vec3(0.5);  // remap [-1,1] to [0,1]
    gl_FragColor = texture2D(sampler, proj.xy);
}
Or look at http://www.ozone3d.net/tutorials/glsl_texturing_p08.php (for texture2DProj).
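For completeness, texture2DProj performs the division by the last component for you; the [0,1] remap still has to happen, and it is commonly folded into the matrix as a bias (a sketch, where biasedMatrix is a hypothetical bias * uniformModelViewProjMatrix):
uniform mat4 biasedMatrix;  // hypothetical: bias (scale/offset by 0.5) * uniformModelViewProjMatrix
vec4 proj = biasedMatrix * tex_coords;
vec4 col = texture2DProj(sampler, proj);  // divides proj.xy by proj.w internally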