2 pass effect in OpenGL

I am trying to create a two-pass effect using an FBO in OpenGL.
In the first pass, I write the depth into a color buffer (image 1), using the following vertex shader:
gl_Position = projection * view * gl_Vertex;
vec4 position = gl_Position / gl_Position.w;
position = position / 2.0 + 0.5;
float temp_depth = position.z;
gl_FrontColor = vec4(temp_depth, temp_depth, temp_depth, 1.0);
In the second pass I sample the texture from the previous pass to color the scene (image 2).
Here is the vertex shader code:
vec4 shadow_coord = projection * view * gl_Vertex;
shadow_coord = shadow_coord / shadow_coord.w;
shadow_coord = shadow_coord / 2.0 + 0.5;
gl_FrontColor = texture2D(light_depth_texture, shadow_coord.xy);
The scene consists of a quad in front of a cone. In both passes the fragment shader is simply gl_FragColor = gl_Color; and the view and projection matrices are exactly the same, defined once at the start. The problem is that there is a deviation in shadow_coord.xy.
As long as the view and projection values are exactly the same, shouldn't I get the same result?
What can I do to fix it?

What resolution do you use for the texture you render into, and what kind of filtering? (It looks like linear; it should be nearest.) Also try offsetting the coordinate you read from, like this:
// offset should be 0.5 / texture_resolution
gl_FrontColor = texture2D(light_depth_texture, shadow_coord.xy + offset);
And as the other commenters mentioned, 8 bits are not enough to store depth values; consider using a depth texture or a floating-point format (like GL_R32F from ARB_texture_rg).
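For instance, a minimal sketch of computing that offset in the second-pass vertex shader, assuming a hypothetical uniform light_depth_texture_size that holds the shadow texture's resolution (it is not part of the original code):

uniform vec2 light_depth_texture_size; // assumed uniform, e.g. vec2(512.0, 512.0)
...
// inside main(), after computing shadow_coord as above:
vec2 offset = vec2(0.5) / light_depth_texture_size; // half a texel in UV units
gl_FrontColor = texture2D(light_depth_texture, shadow_coord.xy + offset);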

Related

Mirrors with deferred rendering and ambient occlusion

As you can tell from the title, I'm trying to create a mirror reflection while using deferred rendering and ambient occlusion. For ambient occlusion I'm specifically using the SSAO algorithm.
To create the mirror I use the basic idea of reflecting all the models to the other side of the mirror and then rendering only the parts visible through the mirror.
Using deferred rendering, I decided to do this during the creation of the gBuffer. In order to achieve correct lighting of the reflected objects, I made sure that the positions and normals of the reflected objects in the gBuffer are the same as those of their non-reflected versions. That way, both the actual models and their mirror images receive the same lighting.
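Roughly, the gBuffer pass for the reflected geometry does something like the following sketch (simplified, with placeholder varying and output names; it is not my actual code):

// Fragment shader sketch: the reflected vertices drive rasterization, but the
// view-space position and normal of the matching non-reflected vertex are stored.
in vec3 origViewPos;     // placeholder: view-space position of the non-reflected vertex
in vec3 origViewNormal;  // placeholder: view-space normal of the non-reflected vertex

layout (location = 0) out vec3 gPosition;
layout (location = 1) out vec3 gNormal;

void main()
{
    gPosition = origViewPos;
    gNormal = normalize(origViewNormal);
}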
My problem is now with the SSAO algorithm. It seems that the reflected objects are calculated to be highly occluded, which results in the black areas you can see in the mirror:
I've noticed that these black areas appear only in places that are not in my view. Things that I can see without the mirror have no unexpected black spots on them.
Note that the data in the gBuffer are all in view space, so there must be a connection there. Maybe the random samples used during SSAO, or their normals, are not calculated correctly.
So, this is the fragment shader for the ambient occlusion:
// Inputs and uniforms referenced below (declared here for completeness;
// qualifiers assumed as in a typical SSAO setup)
in vec2 TexCoords;

uniform sampler2D gPosition;   // view-space positions
uniform sampler2D gNormal;     // view-space normals
uniform sampler2D texNoise;    // small random-rotation texture
uniform vec3 samples[64];      // hemisphere sample kernel
uniform mat4 projection;
uniform vec2 noiseScale;
uniform float radius;
uniform float bias;

out vec4 occl;

void main()
{
    vec3 fragPos = texture(gPosition, TexCoords).xyz;
    vec3 normal = texture(gNormal, TexCoords).rgb;
    vec3 randomVec = texture(texNoise, TexCoords * noiseScale).xyz;

    // Build a TBN basis to rotate the sample kernel around the normal
    vec3 tangent = normalize(randomVec - normal * dot(randomVec, normal));
    vec3 bitangent = cross(normal, tangent);
    mat3 TBN = mat3(tangent, bitangent, normal);

    float occlusion = 0.0;
    const int kernelSize = 64;
    for (int i = 0; i < kernelSize; ++i)
    {
        // get sample position
        vec3 sample = TBN * samples[i];       // from tangent to view space
        sample = fragPos + sample * radius;

        vec4 offset = vec4(sample, 1.0);
        offset = projection * offset;         // from view to clip space
        offset.xyz /= offset.w;               // perspective divide
        offset.xyz = offset.xyz * 0.5 + 0.5;  // to [0, 1] texture coordinates

        float sampleDepth = texture(gPosition, offset.xy).z;
        float rangeCheck = smoothstep(0.0, 1.0, radius / abs(fragPos.z - sampleDepth));
        occlusion += (sampleDepth >= sample.z + bias ? 1.0 : 0.0) * rangeCheck;
    }
    occlusion = 1.0 - (occlusion / float(kernelSize));

    occl = vec4(occlusion, occlusion, occlusion, 1.0);
}
Any ideas as to why these black areas appear or suggestions to correct them?
I could just ignore the ambient occlusion in the reflection but I'm not happy with that.
Maybe if the ambient occlusion shader used the positions and normals of the reflected objects there would be no problem, but then I would have to store more data in the gBuffer, so I gave up on that idea for now.

Getting World Position from Depth Buffer Value

I've been working on a deferred renderer to do lighting with, and it works quite well, although it uses a position buffer in my G-buffer. Lighting is done in world space.
I have tried to implement an algorithm to reconstruct the world-space positions from the depth buffer and the texture coordinates, but with no luck.
My vertex shader is nothing particularly special, but this is the part of my fragment shader in which I (attempt to) calculate the world space position:
// Inverse projection matrix
uniform mat4 projMatrixInv;
// Inverse view matrix
uniform mat4 viewMatrixInv;
// texture position from vertex shader
in vec2 TexCoord;
... other uniforms ...
void main() {
    // Recalculate the fragment position from the depth buffer
    float Depth = texture(gDepth, TexCoord).x;
    vec3 FragWorldPos = WorldPosFromDepth(Depth);

    ... fun lighting code ...
}
// Linearizes a Z buffer value
float CalcViewZ(float depth) {
    const float zFar = 100.0;
    const float zNear = 0.1;

    // bias it from [0, 1] to [-1, 1]
    float linear = zNear / (zFar - depth * (zFar - zNear)) * zFar;
    return (linear * 2.0) - 1.0;
}

// this is supposed to get the world position from the depth buffer
vec3 WorldPosFromDepth(float depth) {
    float ViewZ = CalcViewZ(depth);

    // Get clip space
    vec4 clipSpacePosition = vec4(TexCoord * 2.0 - 1.0, ViewZ, 1.0);

    // Clip space -> View space
    vec4 viewSpacePosition = projMatrixInv * clipSpacePosition;

    // Perspective division
    viewSpacePosition /= viewSpacePosition.w;

    // View space -> World space
    vec4 worldSpacePosition = viewMatrixInv * viewSpacePosition;

    return worldSpacePosition.xyz;
}
I still have my position buffer, and I sample it to compare against the calculated position later, so if the reconstruction works everything should be black:
vec3 actualPosition = texture(gPosition, TexCoord).rgb;
vec3 difference = abs(FragWorldPos - actualPosition);
FragColour = vec4(difference, 0.0);
However, what I get is nowhere near the expected result, and of course, lighting doesn't work:
(Try to ignore the blur around the boxes, I was messing around with something else at the time.)
What could cause these issues, and how could I get the position reconstruction from depth working successfully? Thanks.
You are on the right track, but you have not applied the transformations in the correct order.
A quick recap of what you need to accomplish here might help:
1. Given texture coordinates in [0,1] and depth in [0,1], calculate the clip-space position
   - Do not linearize the depth buffer value
   - Output: w = 1.0 and x, y, z in [-w, w]
2. Transform from clip-space to view-space (reverse projection)
   - Use the inverse projection matrix
   - Perform the perspective divide
3. Transform from view-space to world-space (reverse viewing transform)
   - Use the inverse view matrix
The following changes should accomplish that:
// this is supposed to get the world position from the depth buffer
vec3 WorldPosFromDepth(float depth) {
    float z = depth * 2.0 - 1.0;

    vec4 clipSpacePosition = vec4(TexCoord * 2.0 - 1.0, z, 1.0);
    vec4 viewSpacePosition = projMatrixInv * clipSpacePosition;

    // Perspective division
    viewSpacePosition /= viewSpacePosition.w;

    vec4 worldSpacePosition = viewMatrixInv * viewSpacePosition;
    return worldSpacePosition.xyz;
}
I would consider changing the name of CalcViewZ (...) though, as it is rather misleading. Consider calling it something more appropriate like CalcLinearZ (...).

Deferred Shadow Mapping GLSL

I'm currently implementing a deferred rendering pipeline and I'm stuck with shadow mapping.
I've already implemented it successfully in a forward pipeline.
The steps I do are:
1. Get the position in light view
2. Convert to light-view clip space
3. Get the shadow texture coordinates with * 0.5 + 0.5
4. Check the depth
Edit: Updated code with new result image:
float checkShadow(vec3 position) {
    // get position in light view
    mat4 invView = inverse(cameraView);
    vec4 pEyeDir = sunBias * sunProjection * sunView * invView * vec4(position, 1.0);

    // light view clip space
    pEyeDir = pEyeDir / pEyeDir.w;

    // get uv coordinates
    vec2 sTexCoords = pEyeDir.xy * 0.5 + 0.5;

    float bias = 0.0001;
    float depth = texture(sunDepthTex, sTexCoords).r - bias;

    float shadow = 1.0f;
    if (pEyeDir.z * 0.5 + 0.5 > depth)
    {
        shadow = 0.3f;
    }
    return shadow;
}
Here are some variables important for the code above:
vec3 position = texture(positionTex, uv).rgb;
Also, I get a dark background (the meshes stay the same) at some camera positions; it only happens when I multiply the shadow value into the final color.
As requested, here are the position and sun depth textures:
OK, I fixed it. The problem was that the light depth texture had a different size than the gBuffer textures.
To use different texture sizes I had to normalize the coordinates with
coords = (coords / imageSize) * windowSize;
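In context, that fix might look something like the sketch below; the uniform names imageSize and windowSize are assumptions standing in for however those sizes actually reach the shader, and coords is a [0,1] screen UV:

uniform vec2 imageSize;   // assumed: size of the mismatched texture being sampled
uniform vec2 windowSize;  // assumed: size of the window that was rendered
...
// rescale the lookup coordinates by the ratio of the two sizes
coords = (coords / imageSize) * windowSize;  // equivalent to coords * (windowSize / imageSize)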

Texture repeating and clamping in shader

I have the following fragment and vertex shader, in which I repeat a texture:
//Fragment
vec2 texcoordC = gl_TexCoord[0].xy;
texcoordC *= 10.0;
texcoordC.x = mod(texcoordC.x, 1.0);
texcoordC.y = mod(texcoordC.y, 1.0);
texcoordC.x = clamp(texcoordC.x, 0.0, 0.9);
texcoordC.y = clamp(texcoordC.y, 0.0, 0.9);
vec4 texColor = texture2D(sampler, texcoordC);
gl_FragColor = texColor;
//Vertex
gl_TexCoord[0] = gl_MultiTexCoord0;
colorC = gl_Color.r;
gl_Position = ftransform();
ADDED: After this process, I fetch the texture coordinates and use a texture pack:
vec4 textureGet(vec2 texcoord) {
    // Tile is 1.0/16.0 part of texture, on x and y
    float tileSp = 1.0 / 16.0;
    vec4 color = texture2D(sampler, texcoord);

    // Get tile x and y from the stored red channel
    float texTX = mod(color.r, tileSp);
    float texTY = color.r - texTX;
    texTX /= tileSp;

    // Testing tile
    texTX = 1.0 - tileSp;
    texTY = 1.0 - tileSp;

    vec2 savedC = color.yz;

    // This if/else statement can be ignored. I use time to move the texture; seams show without it as well.
    if (color.r > 0.1) {
        savedC.x = mod(savedC.x + sin(time / 200.0 * (color.r * 3.0)), 1.0);
        savedC.y = mod(savedC.y + cos(time / 200.0 * (color.r * 3.0)), 1.0);
    } else {
        savedC.x = mod(savedC.x + time * (color.r * 3.0) / 1000.0, 1.0);
        savedC.y = mod(savedC.y + time * (color.r * 3.0) / 1000.0, 1.0);
    }

    vec2 texcoordC = vec2(texTX + savedC.x * tileSp, texTY + savedC.y * tileSp);
    vec4 res = texture2D(texturePack, texcoordC);
    return res;
}
I have some trouble with seams showing (of about 1 pixel, it seems), however. If I leave out texcoordC *= 10.0 no seams are shown (or barely), but if I leave it in they appear. I clamp the coordinates (I even tried values lower than 1.0 and bigger than 0.0) to no avail. I strongly have the feeling it is a rounding error somewhere, but I have no idea where. ADDED: Something to note is that in the actual case I convert the texcoordC x and y to 8-bit values. I think the cause lies here; I added another shader describing this above.
The case I show is a little more complicated in reality, so there is no use for me to do this outside the shader(!). I added the previous question, which explains a little about the case.
EDIT: As you can see, the natural texture span is divided by 10, and the texture is repeated (10 times). The seams appear at the border of every repeated texture. I also added a screenshot. The seams are the very thin lines (~1 pixel). The picture is a cut-out from a screenshot, not scaled. The repeated texture is 16x16, with 256 subpixels total.
EDIT: This is a follow-up to this question, although all necessary info should be included here.
The last picture has no time-based movement added.
Looking at the render of the UV coordinates, they are being filtered, which will cause the same issue as in your previous question, but on a smaller scale. What is happening is that by sampling the UV coordinate texture at a point between two discontinuous values (i.e. two adjacent points where the texture coordinates wrapped), you get an interpolated value which isn't in the right part of the texture. Thus the boundary between texture tiles is a mess of pixels from all over that tile.
You need to get the mapping 1:1 between screen pixels and the captured UV values. Using nearest sampling might get you some of the way there, but it should be possible to do without using that, if you have the right texture and pixel coordinates in the first place.
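For example, if a GLSL 1.30+ profile is available, an unfiltered 1:1 read of the captured UV texture could look like this (an illustrative sketch, assuming the captured texture matches the screen resolution; not code from the question):

// texelFetch reads exactly one texel, with no filtering or wrapping,
// so adjacent screen pixels never blend discontinuous UV values together.
vec4 capturedUV = texelFetch(sampler, ivec2(gl_FragCoord.xy), 0);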
Secondly, you may find you get bleeding effects due to the way you are doing the texture atlas lookup, as you don't account for the way texels are sampled. This will be amplified if you use any mipmapping. Ideally you need a border, and possibly some massaging of the coordinates to account for half-texel offsets. However I don't think that's the main issue you're seeing here.
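For the atlas lookup itself, here is a minimal sketch of keeping each sample at least half a texel inside its tile; atlasFetch, atlasSize, tileOrigin and inTileUV are hypothetical names, with atlasSize assumed to hold the texture pack's resolution (e.g. vec2(256.0) for 16x16 tiles of 16x16 texels):

uniform vec2 atlasSize; // assumed: texture pack resolution in texels, e.g. vec2(256.0)

vec4 atlasFetch(vec2 tileOrigin, vec2 inTileUV) {
    float tileSp = 1.0 / 16.0;              // one tile in UV units
    vec2 halfTexel = vec2(0.5) / atlasSize; // half a texel in UV units
    // clamp the in-tile coordinate so the bilinear footprint stays inside the tile
    vec2 uv = tileOrigin + clamp(inTileUV * tileSp, halfTexel, vec2(tileSp) - halfTexel);
    return texture2D(texturePack, uv);
}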

Draw the depth value in OpenGL using shaders

I want to draw the depth buffer in the fragment shader, so I do this:
Vertex shader:
varying vec4 position_;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
position_ = gl_ModelViewProjectionMatrix * gl_Vertex;
Fragment shader:
float depth = ((position_.z / position_.w) + 1.0) * 0.5;
gl_FragColor = vec4(depth, depth, depth, 1.0);
But all I print is white, what am I doing wrong?
In what space do you want to draw the depth? If you want to draw the window-space depth, you can do this:
gl_FragColor = vec4(gl_FragCoord.z);
However, this will not be particularly useful, since most of the numbers will be very close to 1.0. Only extremely close objects will be visible. This is the nature of the distribution of depth values for a depth buffer using a standard perspective projection.
Or, to put it another way, that's why you're getting white.
If you want these values in a linear space, you will need to do something like the following:
float ndcDepth =
    (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) /
    (gl_DepthRange.far - gl_DepthRange.near);
float clipDepth = ndcDepth / gl_FragCoord.w;
gl_FragColor = vec4((clipDepth * 0.5) + 0.5);
Indeed, the "depth" value of a fragment can be read from its z value in clip space (that is, after all matrix transformations). That much is correct.
However, your problem is in the division by w.
Division by w is called the perspective divide. Yes, it is necessary for perspective projection to work correctly.
However, division by w in this case "bunches up" all your values (as you have seen) so that they end up very close to 1.0. There is a good reason for this: in a perspective projection, w = (some multiplier) * z. That is, you are dividing the z value (whatever it was computed to be) by (some factor of) the original z. No wonder you always get values near 1.0; you're almost dividing z by itself.
As a very simple fix for this, try dividing z just by the farPlane, and send that to the fragment shader as depth.
Vertex shader:
varying float DEPTH;
uniform float FARPLANE; // send this in as a uniform to the shader

gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
DEPTH = gl_Position.z / FARPLANE; // do not divide by w
Fragment shader:
varying float DEPTH;

// far things appear white, near things black
gl_FragColor.rgb = vec3(DEPTH, DEPTH, DEPTH);
The result is a not-bad, very linear-looking fade.