Quick background of where I'm at (to make sure we're on the same page, and sanity check if I'm missing/assuming something stupid):
Goal: I want to render my scene with shadows, using deferred lighting
and shadowmaps.
Struggle: finding clear and consistent documentation regarding how to use shadow2D and sampler2DShadow.
Here's what I'm currently doing:
In the fragment shader of my final rendering pass (the one that actually calculates final frag values), I have the MVP matrices from the pass from the light's point of view, the depth texture from said pass (aka the "shadow map"), and the position/normal/color textures from my geometry buffer.
From what I understand, I need to find what UV of the shadow map the position of the current fragment corresponds to. I do that by the following:
//Bring the world-space position at this fragment to screen space from the light's POV
vec2 UVinShadowMap = (lightProjMat * lightViewMat * vec4(texture(pos_tex, UV).xyz, 1.0)).xy;
//Convert from screen space to 'texture space' (from -1..1 to 0..1)
UVinShadowMap = (UVinShadowMap + 1.0) / 2.0;
Now that I have this UV, I can get the perceived 'depth' from the light's POV with
float depFromLightPOV = texture2D(shadowMap, UVinShadowMap).r;
and compare that against the distance between the position at the current fragment and the light:
float actualDistance = distance(texture2D(pos_tex, UV).xyz, lightPos);
The problem is that the 'depth' is stored as values in 0-1, while the actual distance is in world coordinates. I've tried to do that conversion manually, but couldn't get it to work. And from searching online, it looks like the way I SHOULD be doing this is with a sampler2DShadow...
So here's my question(s):
What changes do I need to make to instead use shadow2D? What does shadow2D even do? Is it just more-or-less an auto-conversion-from-depth-to-world texture? Can I use the same depth texture? Or do I need to render the depth texture a different way? What do I pass in to shadow2D? The world-space position of the fragment I want to check? Or the same UV as before?
If all these questions can be answered in a simple documentation page, I'd love if someone could just post that. But I swear I've been searching for hours and can't find anything that simply says what the heck is going on with shadow2D!
Thanks!
First of all, what version of GLSL are you using?
Beginning with GLSL 1.30, there is no special texture lookup function (by name, anyway) for use with sampler2DShadow. GLSL 1.30+ uses a set of overloads of texture (...) that are selected based on the type of sampler passed and the dimensions of the coordinates.
Second, if you do use sampler2DShadow you need to do two things differently:
Texture comparison must be enabled or you will get undefined results (see the setup sketch after this list)
GL_TEXTURE_COMPARE_MODE = GL_COMPARE_REF_TO_TEXTURE
The coordinates you pass to texture (...) are 3D instead of 2D. The new 3rd coordinate is the depth value that you are going to compare.
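As a rough sketch of that setup, assuming a GLSL 1.30+ fragment shader and using illustrative names (shadowMapTex, shadowUV, depthRef):
// C side, when creating the light's depth texture:
glBindTexture(GL_TEXTURE_2D, shadowMapTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // enables the 4-sample comparison described below
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// GLSL side:
uniform sampler2DShadow shadowMap;
...
float lit = texture(shadowMap, vec3(shadowUV, depthRef)); // .xy = lookup coords, .z = depth to compare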
Last, you should understand what texture (...) returns when using sampler2DShadow:
texture (...) compares the 3rd coordinate you pass against the depth value it fetches from the texture, using GL_TEXTURE_COMPARE_FUNC. If this comparison passes, texture (...) will return 1.0; if it fails, it will return 0.0. If you use a GL_LINEAR texture filter on your depth texture, then texture (...) will perform 4 depth comparisons using the 4 closest depth values in your depth texture and return a value somewhere in-between 1.0 and 0.0 to give an idea of the number of samples that passed/failed.
That is the proper way to do hardware anti-aliasing of shadow maps. If you tried to use a regular sampler2D with GL_LINEAR and implement the depth test yourself you would get a single averaged depth back and a boolean pass/fail result instead of the behavior described above for sampler2DShadow.
As for getting a depth value to test from a world-space position, you were on the right track (though you forgot perspective division).
There are three things you must do to generate a depth from a world-space position:
Multiply the world-space position by your (light's) projection and view matrices
Divide the resulting coordinate by its W component
Scale and bias the result (which will be in the range [-1,1]) into the range [0,1]
The final step assumes you are using the default depth range... if you have not called glDepthRange (...) then this will work.
The end result of step 3 serves as both a depth value (R) and texture coordinates (ST) for lookup into your depth map. This makes it possible to pass this value directly to texture (...). Recall that the first 2 components of the texture coordinates are the same as always, and that the 3rd is a depth value to test.
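Put together as a fragment-shader sketch, using the matrix/texture names from the question and assuming shadowMap is a sampler2DShadow with comparison enabled:
vec3 worldPos    = texture(pos_tex, UV).xyz;
vec4 lightClip   = lightProjMat * lightViewMat * vec4(worldPos, 1.0); // step 1
vec3 ndc         = lightClip.xyz / lightClip.w;                       // step 2: perspective division
vec3 shadowCoord = ndc * 0.5 + 0.5;                                   // step 3: [-1,1] -> [0,1]
float lit = texture(shadowMap, shadowCoord); // .xy = lookup coords, .z = depth to compare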
I have a GLSL shader that draws a 3D curve given a set of Bezier curves (3d coordinates of points). The drawing itself is done as I want except the occlusion does not work correctly, i.e., under certain viewpoints, the curve that is supposed to be in the very front appears to be still occluded, and reverse: the part of a curve that is supposed to be occluded is still visible.
To illustrate, here are a couple of example screenshots:
Colored curve is closer to the camera, so it is rendered correctly here.
Colored curve is supposed to be behind the gray curve, yet it is rendered on top.
I'm new to GLSL and might not know the right term for this kind of effect, but I assume it is occlusion culling (update: it actually indicates a problem with the depth buffer; terminology confusion!).
My question is: How do I deal with occlusions when using GLSL shaders?
Do I have to treat them inside the shader program, or somewhere else?
Regarding my code, it's a bit long (plus I use OpenGL wrapper library), but the main steps are:
In the vertex shader, I calculate gl_Position = ModelViewProjectionMatrix * Vertex; and pass further the color info to the geometry shader.
In the geometry shader, I take 4 control points (lines_adjacency) and their corresponding colors and produce a triangle strip that follows a Bezier curve (I use some basic color interpolation between the Bezier segments).
The fragment shader is also simple: gl_FragColor = VertexIn.mColor;.
Regarding the OpenGL settings, I enable GL_DEPTH_TEST, but it does not seem to do what I need. Also, if I put any other non-shader geometry in the scene (e.g. a quad), the curves are always rendered on top of it regardless of the viewpoint.
Any insights and tips on how to resolve it and why it is happening are appreciated.
Update solution
So, the initial problem, as I learned, was not about finding a culling algorithm, but that I was not handling the calculation of the z-values correctly (see the accepted answer). I also learned that, given the right depth buffer set-up, OpenGL handles the occlusions correctly by itself, so I do not need to re-invent the wheel.
I searched through my GLSL program and found that I was basically setting the z-values to zero in my geometry shader when translating the vertex coordinates to screen coordinates (vec2( vertex.xy / vertex.w ) * Viewport;). I fixed it by calculating the z-values (vertex.z / vertex.w) separately and assigning them to the emitted points (gl_Position = vec4( screenCoords[i], zValues[i], 1.0 );). That solved my problem.
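A minimal sketch of that fix inside the geometry shader (variable names follow the snippets above and are otherwise illustrative):
vec2 screenCoord = (vertex.xy / vertex.w) * Viewport; // 2D position, as before
float zValue     = vertex.z / vertex.w;               // the depth that was previously dropped to zero
gl_Position = vec4(screenCoord, zValue, 1.0);
EmitVertex();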
Regarding the depth buffer settings, I didn't have to explicitly specify them since the library I use set them up by default correctly as I need.
If you don't use the depth buffer, then the most recently rendered object will be on top always.
You should enable it with glEnable(GL_DEPTH_TEST), set the function to your liking (glDepthFunc(GL_LEQUAL)), and make sure you clear it every frame with everything else (glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)).
Then make sure your vertex shader is properly setting the Z value of the final vertex. It looks like the simplest way for you is to set the "Model" portion of ModelViewProjectionMatrix on the CPU side to have a depth value before it gets passed into the shader.
As long as you're using an orthographic projection matrix, rendering should not be affected (besides making the draw order correct).
I'm having a bit of trouble getting a depth value that I'm storing in a float texture (or rather, I don't understand the values). Essentially I am creating a deferred renderer, and in one of the passes I am storing the depth in the alpha component of a floating-point render target. The code for that shader looks something like this:
Define the clip position as a varying
varying vec4 clipPos;
...
In the vertex shader assign the position
clipPos = gl_Position;
Now in the fragment shader I store the depth:
gl_FragColor.w = clipPos.z / clipPos.w;
This by and large works. When I access this render target in any subsequent shaders I can get the depth, i.e. something like this:
float depth = depthMap.w;
Am I right to assume that 0.0 is right in front of the camera and 1.0 is in the distance? Because I am doing some fog calculations based on this, but they don't seem to be correct.
fogFactor = smoothstep( fogNear, fogFar, depth );
fogNear and fogFar are uniforms I send to the shader. When fogNear is set to 0, I would have thought I'd get a smooth transition of fog from right in front of the camera to its draw distance. However, this is what I see:
When I set fogNear to 0.995, then I get something more like what I'm expecting:
Is that correct? It just doesn't seem right to me. (The scale of the geometry is not really small or too large, and neither are the camera near and far; all the values are pretty reasonable.)
There are two issues with your approach:
You assume the depth is in the range [0,1], but what you use is clipPos.z / clipPos.w, which is the NDC z coordinate in the range [-1,1]. You might be better off directly writing the window-space z coordinate to your depth texture, which is in [0,1] and is simply gl_FragCoord.z.
The more serious issue is that you assume a linear depth mapping. However, that is not the case. The NDC and window-space z values are not a linear representation of the distance to the camera plane. It is not surprising that everything you see in the screenshot is very close to 1. Typically, fog calculations are done in eye space. However, since you only need the z coordinate here, you could simply store the clip-space w coordinate - since typically, that is just -z_eye (look at the last row of your projection matrix). The resulting value will not be in any normalized range, though, but in the [near,far] range you use in your projection matrix - but specifying fog distances in eye-space units (which are normally identical to world-space units) is more intuitive anyway.
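A sketch of that approach, reusing the setup from the question (eyeZ is an illustrative name; fogNear/fogFar become plain eye-space distances):
// Vertex shader of the geometry pass: keep an eye-space distance instead of NDC z
varying float eyeZ;
...
eyeZ = -(gl_ModelViewMatrix * gl_Vertex).z; // for a typical projection this equals clip-space w
// Fragment shader of the geometry pass:
gl_FragColor.w = eyeZ;
// Fog pass:
float depth = depthMap.w; // fetched as in the question
fogFactor = smoothstep(fogNear, fogFar, depth);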
I had an idea for fog that I would like to implement in OpenGl: After the scene is rendered, a quad is rendered over the entire viewport. In the fragment shader, this quad samples the depth buffer at that location and changes its color/alpha in order to make that pixel as foggy as needs be.
Now I know I can render the scene with the depth buffer linked to a texture, render the scene normally and then render the fog, passing it that texture, but this is one rendering too many. I wish to be able to either
Directly access the current depth buffer from the fragment shader
Be able to render the scene once, both to the normal depth buffer/screen and to the texture for fog.
Is this possible?
What you're thinking of (accessing the target framebuffer for input) would result in a feedback loop which is forbidden.
(…), but this is one rendering too many.
Why do you think that? You don't have to render the whole scene anew, just the fog overlay on top of it.
I wish to be able to either
Directly access the current depth buffer from the fragment shader
If you want to access only the depth of the newly rendered fragment, just use gl_FragCoord.z. This variable (which should only be read, to keep performance) holds the depth-buffer value the new fragment will have.
See the GLSL Specification:
The variable gl_FragCoord is available as an input variable from within fragment shaders
and it holds the window relative coordinates (x, y, z, 1/w) values for the fragment.
If multi-sampling, this value can be for any location within the pixel, or one of the
fragment samples. The use of centroid in does not further restrict this value to be
inside the current primitive. This value is the result of the fixed functionality that
interpolates primitives after vertex processing to generate fragments. The z component
is the depth value that would be used for the fragment’s depth if no shader contained
any writes to gl_FragDepth. This is useful for invariance if a shader conditionally
computes gl_FragDepth but otherwise wants the fixed functionality fragment depth.
Be able to render the scene once, both to the normal depth buffer/screen and to the texture for fog.
What's so wrong with first rendering the scene normally, with depth going into a separate depth texture attachment, then rendering the fog, and finally compositing them? The computational complexity does not increase by this. Just because it's more steps, it's not doing more work than in your imagined solution, since the individual steps become simpler.
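A rough C-side sketch of that setup with standard OpenGL calls (sceneFBO, width and height are assumed to exist already):
GLuint depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);
// render the scene into sceneFBO as usual, then bind depthTex as a texture when drawing the fog quad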
Distance from the camera to the pixel:
float z = gl_FragCoord.z / gl_FragCoord.w;
The solution you're thinking of is a common one, but there is no need for an extra sampling pass with a quad: everything is already there to compute the fog in one pass, as long as the depth buffer is enabled. Here is an implementation:
const float LOG2 = 1.442695;
float z = gl_FragCoord.z / gl_FragCoord.w;
float fogFactor = exp2(-gl_Fog.density * gl_Fog.density * z * z * LOG2);
fogFactor = clamp(fogFactor, 0.0, 1.0);
gl_FragColor = mix(gl_Fog.color, finalColor, fogFactor);
So I'm working on implementing shadow mapping. So far, I've rendered the geometry (depth, normals, colors) to a framebuffer from the camera's point of view, and rendered the depth of the geometry from the light's point of view. Now, I'm rendering the lighting from the camera's point of view, and for each fragment I compare its distance to the light against the depth-texture value from the render-from-the-light's-POV pass. If the distance is greater, it's in shadow. (Just recapping here to make sure there isn't anything I don't realize I don't understand.)
So, to do this last step, I need to convert the depth value [0-1] to its eye-space value [0.1-100] (my near/far planes). (explanation here- Getting the true z value from the depth buffer).
Is there any reason not to instead just have the render-from-the-light's-POV pass write the distance from the fragment to the camera (the z component) directly to a texture? Then we wouldn't have to deal with the ridiculous conversion. Or am I missing something?
You can certainly write your own depth value to a texture, and many people do just that. The advantage of doing that is that you can choose whatever representation and mapping you like.
The downside is that you have to either a) still have a "real" depth buffer attached to your FBO (and therefore double up the bandwidth you're using for depth writing), or b) use GL_MIN/GL_MAX blending mode (depending on how you are mapping depth) and possibly miss out on early-z out optimizations.
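For instance, the light-POV pass could write an eye-space distance into a floating-point color attachment while keeping a normal depth buffer for the depth test (option a above; names are illustrative):
// Vertex shader of the light's pass:
varying float lightDist;
...
lightDist = length((lightViewMat * vec4(worldPos, 1.0)).xyz); // distance to the light, in world units
// Fragment shader of the light's pass:
gl_FragColor.r = lightDist;
The lighting pass can then compare this value directly against distance(fragPos, lightPos), with no [0,1] conversion.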
This might be a simple question. As a newbie on GLSL, I would rather ask here.
Now, in the vertex shader, I can get the position in world coordinate system in the following way:
gl_Position = ftransform();
posWorld = gl_ModelViewMatrix * gl_Vertex;
The question is: how can I get the max/min value of posWorld among all the vertices? That way I can get the range of the vertex depth, rather than the range of the depth buffer.
If this is not possible, how can I get the z value of the near/far plane in the world coordinate system?
Yes it is possible with OpenGL. I'm doing a similar technique for calculating object's bounding box on GPU. Here are the steps:
Arrange a render buffer of size 1x1 and format GL_RGBA32F in its own FBO. Set it as the render target (no depth/stencil, just a single color plane). It can be a pixel of a bigger texture, in which case you'll need to set up the viewport correctly.
Clear with basic value. For 'min' it will be some huge number, for 'max' it's negative huge.
Set up the blending function 'min' or 'max' correspondingly with coefficients (1,1).
Draw your mesh with a shader that produces a point at the (0,0,0,1) coordinate, outputting a color that contains the original vertex's world-space position.
You can go further optimizing from here. For example, you can get both 'min' and 'max' in one draw call by utilizing the geometry shader and negating the position for one of the output pixels.
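A hedged C-side sketch of those steps for the 'min' case (the 1x1 FBO, shader program and mesh handles are assumed to already exist; the mesh is drawn as points so every vertex produces a fragment at the single pixel):
#include <float.h> /* for FLT_MAX */

glBindFramebuffer(GL_FRAMEBUFFER, reductionFBO); /* 1x1 GL_RGBA32F color attachment, no depth */
glViewport(0, 0, 1, 1);

GLfloat init[4] = { FLT_MAX, FLT_MAX, FLT_MAX, FLT_MAX }; /* identity value for 'min' */
glClearBufferfv(GL_COLOR, 0, init);

glEnable(GL_BLEND);
glBlendEquation(GL_MIN);     /* use GL_MAX (and a -FLT_MAX clear) for the maximum */
glBlendFunc(GL_ONE, GL_ONE);

/* minMaxProgram: the vertex shader sets gl_Position = vec4(0.0, 0.0, 0.0, 1.0)
   and passes the world-space vertex position on to be written as the fragment color */
glUseProgram(minMaxProgram);
glBindVertexArray(meshVAO);
glDrawArrays(GL_POINTS, 0, vertexCount);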
From what I know, I think this needs to be done manually with an algorithm based on parallel reduction. I would like someone to confirm whether or not there is an OpenGL or GLSL function that already does this.
On the other hand, you do have access to the normalized near/far planes within a fragment shader:
http://www.opengl.org/wiki/GLSL_Predefined_Variables#Fragment_shader_uniforms.
and with the help of some uniform variables you can get the world-space near/far.
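For instance, if you pass the camera's near/far planes in as uniforms, an eye-space z can be reconstructed from a [0,1] depth value in the fragment shader. This is only a sketch assuming a standard perspective projection; uNear and uFar are illustrative uniform names:
uniform float uNear; // camera near plane
uniform float uFar;  // camera far plane

float eyeZFromDepth(float d)    // d is a depth-buffer value in [0,1]
{
    float ndcZ = d * 2.0 - 1.0; // [0,1] -> [-1,1]
    return (2.0 * uNear * uFar) / (uFar + uNear - ndcZ * (uFar - uNear));
}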