After implementing shadows for a spotlight, it appears that the bias computation makes the shadow disappear when my spotlight is too far from the objects.
I have been trying to solve this problem for two days, and I use RenderDoc to debug my renderer, so I know that all the data inside the shader are correct.
My Case:
I use a 32-bit depth buffer.
I have two cubes, one behind the other (the back one bigger, so the shadow of its neighbour is visible), and a light looking toward the cubes; they are aligned along the z-axis.
I used the following formula, found in a tutorial, to calculate the bias:
float bias = max(max_bias * (1.0 - dot(normal, lightDir)), min_bias);
And I perform the following comparison:
return (fragment_depth - shadow_texture_depth - bias > 0.0) ? 0.0 : 1.0;
However, the farther my spotlight is from the objects, the closer the depth value of the nearest cube gets to the depth of the farthest cube (a difference of about 10^-3, and it decreases with distance from the light).
That part works as expected; the perspective projection is doing its job.
But the bias calculation doesn't take the distance from the fragment to the light into account. If my objects and my light stay aligned, normal and lightDir don't change, so the bias doesn't change either: there is no more shadow on my farthest cube because the bias no longer suits the shrinking depth differences.
I have searched many websites and books (all the Game Programming Gems volumes), but I didn't find a useful formula.
Here are two cases, each shown as a pair of screenshots: the colour result from the camera's point of view and the shadow map from the light's point of view.
light position (0, 0, 0), everything works
light position (0, 0, 1.5), doesn't work
Does anybody have a formula or an idea to help me?
Did I misunderstand something?
Thanks for reading.
You bias the difference, which is in post-projective space.
Post-projective depth is non-linear: the depth buffer's precision is heavily concentrated near the near plane. So you cannot just offset this difference with a bias that is expressed in world units.
If you want to make it work, you have to reconstruct your sampling position with this normal offset.
Or transform your depths into world space using the inverse projection.
Hope it helps!
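For illustration, a minimal GLSL sketch of the second option, using the standard near/far linearization instead of a full inverse-projection multiply (valid for a standard perspective projection; zNear, zFar and all function and parameter names are placeholders for the light's setup):
// Convert a [0,1] depth-buffer value back to a positive eye-space distance.
float linearizeDepth(float depth, float zNear, float zFar)
{
    float z_ndc = 2.0 * depth - 1.0;  // remap [0,1] to NDC [-1,1]
    return 2.0 * zNear * zFar / (zFar + zNear - z_ndc * (zFar - zNear));
}

float shadowTest(float fragment_depth, float shadow_texture_depth,
                 float bias_world_units, float zNear, float zFar)
{
    // compare in eye space, where a bias in world units keeps its meaning
    // no matter how far the light is from the objects
    float fragDist   = linearizeDepth(fragment_depth, zNear, zFar);
    float shadowDist = linearizeDepth(shadow_texture_depth, zNear, zFar);
    return (fragDist - shadowDist > bias_world_units) ? 0.0 : 1.0;
}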
Related
Hello, I want to implement a simple 2D lighting technique in GLSL. My projection matrix is set up so that the top-left corner of the window is (0, 0) and the bottom-right is (window.width, window.height). I have one uniform variable in the fragment shader, uniform vec2 lightPosition;, which is currently set to the mouse position (again, in the same coordinate system). I have also calculated the distance from the light to the pixel.
I want to light up the pixel according to its distance from the light source. But here's the catch: I don't want to light it up more than its original color. For instance, if the color of the pixel is (1, 0, 0) (red), then no matter how close the light gets to it, it should not change more than that; going brighter adds annoying specularity. And the farther the light source moves away from the pixel, the darker I want it to get.
I really feel that I'm close to getting what I want, but I just can't get it!
I would really appreciate some help. I feel that this is a rather simple thing to implement (and I feel ashamed for not knowing it).
Why not scale the distance to the <0..1> range by clamping it to some maximum visibility distance vd and dividing by it:
d = min( length(fragment_pos-light_pos) , vd ) / vd;
That gives you the <0..1> range for the distance of the fragment to the light. Now you can optionally apply a simple non-linearization if you want (using pow, which does not change the range):
d = pow(d,0.5);
or
d = pow(d,2.0);
depending on what you think looks better (you can play with the exponent), and finally compute the color:
col = face_color * ((1.0-d)*0.8 + 0.2);
where 0.8 is your light source strength and 0.2 is the ambient lighting.
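Putting those pieces together, a minimal fragment-shader sketch; every name except lightPosition is an assumption, and note that gl_FragCoord has a lower-left origin, so you may need to flip y to match your top-left coordinate system:
uniform vec2 lightPosition;  // light position in window coordinates
uniform float vd;            // hypothetical: maximum visibility distance
varying vec4 face_color;     // hypothetical: the fragment's base color

void main()
{
    // distance from fragment to light, clamped and scaled to <0..1>
    float d = min(length(gl_FragCoord.xy - lightPosition), vd) / vd;
    d = pow(d, 2.0);         // optional non-linear falloff
    // 0.8 = light strength, 0.2 = ambient; never brighter than face_color
    gl_FragColor = face_color * ((1.0 - d) * 0.8 + 0.2);
}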
I'm trying to mix raymarching and usual geometry rendering, but I don't understand how to correctly compare the ray distance with the depth value I stored in my buffer.
On the raymarching side, I have rays starting from the eye (red dot). If you only render fragments at a fixed ray distance T, you will see a curved line (in yellow on my drawing). I understand this because if you start from the ray origins Oa and Ob, and follow the directions Da and Db for T units (Oa + T * Da, and Ob + T * Db), you see that only the ray at the middle of the screen reaches the blue plane.
Now on the geometry side, I stored values directly from gl_FragCoord.z. But I don't understand why we don't see this curved effect there. I edited this picture in gimp, playing with 'posterize' feature to make it clear.
We see straight lines, not curved lines.
I am OK with converting this depth value to a distance (taking the linearization into account). But then, when comparing both distances, I get problems at the sides of my screen.
There is something I am missing about projection and how depth values are stored... I assumed the depth value was the (remapped) distance from the near plane along the direction of rays starting from the eye.
EDIT:
It seems writing this question helped me a bit. I now see why we don't see the curve effect in the depth buffer: the distance between the near and far planes is bigger for rays at the sides of my screen. Thus, even if the depth values are the same (middle or side of the screen), the distances are not.
So it seems my problem comes from the way I convert depth to distance. I used the following:
float z_n = 2.0 * mydepthvalue - 1.0;
float z_e = 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));
However, after the conversion I still don't get the curve effect that I need for comparing with my ray distance. Note that this code doesn't take gl_FragCoord.x and .y into account, which seems wrong to me...
You're overcomplicating the matter. There is no "curve effect" because a plane isn't curved. The term z value literally describes what it is: the z coordinate in some Euclidean coordinate space. And in such a space, z = C forms a plane parallel to the one spanned by the x and y axes of that space, at distance C.
So if you want the distance to some point, you need to take the x and y coordinates into account as well. In eye space, the camera is typically at the origin, so the distance to the camera boils down to length(vec3(x_e, y_e, z_e)) (which is of course a non-linear operation and would create the "curve" you seem to expect).
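A sketch of that reconstruction, assuming uniforms for the inverse projection matrix and the viewport size (both names are placeholders):
uniform mat4 invProjection;  // hypothetical: inverse of the projection matrix
uniform vec2 viewportSize;   // hypothetical: viewport width/height in pixels

float distanceToCamera(float depth)  // depth as read from the depth buffer
{
    // rebuild the NDC position from the fragment coordinates and the depth
    vec3 ndc = vec3(gl_FragCoord.xy / viewportSize, depth) * 2.0 - 1.0;
    // unproject into eye space; the divide by w is essential
    vec4 eye = invProjection * vec4(ndc, 1.0);
    eye /= eye.w;
    // the camera sits at the eye-space origin
    return length(eye.xyz);
}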
I am drawing a stack of decals on a quad: same geometry, different textures. Z-fighting is the obvious result. I cannot control the rendering order or use glPolygonOffset due to batched rendering, so I adjust the depth values inside the vertex shader:
gl_Position = uMVPMatrix * pos;
gl_Position.z += aDepthLayer * uMinStep * gl_Position.w;
gl_Position holds clip coordinates. That means a change in z will move a vertex along its view ray and bring it to the front or push it to the back. For normalized device coordinates, the clip coords get divided by gl_Position.w (which equals -z_eye). As a result, the depth buffer does not have a linear distribution and has higher resolution towards the near plane. By pre-multiplying by gl_Position.w, that division should be cancelled out, and I should be able to apply a flat offset (uMinStep) in NDC.
That minimum step should be something like 1/(2^GL_DEPTH_BITS - 1). Or, since NDC space goes from -1.0 to 1.0, it might have to be twice that amount. However, it does not work with these values. The min step is roughly 0.00000006, but it does not bring a texture to the front, nor does doubling that value. If I drop a zero (scale by 10), it works. (Yay, that's something!)
But it does not work evenly along the frustum. A value that brings a texture in front of another while the quad is close to the near plane does not necessarily do the same when the quad is close to the far plane. The same effect happens when I make the frustum deeper. I would expect that behaviour if I were changing eye coordinates, because of the non-linear z-buffer distribution. But it seems that pre-multiplying by gl_Position.w is not enough to counter that.
Am I missing some part of the transformations that happen to clip coords? Do I need to use a different formula in general? Do I have to include the depth range [0,1] somehow?
Could the different behaviour along the frustum be a result of non-linear floating-point precision instead of the non-linear z-buffer distribution? So maybe the calculation is correct, but the minStep just cannot be represented accurately by floats at some point in the pipeline?
The general question: how do I calculate a z shift for gl_Position (clip coordinates) that will create a fixed change in the depth buffer later? How can I make sure that the z shift will bring one texture in front of another no matter where in the frustum the quad is placed?
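For reference, a sketch of the whole adjustment as I currently understand it; uDepthBits is a hypothetical uniform holding the depth-bit count, and the factor of 2 for the [-1,1] NDC range is exactly the part I am unsure about:
uniform mat4 uMVPMatrix;
uniform float uDepthBits;    // hypothetical: e.g. 24.0
attribute vec4 pos;
attribute float aDepthLayer;

void main()
{
    gl_Position = uMVPMatrix * pos;
    // one depth-buffer step is 1/(2^bits - 1) in window space [0,1];
    // NDC z spans [-1,1], so the corresponding NDC step is twice that
    float minStep = 2.0 / (pow(2.0, uDepthBits) - 1.0);
    // pre-multiply by w so the shift survives the perspective divide
    gl_Position.z += aDepthLayer * minStep * gl_Position.w;
}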
Some material:
OpenGL depth buffer FAQ:
https://www.opengl.org/archives/resources/faq/technical/depthbuffer.htm
The same with more readable formulas (but some typos, be careful):
https://www.opengl.org/wiki/Depth_Buffer_Precision
The calculation from eye coordinates to z-buffer values. Most of that already happens when I multiply by the projection matrix:
http://www.sjbaker.org/steve/omniv/love_your_z_buffer.html
An explanation of the elements in the projection matrix that turn into the A and B terms in most depth-buffer formulas:
http://www.songho.ca/opengl/gl_projectionmatrix.html
I'm having a bit of trouble with a depth value that I'm storing in a float texture (or rather, I don't understand the values). Essentially I am creating a deferred renderer, and in one of the passes I store the depth in the alpha component of a floating-point render target. The code for that shader looks something like this.
Define the clip position as a varying
varying vec4 clipPos;
...
In the vertex shader assign the position
clipPos = gl_Position;
Now in the fragment shader I store the depth:
gl_FragColor.w = clipPos.z / clipPos.w;
This by and large works. When I access this render target in any subsequent shader, I can get the depth, i.e. something like this:
float depth = depthMap.w;
Am I right to assume that 0.0 is right in front of the camera and 1.0 is in the distance? I am doing some fog calculations based on this, but they don't seem to be correct:
fogFactor = smoothstep( fogNear, fogFar, depth );
fogNear and fogFar are uniforms I send to the shader. When fogNear is set to 0, I would have thought I'd get a smooth transition of fog from right in front of the camera to the draw distance. However, this is what I see:
When I set fogNear to 0.995, I get something more like what I'm expecting:
Is that correct? It just doesn't seem right to me. (The scale of the geometry is neither really small nor too large, and neither are the camera near and far planes; all the values are pretty reasonable.)
There are two issues with your approach:
You assume the depth is in the range [0,1], but what you use is clipPos.z / clipPos.w, which is the NDC z coordinate in the range [-1,1]. You might be better off directly writing the window-space z coordinate to your depth texture, which is in [0,1] and is simply gl_FragCoord.z.
The more serious issue is that you assume a linear depth mapping. However, that is not the case. The NDC and window-space z values are not a linear representation of the distance to the camera plane. It is not surprising that everything you see in the screenshot is very close to 1. Typically, fog calculations are done in eye space. However, since you only need the z coordinate here, you could simply store the clip-space w coordinate - typically, that is just -z_eye (look at the last row of your projection matrix). The resulting value will not be in any normalized range, but in [near, far] as used in your projection matrix - but specifying fog distances in eye-space units (which are normally identical to world-space units) is more intuitive anyway.
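To illustrate, a minimal sketch of that suggestion; clipPos follows the question, everything else is an assumption:
// geometry pass: store the eye-space distance instead of NDC z;
// for a typical perspective projection, clip-space w equals -z_eye
varying vec4 clipPos;        // assigned gl_Position in the vertex shader

void main()
{
    gl_FragColor.w = clipPos.w;  // distance to the camera plane, in [near, far]
}
In the fog pass, fogNear and fogFar are then given directly in eye-space units, and fogFactor = smoothstep(fogNear, fogFar, depth); behaves as expected.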
I want to add fog to a scene, but instead of adding fog to the fragment color based on its distance to the camera, I want to follow a more realistic approach: I want to calculate the distance that the vector from eye to fragment travels through a layer of fog.
By fog layer I mean that the fog has a lower limit and a higher limit (z-coordinates; z is up in this case). I want to calculate the vector from the eye to the fragment and get the part of it which is inside the fog. This part is marked red in the graphic.
The calculation is actually quite simple. However, the easy approach would require some tests (if/then):
calculate line from vector and camera position;
get line intersection with lower limit;
get line intersection with higher limit;
do some logic stuff to figure out how to handle the intersections;
calculate deltaZ, based on intersections;
scale the vector (vector *= deltaZ / vector.z);
fogFactor = length(vector);
This should be quite easy. However, the trouble is that I would have to add some logic to figure out how the camera and the fragment are located in relation to the fog. I also have to be sure that the vector actually intersects the limits (it would cause trouble when the vector's z value is 0).
The problem is that branches are not the best friends of shaders, at least that is what the internet has told me. ;)
My first question: is there a better way of solving this problem? (I actually want to stay with my model of fog, since this is about problem solving.)
The second question: I think the calculation should be done in the fragment shader, not the vertex shader, since this is not something that can be interpolated. Am I right about this?
Here is a second graphic of the scenario.
Problem solved. :)
Instead of defining the fog with a lower limit and a higher limit, I define it with a center height and a radius. So the lower limit equals the center minus the radius, and the higher limit is the center plus the radius.
With this, I came up with the following calculation (sorry for the bad variable names):
// position_worldspace is the fragment position in world space
// delta1 and delta2 are the z-axis differences from the fragment / eye to
// the center height
float delta1 = clamp(position_worldspace.z - fog_centerZ,
                     -fog_height, fog_height);
float delta2 = clamp(fog_centerZ - cameraPosition_worldspace.z,
                     -fog_height, fog_height);
float fogFactorZ = delta1 + delta2;
vec3 viewVector = position_worldspace - cameraPosition_worldspace;
float fogFactor = length(viewVector * (fogFactorZ / viewVector.z));
I guess this is not the fastest way of calculating this, but it does the trick.
HOWEVER!
The effect isn't really beautiful, because the higher and lower limits of the fog are razor sharp. I forgot about this since it doesn't look bad while the eye isn't near those borders. But I think there is an easy solution to this problem (see the sketch below). :)
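One possible approach (an untested sketch): replace the hard clamp with a smooth saturation such as tanh, which amounts to letting the fog density fade out toward the limits instead of cutting off abruptly. Written with exp() so it also works in older GLSL versions:
// smooth replacement for clamp(x, -r, r): saturates gradually near the limits
float smoothClamp(float x, float r)
{
    float t = exp(2.0 * clamp(x / r, -20.0, 20.0));  // clamp avoids exp overflow
    return r * (t - 1.0) / (t + 1.0);                // equals r * tanh(x / r)
}

float delta1 = smoothClamp(position_worldspace.z - fog_centerZ, fog_height);
float delta2 = smoothClamp(fog_centerZ - cameraPosition_worldspace.z, fog_height);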
Thanks for the help!