I've managed to implement a logarithmic depth buffer in OpenGL, mainly courtesy of articles from Outerra (You can read them here, here, and here). However, I'm having some issues, and I'm not sure if these issues are inherent to using a logarithmic depth buffer or if there's some workaround I can't think of.
Just to start off, this is how I calculate logarithmic depth within the vertex shader:
gl_Position = MVP * vec4(inPosition, 1.0);
gl_Position.z = log2(max(ZNEAR, 1.0 + gl_Position.w)) * FCOEF - 1.0;
flogz = 1.0 + gl_Position.w;
And this is how I fix depth values in the fragment shader:
gl_FragDepth = log2(flogz) * HALF_FCOEF;
Where ZNEAR = 0.0001, ZFAR = 1000000.0, FCOEF = 2.0 / log2(ZFAR + 1.0), and HALF_FCOEF = 0.5 * FCOEF. C (the constant from the Outerra articles) is 1.0 in my case, to simplify the code and reduce calculations.
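For reference, here are those constants as GLSL declarations (a direct transcription of the values above):

const float ZNEAR = 0.0001;
const float ZFAR = 1000000.0;
const float FCOEF = 2.0 / log2(ZFAR + 1.0);
const float HALF_FCOEF = 0.5 * FCOEF;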
For starters, I'm extremely pleased with the level of precision I get. With normal depth buffering (znear = 0.1, zfar = 1000.0), I get quite a bit of z-fighting towards the edge of the view distance. Right now, with my much larger znear:zfar range, I've put a second ground plane 0.01 units below the first, and I cannot find any z-fighting no matter how far I zoom the camera out (I get a little z-fighting when it's only 0.0001 (0.1 mm) away, but meh).
I do have some issues/concerns, however.
1) I get more near-plane clipping than I did with my normal depth buffer, and it looks ugly. It happens in cases where, logically, it really shouldn't. Here are a couple of screenshots of what I mean:
Clipping the ground.
Clipping a mesh.
Both of these cases are things that I did not experience with my normal depth buffer, and I'd rather not see (especially the former). EDIT: Problem 1 is officially solved by using glEnable(GL_DEPTH_CLAMP).
2) In order to get this to work, I need to write to gl_FragDepth. I tried not doing so, but the results were unacceptable. Writing to gl_FragDepth means that my graphics card can't do early-z optimizations, which will inevitably drive me up the wall, so I want to fix it as soon as I can.
3) I need to be able to retrieve the value stored in the depth buffer (I already have a framebuffer and texture for this), and then convert it to a linear view-space co-ordinate. I don't really know where to start with this; the way I did it before involved the inverse projection matrix, but I can't really do that here. Any advice?
Near-plane clipping happens independently of depth testing; the geometry is clipped against the clip-space volume. In modern OpenGL one can use depth clamping to make things look nice again. See http://opengl.datenwolf.net/gltut/html/Positioning/Tut05%20Depth%20Clamping.html#d0e5707
1) In the equation you used:
gl_Position.z = log2(max(ZNEAR, 1.0 + gl_Position.w)) * FCOEF - 1.0;
ZNEAR should not appear there, because it is unrelated to it. The constant there only exists to keep the log2 argument from reaching zero; e.g. you can use 1e-6 instead.
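For illustration, the corrected vertex-shader lines might look like this (a sketch; 1e-6 is just an arbitrarily small epsilon, not a tuned value):

gl_Position.z = log2(max(1e-6, 1.0 + gl_Position.w)) * FCOEF - 1.0; // epsilon only guards the log2 argument
flogz = 1.0 + gl_Position.w;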
But otherwise the depth clamping will solve the issue.
2) You can avoid using gl_FragDepth only with adaptive tessellation that keeps the interpolation error in bounds. For example, in Outerra the terrain is adaptively tessellated, so a visible error never appears on the terrain. But the fragment depth write is needed on objects when zooming in close, as long screen-space triangles will show a large discrepancy between the linearly interpolated value and the correct logarithmic value.
Note that the latest AMD drivers now support the NV_depth_buffer_float extension, so it's now possible to use the reversed floating-point depth buffer setup instead. As far as I know, it's not yet supported on Intel GPUs, though.
3) The conversion to the view space depth is described here: https://stackoverflow.com/a/18187212/2435594
Maybe a little late for answering.
In any case, to reconstruct Z using the log2 version:
realDepth = pow(2.0, (LogDepthValue + 1.0) / FCOEF) - 1.0;
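A minimal GLSL sketch of that reconstruction, assuming LogDepthValue is the z value in NDC; if you sample the depth texture instead, you get a value in [0, 1] and have to remap it first (depthTex and uv are placeholder names):

float d = texture(depthTex, uv).r;                      // depth buffer value in [0, 1]
float zNDC = d * 2.0 - 1.0;                             // remap to NDC [-1, 1]
float viewDepth = pow(2.0, (zNDC + 1.0) / FCOEF) - 1.0; // view-space depth, i.e. the original gl_Position.w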
I created a fragment shader that tiles a texture (it is actually a texture array, as you will see in the code). However, I get very ugly 1 pixel borders around each of the tiles. I know this can be caused by discontinuity of the function, causing weird stuff to happen while mipmapping, but I have mipmapping disabled (although I am using antialiasing). Here is my code:
vec2 tiledCoords = fract(pass_textureCoords * 256.0) / 4.0; // pass_textureCoords are the texture coordinates
out_Color = texture(testArray, vec3(tiledCoords.x, tiledCoords.y, 0)); // testArray is the sampler2DArray
I thought I'd have to give a bit of tolerance for rounding errors, but I can't seem to figure out where to put that. How can I get rid of these ugly borders? Thanks in advance!
Here is a screenshot:
I'm not exactly sure what you are trying to do with the calculations in your code. The first line doesn't seem to contribute to the result at all (it is overwritten in the second line). Since you didn't set filter or wrap modes, you'll have the default settings, which are linear filtering and repeat wrapping.
It seems that you are using tiledCoords in the range [0, 0.25] and trying to wrap from 0.25 back to 0. Linear filtering is not going to handle this automatically, since there is no wrap mode for the 0.25 -> 0 transition. The black lines are caused by the linear interpolation in the range [0, 1/(2 * num_pixels)], because in this area the hardware interpolates between the 0th texel and the "-1st" texel (which, due to the repeat wrap mode, is the last texel of your texture). The same thing happens in the range [0.25 - 1/(2 * num_pixels), 0.25], where the texel directly left of texture coordinate 0.25 is interpolated with the texel right of it.
There is no way to get hardware linear interpolation to wrap within a sub-region of a texture like this. You can either use nearest-neighbour sampling or compute the linear interpolation manually.
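A sketch of the manual route, under these assumptions: each array layer is a 4x4 atlas of tiles, each tile is tileTexels texels square, the sampler uses linear filtering, and all names (sampleTileLinear, tileOrigin, ...) are illustrative rather than taken from your code:

// Manually bilinear-filter inside one tile, wrapping the sample positions at
// the tile boundary instead of at the whole texture's boundary.
vec4 sampleTileLinear(sampler2DArray tex, vec2 tileUV, vec2 tileOrigin,
                      float layer, float tileTexels)
{
    vec2 atlasScale = vec2(0.25);               // one tile spans a quarter of the layer
    vec2 texelPos = tileUV * tileTexels - 0.5;  // position in the tile's texel space
    vec2 i = floor(texelPos);
    vec2 f = texelPos - i;                      // bilinear weights
    // Wrap each of the four texel indices within the tile.
    vec2 uv00 = tileOrigin + (mod(i,                  tileTexels) + 0.5) / tileTexels * atlasScale;
    vec2 uv10 = tileOrigin + (mod(i + vec2(1.0, 0.0), tileTexels) + 0.5) / tileTexels * atlasScale;
    vec2 uv01 = tileOrigin + (mod(i + vec2(0.0, 1.0), tileTexels) + 0.5) / tileTexels * atlasScale;
    vec2 uv11 = tileOrigin + (mod(i + vec2(1.0, 1.0), tileTexels) + 0.5) / tileTexels * atlasScale;
    // Sampling exactly at texel centres makes each fetch return a single texel;
    // textureLod with lod 0 avoids any mip selection at the seam.
    vec4 c00 = textureLod(tex, vec3(uv00, layer), 0.0);
    vec4 c10 = textureLod(tex, vec3(uv10, layer), 0.0);
    vec4 c01 = textureLod(tex, vec3(uv01, layer), 0.0);
    vec4 c11 = textureLod(tex, vec3(uv11, layer), 0.0);
    return mix(mix(c00, c10, f.x), mix(c01, c11, f.x), f.y);
}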
After implementing shadows for a spotlight, it appears that the bias computation makes the shadow disappear when my spotlight is too far from the objects.
I have been trying to solve this problem for two days, and I use RenderDoc to debug my renderer, so I know all the data are correct inside the shader.
My Case:
I use a 32-bit depth buffer
I have two cubes, one behind the other (the back one bigger, so the shadow of its neighbour is visible), and a light looking toward the cubes; they are aligned along the z-axis.
I used the following formula, found in a tutorial, to calculate the bias:
float bias = max(max_bias * (1.0 - dot(normal, lightDir)), min_bias);
And I perform the following comparison:
return (fragment_depth - shadow_texture_depth - bias > 0.0) ? 0.0 : 1.0;
However, the farther my spotlight is from the objects, the closer the depth value of the nearest cube gets to that of the farthest cube (a difference of about 10^-3, and it decreases with distance from the light).
Everything else is working; the perspective does its job.
But the bias calculation doesn't take the distance from the fragment to the light into account. So if my objects and my light are aligned, normal and lightDir don't change, and therefore the bias doesn't change either: there is no more shadow on my farthest cube, because the bias no longer suits the depth differences.
I have searched many websites and books (all the Game Programming Gems), but I didn't find a useful formula.
Here I show you two cases:
Here are two pairs of screenshots: the colour result from the camera's point of view, and the shadow map from the light's point of view.
light position (0, 0, 0), everything works
light position (0, 0, 1.5), doesn't work
Does anybody have a formula or an idea to help me?
Did I misunderstand something?
Thanks for reading.
You are biasing the difference in post-projective space.
Post-projective space is non-linear: depth buffer values are not proportional to world-space distance. So you cannot simply offset that difference with a bias expressed in world units.
If you want to make it work, you have to reconstruct your sampling position with a normal offset,
or transform your depths into linear view space using the inverse projection.
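A sketch of the second option, assuming a standard perspective projection for the light and the default [0, 1] depth range (lightNear and lightFar are placeholder names for the light frustum's planes); this closed form is equivalent to running the z component through the inverse projection:

// Convert a [0, 1] depth buffer value back to a linear view-space distance,
// so that a bias in world units is meaningful.
float linearizeDepth(float d, float lightNear, float lightFar)
{
    float zNDC = d * 2.0 - 1.0; // [0, 1] -> NDC [-1, 1]
    return (2.0 * lightNear * lightFar)
         / (lightFar + lightNear - zNDC * (lightFar - lightNear));
}

// The comparison then happens entirely in world units:
float shadow = (linearizeDepth(fragment_depth, lightNear, lightFar)
              - linearizeDepth(shadow_texture_depth, lightNear, lightFar)
              - bias > 0.0) ? 0.0 : 1.0;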
Hope it helps!
I want to add fog to a scene. But instead of adding fog to the fragment color based on its distance to the camera, I want to follow a more realistic approach. I want to calculate the distance, the vector from eye to fragment "travels" through a layer of fog.
By fog layer I mean that the fog has a lower limit (a z-coordinate; z is up in this case) and an upper limit. I want to calculate the vector from the eye to the fragment and get the part of it which is inside the fog. This part is marked red in the graphic.
The calculation itself is actually quite simple, but the straightforward approach requires a number of tests (if/then):
calculate line from vector and camera position;
get line intersection with lower limit;
get line intersection with higher limit;
do some logic stuff to figure out how to handle the intersections;
calculate deltaZ, based on intersections;
scale the vector (vector *= deltaZ / vector.z);
fogFactor = length(vector);
This should be quite easy. However, the trouble is that I would have to add some logic to figure out how the camera and the fragment are located relative to the fog. I also have to make sure that the vector actually intersects the limits (it would cause trouble if the vector's z-value were 0).
The problem is that branching is not the best friend of shaders, at least that is what the internet has told me. ;)
My first question: is there a better way of solving this problem? (I actually want to stay with my model of fog, since this is about problem solving.)
The second question: I think the calculation should be done in the fragment shader rather than the vertex shader, since this is not something that can be interpolated. Am I right about this?
Here is a second graphic of the scenario.
Problem solved. :)
Instead of defining the fog with a lower limit and an upper limit, I define it with a center height and a radius. So the lower limit equals the center minus the radius, and the upper limit is the center plus the radius.
With this, I came up with this calculation: (sorry for the bad variable names)
// position_worldspace is the fragment position in world space.
// delta1 and delta2 are the z-axis distances from the fragment / eye to the
// fog's center height, clamped to the fog's radius (fog_height).
float delta1 = clamp(position_worldspace.z - fog_centerZ, -fog_height, fog_height);
float delta2 = clamp(fog_centerZ - cameraPosition_worldspace.z, -fog_height, fog_height);
float fogFactorZ = delta1 + delta2;
vec3 viewVector = position_worldspace - cameraPosition_worldspace;
float fogFactor = length(viewVector * (fogFactorZ / viewVector.z));
I guess this is not the fastest way of calculating this but it does the trick.
HOWEVER!
The effect isn't really beautiful, though, because the upper and lower limits of the fog are razor sharp. I forgot about this since it doesn't look bad as long as the eye isn't near those borders. But I think there is an easy solution to this problem. :)
Thanks for the help!
I've got an issue with changing the specular power component in my OpenGL 4.3 shader. The specular works fine when I use a shininess value between 0 and 10; however, as the value is increased to make the material shinier, the lighting decreases in intensity. Here is my code:
//Direct Lighting
vec3 L = normalize(worldPos.xyz - lightPos.xyz);
vec3 D = normalize(lightDir.xyz);
float dist = length(-L); // note: L is already normalized, so this is always 1.0
float I = max(dot(N.xyz, -L), 0.0);
vec3 h = normalize(-L.xyz + E);
float specLighting = 0.0;
specLighting = pow(max(dot(N.xyz, h), 0.0),50.0);
fColor.xyz = vec3(specLighting);
So if increase the shininess value to something like 50, there is no lighting at all. By the way, in this code, I am only displaying specular to debug the problem.
[EDIT]
Sorry for the lack of detail in the explanation, I have attached some screenshots of the results of changing the specular shininess value from 1 to 7. As you can see, as the specular highlight reduces in size (which is what I want), the lighting also fades (which is not what I want). It gets to the point where after about 10, it becomes completely black.
By the way, I am doing this entirely in the pixel/fragment shader.
I have also added a screenshot from my DirectX 11 test engine using the exact same code for specular lighting, but with a shininess factor of 100.
DirectX 11:
If you want to maintain a minimum level of illumination, you should add a non-specular component. The specular component is typically used to add highlights to a material, not as the sole contributor to the lighting.
Anyway, the darkening you report is a natural result of increasing the exponent. Think about it: because the vectors are pre-normalized, dot(N.xyz, h) is no more than 1.0. Raising a number between 0 and 1 to a high power naturally makes the result very small very quickly (for example, 0.9^50 ≈ 0.005)... which is exactly what you want for a sharp specular highlight!
Of course, reducing the size of the highlight will reduce the average specular reflection (unless you made the maximum specular value much brighter, of course...). But, very few actual objects have only specular reflection. And, if an object did have only specular reflection, it would look very dark except for the specular spots.
Another way to look at it: your formula gives a specular brightness whose maximum value is 1.0 (which is practically convenient for conventional low-dynamic-range graphics, where each color channel maxes out at 1.0). In the real world, however, a shinier surface typically makes the specular highlights brighter as well as smaller, so that the average specular brightness stays about the same. It is the contrast between these two cases that makes the situation confusing. For practical purposes the formula is "working as designed -- will not fix"; typically, the artist simply adjusts the specular exponent and brightness until he gets the appearance he wants.
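If you do want the "brighter as well as smaller" behaviour, one common approach (my suggestion, not something from this thread) is normalized Blinn-Phong, using the approximate (n + 8) / (8 * pi) energy-conservation factor:

// Normalized Blinn-Phong: the factor grows with the exponent, compensating
// for the shrinking highlight so total reflected energy stays roughly constant.
float n = 50.0;                                    // shininess exponent
float normFactor = (n + 8.0) / (8.0 * 3.14159265);
float specLighting = normFactor * pow(max(dot(N.xyz, h), 0.0), n);

Note that the result can now exceed 1.0, so this only makes sense with some form of HDR rendering or a deliberate clamp.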
Thanks for all your help guys,
It turned out that it was just a silly mistake in the vertex shader:
instead of:
fNorm = vec4(mat3(worldSpace)*vNorm.xyz,0);
I had just:
fNorm = vNorm;
I originally wrote this answer before the question was updated with enough information. So it's obviously not the right answer in this case, but may apply to someone with a similar problem...
One possible explanation as to why specular lighting would decrease in intensity with increasing power, is if you are calculating it at a vertex level, rather than per-pixel. If the highlight point happens to fall in the middle of a polygon, rather than right at a vertex, then as it decreases in size with increasing shininess, the vertex contributions will drop off rapidly, and the highlight will disappear. Really for good specular (and really for good lighting generally), you need to calculate per-pixel, and only interpolate things that actually vary smoothly across the polygon, such as position or normal.
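For anyone hitting the per-vertex variant of this problem, a minimal sketch of moving the highlight into the fragment shader (the variable names mirror the snippet earlier in the thread and are otherwise illustrative):

// Fragment shader: re-normalize the interpolated normal, then evaluate the
// highlight per pixel instead of per vertex.
vec3 N = normalize(fNorm.xyz); // fNorm is interpolated from the vertex shader
vec3 h = normalize(-L + E);    // L: light direction, E: normalized eye vector
float spec = pow(max(dot(N, h), 0.0), 50.0);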
If you increase the "shininess", your material will be less shiny.
I'm repeating a texture in the vertex shader (for storage, not for repeating on the spot). Is this the right way? I seem to lose precision somewhere.
varying vec2 texcoordC;
texcoordC = gl_MultiTexCoord0.xy;
texcoordC *= 10.0;
texcoordC.x = mod(texcoordC.x, 1.0);
texcoordC.y = mod(texcoordC.y, 1.0);
ADDED: I then save (store) the texcoord in the color, print it to the texture, and later use that texture again. When I retrieve the color from the texture, I find the texcoords and use them to apply a texture in a post-process. There's a reason I want it this way that I won't go into. I get that the texcoords will be limited by the color's precision; that is alright, as my texture is 256 in width and height.
I know that normally I would set the texcoords higher than 1.0 with glTexCoord2f to repeat (using GL_REPEAT), but I am using a model loader which I am too lazy to edit, as I think that is not necessary / not the easiest way.
There are (at least) two ways in which this could go wrong:
Firstly: yes, you will lose precision. You are essentially taking the fractional part of a floating-point number after scaling it up, which throws part of the number away.
Secondly, this probably won't work anyway, not for most typical uses. You are trying to tile a texture per-vertex, but the texture is interpolated across a polygon. So this technique could tile the texture differently on different vertices of the same polygon, resulting in a bit of a mess.
i.e.
If vertex1 has a U of 1.5 (after scaling), and vertex2 has a U of 2.2, then you expect the interpolation to give increasing values between those points, with the half-way point having a U of 1.85.
If you take the modulo at each vertex, you will have a U of 0.5, and a U of 0.2 respectively, resulting in a decreasing U, and a half-way point with a U of 0.35...
Textures can be tiled just by enabling tiling on the texture/sampler and using coordinates outside the range 0->1. If you really want to increase sampling accuracy and have a large amount of tiling, you need to wrap the UV coordinates uniformly across whole polygons, rather than per-vertex; i.e. do it in your data, not in the vertex shader.
For your case, where you're trying to output the UV coordinates into a buffer for some later purpose, you could clamp/wrap the UVs in the pixel shader. So multiply up the UV in the vertex shader, interpolate it across the polygon correctly, and then apply the modulo only when writing to the buffer.
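A minimal sketch of that split, in the same legacy GLSL style as your snippet (writing the wrapped coordinate to the color output is just illustrative):

// Vertex shader: scale only; let the hardware interpolate the raw value.
varying vec2 texcoordC;
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    texcoordC = gl_MultiTexCoord0.xy * 10.0;
}

// Fragment shader: apply the modulo only when writing to the buffer.
varying vec2 texcoordC;
void main() {
    gl_FragColor = vec4(fract(texcoordC), 0.0, 1.0);
}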
However I still think you'll have precision issues as you're losing all the sub-pixel information. Whether or not that's a problem for the technique you're using, I don't know.