Convert clip pos to NDC position in GLSL?

I'm new to graphics programming, so sorry if I have used the wrong terms.
By "clip pos" I mean:
gl_FragColor.w = gl_Position.z / gl_Position.w
I do this in my depth pass. How can I get the value of gl_FragCoord.z / gl_FragCoord.w in my post-processing pass from the result of gl_Position.z / gl_Position.w?

That is kind of a silly thing to do, to be honest. gl_FragCoord.w is equal to 1.0 / gl_Position.w so that you can undo the projection. You need to keep that w value around somewhere unless it is known to be constant. In fact, if anything, what you really want to write is the w coordinate by itself.
It will not be constant if you are using a perspective projection; in that case gl_Position.w will generally be -eye.z after projection. That convention is mostly historical: in the old fixed-function pipeline, the forward part of the Z-axis pointed in the negative direction in eye space.
In programmable GL you are free to use any convention you want for coordinate spaces before clip-space. However, everything clip-space and beyond has a fixed axial orientation.
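To illustrate, here is a minimal sketch of that idea: store the w coordinate alongside the NDC z in the depth pass, then recombine them in the post-processing pass. The varying, sampler, and output layout are mine, not from the question:
// Depth pass, fragment shader: store NDC z and the clip-space w together.
varying vec4 clipPos; // set to gl_Position in the vertex shader

void main() {
    gl_FragColor = vec4(clipPos.z / clipPos.w, clipPos.w, 0.0, 1.0);
}

// Post-processing pass: rebuild gl_FragCoord.z / gl_FragCoord.w.
// With the default glDepthRange(0, 1), gl_FragCoord.z = ndcZ * 0.5 + 0.5,
// and gl_FragCoord.w = 1.0 / clipW, so the wanted quotient is windowZ * clipW.
uniform sampler2D depthMap;
varying vec2 vUv;

void main() {
    vec2 s = texture2D(depthMap, vUv).xy; // (ndcZ, clipW)
    float windowZ = s.x * 0.5 + 0.5;
    float zOverW = windowZ * s.y; // equals gl_FragCoord.z / gl_FragCoord.w
    gl_FragColor = vec4(vec3(zOverW), 1.0);
}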

Related

Should all vectors be transformed into perspective space when working with them in the fragment shader?

I'm implementing the Phong shading model in OpenGL. I need the normal, the viewer direction, and the light direction for each fragment. A lot of demos pass these vectors from the vertex shader in world coordinates. Maybe that's because there isn't much difference between the normalized world-coordinate vectors and the normalized perspective-coordinate vectors?
I'm thinking that for the "true" Phong solution, these vectors should be transformed into the perspective coordinate system in the vertex shader, and the .w divide should then be performed in the fragment shader, because they are not gl_Position. Is this thinking correct?
Edit:
This link seems to suggest that OpenGL's varying qualifier requires the original 'Z' coordinate of the fragment to perform correct perspective interpolation. See https://www.opengl.org/wiki/Type_Qualifier_(GLSL)#Interpolation_qualifiers
So the question I'm wondering about is: can OpenGL derive the Z value from the depth value?
Edit: Yes it can. Getting the true z value from the depth buffer
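For reference, that reconstruction looks roughly like this (a sketch assuming the usual perspective projection and the default depth range; uNear and uFar are hypothetical uniforms, not from the question):
uniform float uNear; // hypothetical uniforms holding the projection's near/far
uniform float uFar;

// Turn a depth-buffer sample d in [0,1] back into a positive eye-space distance.
float linearEyeDepth(float d) {
    float ndcZ = 2.0 * d - 1.0; // window-space z -> NDC z
    return 2.0 * uNear * uFar / (uFar + uNear - ndcZ * (uFar - uNear));
}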
First, you cannot forgo the division-by-W step. Why? Because it's hard-wired. It happens as part of OpenGL's fixed-functionality. The gl_Position your last vertex processing step generates will have its W component divided into the other three.
Now, you could try to trick your way around that, by sticking 1.0 in the gl_Position's W, and passing it as some unrelated output. But the W component is a crucial part of perspective-correct interpolation. By faking your transforms this way, you lose that.
And that's kinda important. So unless you intend to re-interpolate all of your per-vertex outputs in the FS and perform perspective-correct interpolation, this just isn't going to work.
Second, post-projective space, when using a perspective projection, is related to world space by a non-linear transformation. This means that parallel lines are no longer parallel. This also means that vector directions don't point at what they used to point at. So your light direction doesn't necessarily point at where your light is.
Oh, and distances are not linear either. So light attenuation no longer makes sense, since the attenuation factors were designed in a space linearly equivalent to world space. And post-projection space is not.
Here's an image to give you an idea of what I'm talking about:
What you see on the left is a rendering in world space. What you see on the right is the same scene as on the left, only viewed in post-projection space.
That is not a reasonable space to do lighting in.
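To make the usual alternative concrete, here is a minimal sketch of Phong-style lighting with all vectors kept in eye space (one common choice; world space works similarly). Every attribute, uniform, and matrix name here is illustrative, not from the question:
// Vertex shader: interpolate eye-space quantities, project only gl_Position.
attribute vec3 aPosition;
attribute vec3 aNormal;
uniform mat4 uModelView;
uniform mat4 uProjection;
uniform mat3 uNormalMatrix; // inverse-transpose of uModelView's upper 3x3
varying vec3 vNormal;
varying vec3 vEyePos;

void main() {
    vec4 eyePos = uModelView * vec4(aPosition, 1.0);
    vEyePos = eyePos.xyz;
    vNormal = uNormalMatrix * aNormal;
    gl_Position = uProjection * eyePos; // projection touches gl_Position only
}

// Fragment shader: all lighting vectors live in eye space.
uniform vec3 uLightPosEye; // light position pre-transformed to eye space
varying vec3 vNormal;
varying vec3 vEyePos;

void main() {
    vec3 N = normalize(vNormal);
    vec3 L = normalize(uLightPosEye - vEyePos);
    vec3 V = normalize(-vEyePos); // the camera sits at the origin in eye space
    float diff = max(dot(N, L), 0.0);
    float spec = pow(max(dot(reflect(-L, N), V), 0.0), 32.0);
    gl_FragColor = vec4(vec3(diff + spec), 1.0);
}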

GLSL, change gl_Position.z to create a flat change in depth buffer?

I am drawing a stack of decals on a quad. Same geometry, different textures. Z-fighting is the obvious result. I cannot control the rendering order or use glPolygonOffset due to batched rendering. So I adjust depth values inside the vertex shader.
gl_Position = uMVPMatrix * pos;
gl_Position.z += aDepthLayer * uMinStep * gl_Position.w;
gl_Position holds clip coordinates. That means a change in z will move a vertex along its view ray and bring it to the front or push it to the back. For normalized device coordinates, the clip coords get divided by gl_Position.w (= -z_eye for a typical perspective projection). As a result, the depth buffer does not have a linear distribution and has higher resolution towards the near plane. By premultiplying by gl_Position.w, that should be cancelled out, and I should be able to apply a flat amount (uMinStep) in NDC.
That minimum step should be something like 1/(2^GL_DEPTH_BITS - 1). Or, since NDC space goes from -1.0 to 1.0, it might have to be twice that amount. However, it does not work with these values. The minStep is roughly 0.00000006, but it does not bring a texture to the front, nor does doubling that value. If I drop a zero (scale by 10), it works. (Yay, that's something!)
But it does not work evenly along the frustum. A value that brings a texture in front of another while the quad is close to the near plane does not necessarily do the same when the quad is close to the far plane. The same effect happens when I make the frustum deeper. I would expect that behaviour if I were changing eye coordinates, because of the nonlinear z-buffer distribution. But it seems that premultiplying by gl_Position.w is not enough to counter that.
Am I missing some part of the transformations that happen to clip coords? Do I need to use a different formula in general? Do I have to include the depth range [0,1] somehow?
Could the different behaviour along the frustum be a result of nonlinear floating point precision instead of nonlinear z-Buffer distribution? So maybe the calculation is correct, but the minStep just cannot be handled correctly by floats at some point in the pipeline?
The general question: How do I calculate a z-Shift for gl_Position (clip coordinates) that will create a fixed change in the depth buffer later? How can I make sure that the z-Shift will bring one texture in front of another no matter where in the frustum the quad is placed?
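For what it's worth, here is the arithmetic from the paragraphs above written out as a sketch, under explicit assumptions: a 24-bit depth buffer and the default glDepthRange(0, 1), which is where the factor of two asked about above comes from. This restates the question's shader; it is not a verified fix for the uneven behaviour along the frustum:
// One depth-buffer step for a 24-bit buffer, expressed in NDC:
// window step = 1.0 / (2^24 - 1); the NDC step is twice that,
// because window z = ndc z * 0.5 + 0.5 under glDepthRange(0, 1).
const float NDC_STEP = 2.0 / 16777215.0;

uniform mat4 uMVPMatrix;
attribute vec4 pos;
attribute float aDepthLayer;

void main() {
    gl_Position = uMVPMatrix * pos;
    // Premultiply by w so the shift survives the perspective divide unchanged.
    gl_Position.z += aDepthLayer * NDC_STEP * gl_Position.w;
}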
Some material:
OpenGL depth buffer FAQ
https://www.opengl.org/archives/resources/faq/technical/depthbuffer.htm
The same, with more readable formulas (but beware of some typos)
https://www.opengl.org/wiki/Depth_Buffer_Precision
Calculation from eye coords to the z-buffer. Most of that already happens when I multiply by the projection matrix.
http://www.sjbaker.org/steve/omniv/love_your_z_buffer.html
Explanation of the elements in the projection matrix that turn into the A and B parts in most depth-buffer calculation formulas.
http://www.songho.ca/opengl/gl_projectionmatrix.html

GLSL Shader - change 'camera' position

I'm trying to create some kind of 'camera' object with OpenGL. By changing its values, you can zoom in/out and move the camera around. (Imagine a 2D world that you're looking down on.) This results in the variables center.x, center.y, and center.z.
attribute vec2 in_Position;
attribute vec4 in_Color;
attribute vec2 in_TexCoords;
uniform vec2 uf_Projection;
uniform vec3 center;
varying vec4 var_Color;
varying vec2 var_TexCoords;
void main() {
    var_Color = in_Color;
    var_TexCoords = in_TexCoords;
    gl_Position = vec4(in_Position.x / uf_Projection.x - center.x,
                       in_Position.y / -uf_Projection.y + center.y,
                       0, center.z);
}
I'm using uniform vec3 center to manipulate the camera location. (I feel it should be called an attribute, but I don't know for sure; I only know how to manipulate uniform values.)
uf_Projection holds half the screen width and height. This was already the case (I forked someone's code), and I can only assume it's there to make sure the values in gl_Position are normalized?
Entering values for, e.g., center.x does change the camera location correctly. However, it does not match the location at which certain things appear to be rendered.
In addition to the question "how bad is this code?", I'm actually asking these concrete questions:
What is in_Position supposed to be? I've seen several code examples use it, but no one explains it. It's not explicitly defined either; what values does it take?
What values is gl_Position supposed to take? uf_Projection seems to normalize the values, but when I add values (more than 2000) to center.x, it still works (the screen moves correctly).
Is this the correct way to create a kind of "camera" effect? Or is there a better way? (the idea is that things that aren't on the screen don't have to get rendered)
The questions you ask can only be answered if one considers the bigger picture. In this case, this means we should have a look at the vertex shader and the typical coordinate transformations which are used for rendering.
The purpose of the vertex shader is to calculate a clip space position for each vertex of the object(s) to be drawn.
In this context, an object is just a sequence of geometrical primitives like points, lines or triangles, each specified by some vertices.
These vertices typically specify some position with respect to some completely user-defined coordinate frame of reference. The space those vertex positions are defined in is typically called object space.
Now the vertex shader's job is to transform from object space to clip space in some mathematical or algorithmic way. Typically, these transformation rules also implicitly or explicitly include some "virtual camera", so that the object is transformed as if observed by said camera.
However, which rules are used, how they are described, and which inputs are needed is completely up to you.
What is in_Position supposed to be? I've seen several code examples use it, but no one explains it. It's not explicitly defined either; what values does it take?
So in_Position in your case is just some attribute (meaning it is a value which is specified per vertex). The "meaning" of this attribute depends solely on how it is used. Since you are using it as input for some coordinate transformation, we could interpret it as the object-space position of the vertex. In your case, that is a 2D object space. The values it "takes" are completely up to you.
What values is gl_Position supposed to take? uf_Projection seems to normalize the values, but when I add values (more than 2000) to center.x, it still works (the screen moves correctly).
gl_Position is the clip-space position of the vertex. Now clip space is a bit hard to describe. The "normalization" you see here has to do with the fact that there is another space, the normalized device coordinates (NDC). And in the GL, the convention for NDC is such that the viewing volume is represented by the -1 <= x,y,z <= 1 cube in NDC.
So if x_ndc is -1, the object will appear at the left border of your viewport, x=1 at the right border, y=-1 at the bottom border, and so on. You also have clipping in z, so objects which are too far away from, or too close to, the hypothetical camera position will not be visible either. (Note that the near clipping plane will also exclude everything which is behind the observer.)
The rule for transforming from clip space to NDC is to divide the clip-space x, y, and z values by the clip-space w value.
The rationale for this is that clip space is a so-called projective space, and the clip-space coordinates are homogeneous coordinates. It would be far too much to explain the theory behind this in a Stack Overflow answer.
But what this means is that by setting gl_Position.w to center.z, the GL will later effectively divide gl_Position.xyz by center.z to reach NDC coordinates. Such a division basically creates the perspective effect that points which are farther away appear closer together.
It is unclear to me if this is exactly what you want. Your current solution has the effect that increasing center.z will increase the object space range that is mapped to the viewing volume, so it does give a zoom effect. Let's consider the x coordinate:
x_ndc = (in_Position.x / uf_Projection.x - center.x) / center.z
= in_Position.x / (uf_Projection.x * center.z) - center.x / center.z
To put it the other way around, the object space x range you can see on the screen will be the inverse transformation applied to x_ndc=-1 and x_ndc=1:
x_obj = (x_ndc + center.x/center.z) * (uf_Projection.x * center.z)
= x_ndc * uf_Projection.x * center.z + center.x * uf_Projection.x
= uf_Projection.x * (x_ndc * center.z + center.x);
So basically, the visible object-space range will be uf_Projection.xy * (center.xy +- center.z) (with the sign flipped for y).
Is this the correct way to create a kind of "camera" effect? Or is there a better way? (the idea is that things that aren't on the screen don't have to get rendered)
Conceptually, the steps are right. Usually, one uses transformation matrices to define the necessary steps. But in your case, directly applying the transformations as some multiplications and additions is even more efficient (but less flexible).
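For comparison, a matrix-based version might look like the sketch below. uCamera is a hypothetical uniform built on the CPU: a scale by 1.0 / (uf_Projection * center.z) plus a translation by -center.xy / center.z, with the sign flipped for y, so it reproduces the same mapping as the original shader:
attribute vec2 in_Position;
attribute vec4 in_Color;
attribute vec2 in_TexCoords;
uniform mat4 uCamera; // encodes the scale and translation described above
varying vec4 var_Color;
varying vec2 var_TexCoords;

void main() {
    var_Color = in_Color;
    var_TexCoords = in_TexCoords;
    gl_Position = uCamera * vec4(in_Position, 0.0, 1.0);
}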
I'm using uniform vec3 center to manipulate the camera location. (I feel it should be called an attribute, but I don't know for sure.)
Actually, using a uniform for this is the right thing to do. Attributes are for values which can change per vertex. Uniforms are for values which are constant during the draw call (hence, are "uniform" for all shader invocations they are accessed by). Your camera specification should be the same for each vertex you are processing. Only the vertex position varies between vertices, so that each vertex ends up at a different point with respect to some fixed camera location (and parameters).

Getting depth from Float texture in post process

I'm having a bit of trouble getting a depth value that I'm storing in a float texture (or rather, I don't understand the values). Essentially I am creating a deferred renderer, and in one of the passes I am storing the depth in the alpha component of a floating-point render target. The code for that shader looks something like this:
Define the clip position as a varying:
varying vec4 clipPos;
...
In the vertex shader, assign the position:
clipPos = gl_Position;
Now in the fragment shader I store the depth:
gl_FragColor.w = clipPos.z / clipPos.w;
This by and large works. When I access this render target in any subsequent shader I can get the depth, i.e. something like this:
float depth = depthMap.w;
Am I right to assume that 0.0 is right in front of the camera and 1.0 is in the distance? I am doing some fog calculations based on this, but they don't seem to be correct.
fogFactor = smoothstep( fogNear, fogFar, depth );
fogNear and fogFar are uniforms I send to the shader. When fogNear is set to 0, I would have thought I'd get a smooth transition of fog from right in front of the camera out to the draw distance. However, this is what I see:
When I set fogNear to 0.995, I get something more like what I'm expecting:
Is that correct? It just doesn't seem right to me. (The scale of the geometry is not really too small or too large, and neither are the camera near and far values. Everything is pretty reasonable.)
There are two issues with your approach:
You assume the depth is in the range [0,1], but what you use is clipPos.z / clipPos.w, which is the NDC z coordinate, in the range [-1,1]. You might be better off directly writing the window-space z coordinate to your depth texture; it is in [0,1] and is simply gl_FragCoord.z.
The more serious issue is that you assume a linear depth mapping. However, that is not the case. The NDC and window-space z values are not a linear representation of the distance to the camera plane. So it is not surprising that everything you see in the screenshot is very close to 1. Typically, fog calculations are done in eye space. However, since you only need the z coordinate here, you could simply store the clip-space w coordinate instead, since typically that is just -z_eye (look at the last row of your projection matrix). The resulting value will not be in any normalized range, but in the [near, far] range that you use in your projection matrix. But specifying fog distances in eye-space units (which are normally identical to world-space units) is more intuitive anyway.
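A sketch of that suggestion, with illustrative names (uModelView, aPosition, depthMap, vUv are assumptions, not from the question): the depth pass stores the eye-space distance, and fogNear/fogFar then become plain eye-space distances:
// Depth pass, vertex shader: keep the eye-space distance around.
uniform mat4 uModelView;
uniform mat4 uProjection;
attribute vec3 aPosition;
varying float vEyeDepth;

void main() {
    vec4 eyePos = uModelView * vec4(aPosition, 1.0);
    vEyeDepth = -eyePos.z; // equals clip-space w for a standard projection
    gl_Position = uProjection * eyePos;
}

// Depth pass, fragment shader:
varying float vEyeDepth;

void main() {
    gl_FragColor.w = vEyeDepth;
}

// Fog pass: fogNear and fogFar are now expressed in eye-space units.
uniform sampler2D depthMap;
uniform float fogNear;
uniform float fogFar;
varying vec2 vUv;

void main() {
    float depth = texture2D(depthMap, vUv).w;
    float fogFactor = smoothstep(fogNear, fogFar, depth);
    gl_FragColor = vec4(vec3(fogFactor), 1.0);
}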

Logarithmic Depth Buffer OpenGL

I've managed to implement a logarithmic depth buffer in OpenGL, mainly courtesy of articles from Outerra (you can read them here, here, and here). However, I'm having some issues, and I'm not sure if these issues are inherent to using a logarithmic depth buffer or if there's some workaround I can't think of.
Just to start off, this is how I calculate logarithmic depth within the vertex shader:
gl_Position = MVP * vec4(inPosition, 1.0);
gl_Position.z = log2(max(ZNEAR, 1.0 + gl_Position.w)) * FCOEF - 1.0;
flogz = 1.0 + gl_Position.w;
And this is how I fix depth values in the fragment shader:
gl_FragDepth = log2(flogz) * HALF_FCOEF;
Where ZNEAR = 0.0001, ZFAR = 1000000.0, FCOEF = 2.0 / log2(ZFAR + 1.0), and HALF_FCOEF = 0.5 * FCOEF. C, in my case, is 1.0, to simplify my code and reduce calculations.
For starters, I'm extremely pleased with the level of precision I get. With normal depth buffering (znear = 0.1, zfar = 1000.0), I get quite a bit of z-fighting towards the edge of the view distance. Right now, with my much larger znear:zfar range, I've put a second ground plane 0.01 units below the first, and I cannot find any z-fighting, no matter how far I zoom the camera out (I get a little z-fighting when it's only 0.0001 (0.1 mm) away, but meh).
I do have some issues/concerns, however.
1) I get more near-plane clipping than I did with my normal depth buffer, and it looks ugly. It happens in cases where, logically, it really shouldn't. Here are a couple of screenshots of what I mean:
Clipping the ground.
Clipping a mesh.
Both of these cases are things that I did not experience with my normal depth buffer, and I'd rather not see (especially the former). EDIT: Problem 1 is officially solved by using glEnable(GL_DEPTH_CLAMP).
2) In order to get this to work, I need to write to gl_FragDepth. I tried not doing so, but the results were unacceptable. Writing to gl_FragDepth means that my graphics card can't do early z optimizations. This will inevitably drive me up the wall and so I want to fix it as soon as I can.
3) I need to be able to retrieve the value stored in the depth buffer (I already have a framebuffer and texture for this), and then convert it to a linear view-space coordinate. I don't really know where to start with this; the way I did it before involved the inverse projection matrix, but I can't really do that here. Any advice?
Near-plane clipping happens independently of depth testing; it is done by clipping against the clip-space volume. In modern OpenGL one can use depth clamping to make things look nice again. See http://opengl.datenwolf.net/gltut/html/Positioning/Tut05%20Depth%20Clamping.html#d0e5707
1) In the equation you used:
gl_Position.z = log2(max(ZNEAR, 1.0 + gl_Position.w)) * FCOEF - 1.0;
There should not be a ZNEAR there, because it is unrelated. The constant is just there to keep the log2 argument from reaching zero; e.g. you can use 1e-6 there.
But otherwise the depth clamping will solve the issue.
2) You can avoid writing gl_FragDepth only with adaptive tessellation that keeps the interpolation error within bounds. For example, in Outerra the terrain is adaptively tessellated, so a visible error never shows up on the terrain. But the fragment depth write is needed on objects when zooming in close, as long screen-space triangles will have quite a high discrepancy between the linearly interpolated value and the correct logarithmic value.
Note that the latest AMD drivers now support the NV_depth_buffer_float extension, so it's now possible to use the reversed floating-point depth buffer setup instead. It's not yet supported on Intel GPUs, though, as far as I know.
3) The conversion to the view space depth is described here: https://stackoverflow.com/a/18187212/2435594
Maybe a little late to answer.
In any case, to reconstruct Z using the log2 version (with LogDepthValue being the NDC depth in [-1,1]):
realDepth = pow(2.0, (LogDepthValue + 1.0) / FCOEF) - 1.0;
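Spelled out as a fragment-shader helper, assuming the depth texture holds the [0,1] value written via gl_FragDepth above (so no +1 remap is needed; the sampler and function names are illustrative):
uniform sampler2D depthTex;
uniform float HALF_FCOEF; // 0.5 * 2.0 / log2(ZFAR + 1.0), as in the question

// gl_FragDepth was written as log2(1.0 + w) * HALF_FCOEF, so invert that.
// w equals -z_eye for a standard projection, i.e. the linear view-space depth.
float linearViewDepth(vec2 uv) {
    float d = texture2D(depthTex, uv).r;
    return pow(2.0, d / HALF_FCOEF) - 1.0;
}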