I want to draw the depth buffer in the fragment shader, so I do this:
Vertex shader:
varying vec4 position_;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
position_ = gl_ModelViewProjectionMatrix * gl_Vertex;
Fragment shader:
float depth = ((position_.z / position_.w) + 1.0) * 0.5;
gl_FragColor = vec4(depth, depth, depth, 1.0);
But all I print is white, what am I doing wrong?
In what space do you want to draw the depth? If you want to draw the window-space depth, you can do this:
gl_FragColor = vec4(gl_FragCoord.z);
However, this will not be particularly useful, since most of the numbers will be very close to 1.0. Only extremely close objects will be visible. This is the nature of the distribution of depth values for a depth buffer using a standard perspective projection.
Or, to put it another way, that's why you're getting white.
If you want these values in a linear space, you will need to do something like the following:
float ndcDepth =
    (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) /
    (gl_DepthRange.far - gl_DepthRange.near);
float clipDepth = ndcDepth / gl_FragCoord.w;
gl_FragColor = vec4((clipDepth * 0.5) + 0.5);
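If you would rather visualize something easier to interpret, a common alternative is to convert gl_FragCoord.z back to a linear eye-space distance and normalize it by the far plane. This is only a sketch under the assumption of a standard perspective projection; the zNear/zFar uniforms are ones you would have to supply yourself:
uniform float zNear; // e.g. 0.1, must match the projection
uniform float zFar;  // e.g. 100.0, must match the projection
void main()
{
    float ndcZ = gl_FragCoord.z * 2.0 - 1.0; // window [0,1] -> NDC [-1,1]
    float eyeDist = (2.0 * zNear * zFar) / (zFar + zNear - ndcZ * (zFar - zNear)); // positive eye-space distance
    gl_FragColor = vec4(vec3(eyeDist / zFar), 1.0); // near = dark, far = white
}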
Indeed, the "depth" value of a fragment can be read from its z value in clip space (that is, after all matrix transformations). That much is correct.
However, your problem is in the division by w.
Division by w is called the perspective divide. Yes, it is necessary for perspective projection to work correctly.
However. Division by w in this case "bunches up" all your values (as you have seen) so that they end up very close to 1.0. There is a good reason for this: in a perspective projection, w = (some multiplier) * z. That is, you are dividing the z value (whatever it was computed out to be) by (some factor of) the original z. No wonder you always get values near 1.0; you're almost dividing z by itself.
As a very simple fix for this, try dividing z just by the farPlane, and send that to the fragment shader as depth.
Vertex shader
varying float DEPTH;
uniform float FARPLANE; // send this in as a uniform to the shader
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
DEPTH = gl_Position.z / FARPLANE; // do not divide by w
Fragment shader:
varying float DEPTH;
// far things appear white, near things black
gl_FragColor.rgb = vec3(DEPTH, DEPTH, DEPTH);
The result is a not-bad, very linear-looking fade.
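A variant of this (my own sketch, not part of the answer above) is to use the eye-space z instead of the clip-space z; eye-space z is exactly linear in distance from the camera, so the fade stays linear regardless of the projection:
varying float DEPTH;
uniform float FARPLANE; // same uniform as above
void main()
{
    vec4 eyePos = gl_ModelViewMatrix * gl_Vertex; // eye (view) space position
    DEPTH = -eyePos.z / FARPLANE;                 // eye-space z is negative in front of the camera
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}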
I'm trying to draw a simple sphere with normal mapping in the fragment shader with GL_POINTS. At present, I simply draw one point on the screen and apply a fragment shader to "spherify" it.
However, I'm having trouble colouring the sphere correctly (or at least I think I am). It seems that I'm calculating the z correctly but when I apply the 'normal' colours to gl_FragColor it just doesn't look quite right (or is this what one would expect from a normal map?). I'm assuming there is some inconsistency between gl_PointCoord and the fragment coord, but I can't quite figure it out.
Vertex shader
precision mediump float;
attribute vec3 position;
void main() {
gl_PointSize = 500.0;
gl_Position = vec4(position.xyz, 1.0);
}
Fragment shader
precision mediump float;
void main() {
float x = gl_PointCoord.x * 2.0 - 1.0;
float y = gl_PointCoord.y * 2.0 - 1.0;
float z = sqrt(1.0 - (pow(x, 2.0) + pow(y, 2.0)));
vec3 position = vec3(x, y, z);
float mag = dot(position.xy, position.xy);
if(mag > 1.0) discard;
vec3 normal = normalize(position);
gl_FragColor = vec4(normal, 1.0);
}
Actual output: (screenshot omitted)
Expected output: (screenshot omitted)
The color channels are clamped to the range [0, 1]. (0, 0, 0) is black and (1, 1, 1) is completely white.
Since the normal vector is normalized, its components are in the range [-1, 1].
To get the expected result you have to map the normal vector from the range [-1, 1] to [0, 1]:
vec3 normal_col = normalize(position) * 0.5 + 0.5;
gl_FragColor = vec4(normal_col, 1.0);
If you use the absolute value instead, then a positive and a negative component of the same magnitude get the same color. The intensity of the color increases with the magnitude of the value:
vec3 normal_col = abs(normalize(position));
gl_FragColor = vec4(normal_col, 1.0);
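Folding that remap into the original point-sprite shader gives something like the following sketch (the discard is moved before the sqrt so the square root never sees a negative argument):
precision mediump float;
void main() {
    // Map gl_PointCoord from [0, 1] to [-1, 1]
    float x = gl_PointCoord.x * 2.0 - 1.0;
    float y = gl_PointCoord.y * 2.0 - 1.0;
    float mag = x * x + y * y;
    if (mag > 1.0) discard;               // outside the unit circle: not part of the sphere
    float z = sqrt(1.0 - mag);            // reconstruct the sphere's z
    vec3 normal = normalize(vec3(x, y, z));
    // Remap the normal from [-1, 1] to [0, 1] so negative components are visible
    gl_FragColor = vec4(normal * 0.5 + 0.5, 1.0);
}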
First of all, the normal facing the camera, [0, 0, 1], should map to the rgb values [0.5, 0.5, 1.0]. You have to rescale things so that the negative components end up between 0 and 1.
Second, the normals of a sphere do not change linearly across the disc, but like a sine wave, so you need some trigonometry here. It makes sense to me to start with the normal facing the camera, [0, 0, 1], and then rotate that normal by an angle, because that angle is what changes linearly.
As a result of playing around with this, I came up with this:
http://glslsandbox.com/e#50268.3
which uses some rotation function from here: https://github.com/yuichiroharai/glsl-y-rotate
My understanding is that you can convert gl_FragCoord to a point in world coordinates in the fragment shader if you have the inverse of the view-projection matrix, the screen width, and the screen height. First, you convert gl_FragCoord.x and gl_FragCoord.y from screen space to normalized device coordinates by dividing by the width and height respectively, then scaling and offsetting them into the range [-1, 1]. Next, you transform by the inverse view-projection matrix to get a world-space point, which is only usable after you divide by the w component.
Below is the fragment shader code I have that isn't working. Note inverse_proj is actually set to the inverse view projection matrix:
#version 450
uniform mat4 inverse_proj;
uniform float screen_width;
uniform float screen_height;
out vec4 fragment;
void main()
{
// Convert screen coordinates to normalized device coordinates (NDC)
vec4 ndc = vec4(
(gl_FragCoord.x / screen_width - 0.5) * 2,
(gl_FragCoord.y / screen_height - 0.5) * 2,
0,
1);
// Convert NDC through inverse clip coordinates to view coordinates
vec4 clip = inverse_proj * ndc;
vec3 view = (1 / ndc.w * clip).xyz;
// ...
}
First, you convert gl_FragCoord.x and gl_FragCoord.y from screen space to normalized device coordinates
While simultaneously ignoring the fact that NDC space is three-dimensional (as is window space). You also forgot that the transformation from clip-space to NDC space involved a division, which you did not undo. Well, you did kinda try to undo it, but after transforming by the inverse clip transformation.
Undoing the vertex post-processing transformations uses all four components of gl_FragCoord (though you could make do with just three). The first step is undoing the viewport transform, which requires access to the parameters given to glViewport (the viewport uniform below) and to glDepthRange.
That gives you the NDC coordinate. Then you have to undo the perspective divide. gl_FragCoord.w is given the value 1/clipW. And clipW was the divisor in that operation. So you divide by gl_FragCoord.w to get back into clip space.
From there, you can multiply by the inverse of the projection matrix. Though if you want world-space, the projection matrix you invert must be a world-to-projection, rather than just pure projection (which is normally camera-to-projection).
In-code:
vec4 ndcPos;
ndcPos.xy = ((2.0 * gl_FragCoord.xy) - (2.0 * viewport.xy)) / (viewport.zw) - 1.0;
ndcPos.z = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) /
(gl_DepthRange.far - gl_DepthRange.near);
ndcPos.w = 1.0;
vec4 clipPos = ndcPos / gl_FragCoord.w;
vec4 eyePos = invPersMatrix * clipPos;
Where viewport is a uniform containing the four parameters specified by the glViewport function, in the same order as given to that function.
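If eye space is not enough and you want world space while keeping invPersMatrix as the pure projection inverse, one option (a sketch, not part of the answer above; the invViewMatrix uniform is an assumed name) is to apply the inverse view matrix afterwards:
uniform mat4 invViewMatrix; // assumed uniform: inverse of the view (camera) matrix
// ...continuing from the snippet above...
vec4 eyePos = invPersMatrix * clipPos;   // eye (view) space; w comes out as 1.0
vec4 worldPos = invViewMatrix * eyePos;  // world space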
I figured out the problems with my code. First, as Nicol pointed out, gl_FragCoord.z (depth) needs to be remapped from window space [0, 1] to NDC [-1, 1]. Also, there is a mistake in the original code where I wrote 1 / ndc.w * clip instead of clip / clip.w.
As noted by BDL, it would be more efficient to pass the world position to the fragment shader as a varying (a rough sketch of that approach follows the corrected code below). However, the code below is a short way to achieve the desired result entirely in the fragment shader (e.g. for screen-space programs that don't have a per-fragment world position but want a view vector per fragment).
#version 450
uniform mat4 inverse_view_proj;
uniform float screen_width;
uniform float screen_height;
out vec4 fragment;
void main()
{
// Convert screen coordinates to normalized device coordinates (NDC)
vec4 ndc = vec4(
(gl_FragCoord.x / screen_width - 0.5) * 2.0,
(gl_FragCoord.y / screen_height - 0.5) * 2.0,
(gl_FragCoord.z - 0.5) * 2.0,
1.0);
// Convert NDC through inverse clip coordinates to view coordinates
vec4 clip = inverse_view_proj * ndc;
vec3 vertex = (clip / clip.w).xyz;
// ...
}
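For completeness, BDL's varying-based suggestion would look roughly like this sketch (the model and view_proj uniform names are assumptions, not from the original code):
// --- vertex shader (sketch) ---
#version 450
uniform mat4 model;       // assumed object-to-world matrix
uniform mat4 view_proj;   // assumed view-projection matrix
in vec3 position;
out vec3 world_pos;
void main()
{
    vec4 world = model * vec4(position, 1.0);
    world_pos = world.xyz;               // interpolated and handed to the fragment shader
    gl_Position = view_proj * world;
}
// --- fragment shader (sketch) ---
#version 450
in vec3 world_pos;
out vec4 fragment;
void main()
{
    fragment = vec4(world_pos, 1.0);     // world_pos is already in world space; no reconstruction needed
}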
I've been working on a deferred renderer to do lighting with, and it works quite well, albeit using a position buffer in my G-buffer. Lighting is done in world space.
I have tried to implement an algorithm to recreate the world-space positions from the depth buffer and the texture coordinates, but with no luck.
My vertex shader is nothing particularly special, but this is the part of my fragment shader in which I (attempt to) calculate the world space position:
// Inverse projection matrix
uniform mat4 projMatrixInv;
// Inverse view matrix
uniform mat4 viewMatrixInv;
// texture position from vertex shader
in vec2 TexCoord;
... other uniforms ...
void main() {
// Recalculate the fragment position from the depth buffer
float Depth = texture(gDepth, TexCoord).x;
vec3 FragWorldPos = WorldPosFromDepth(Depth);
... fun lighting code ...
}
// Linearizes a Z buffer value
float CalcLinearZ(float depth) {
const float zFar = 100.0;
const float zNear = 0.1;
// bias it from [0, 1] to [-1, 1]
float linear = zNear / (zFar - depth * (zFar - zNear)) * zFar;
return (linear * 2.0) - 1.0;
}
// this is supposed to get the world position from the depth buffer
vec3 WorldPosFromDepth(float depth) {
float ViewZ = CalcLinearZ(depth);
// Get clip space
vec4 clipSpacePosition = vec4(TexCoord * 2.0 - 1.0, ViewZ, 1);
// Clip space -> View space
vec4 viewSpacePosition = projMatrixInv * clipSpacePosition;
// Perspective division
viewSpacePosition /= viewSpacePosition.w;
// View space -> World space
vec4 worldSpacePosition = viewMatrixInv * viewSpacePosition;
return worldSpacePosition.xyz;
}
I still have my position buffer, and I sample it to compare against the calculated position, so everything should come out black:
vec3 actualPosition = texture(gPosition, TexCoord).rgb;
vec3 difference = abs(FragWorldPos - actualPosition);
FragColour = vec4(difference, 0.0);
However, what I get is nowhere near the expected result, and of course, lighting doesn't work:
(Try to ignore the blur around the boxes, I was messing around with something else at the time.)
What could cause these issues, and how could I get the position reconstruction from depth working successfully? Thanks.
You are on the right track, but you have not applied the transformations in the correct order.
A quick recap of what you need to accomplish here might help:
1. Given texture coordinates in [0, 1] and depth in [0, 1], calculate the clip-space position
   - Do not linearize the depth buffer
   - Output: w = 1.0 and x, y, z in [-w, w]
2. Transform from clip space to view space (reverse projection)
   - Use the inverse projection matrix
   - Perform the perspective divide
3. Transform from view space to world space (reverse viewing transform)
   - Use the inverse view matrix
The following changes should accomplish that:
// this is supposed to get the world position from the depth buffer
vec3 WorldPosFromDepth(float depth) {
float z = depth * 2.0 - 1.0;
vec4 clipSpacePosition = vec4(TexCoord * 2.0 - 1.0, z, 1.0);
vec4 viewSpacePosition = projMatrixInv * clipSpacePosition;
// Perspective division
viewSpacePosition /= viewSpacePosition.w;
vec4 worldSpacePosition = viewMatrixInv * viewSpacePosition;
return worldSpacePosition.xyz;
}
I would also reconsider calling the result of CalcLinearZ (...) a ViewZ; a linearized depth is not a view-space z, and more importantly it is not the NDC z that the inverse projection expects, which is exactly why the original reconstruction was off.
I am trying to create a two-pass effect using an FBO in OpenGL.
In the first pass, I write the depth in a color buffer (image 1):
Using the following in its vertex shader:
gl_Position = projection * view * gl_Vertex;
vec4 position = gl_Position/gl_Position.w;
position = position / 2.0 + 0.5;
float temp_depth = position.z;
gl_FrontColor = vec4(temp_depth,temp_depth,temp_depth,1);
In the second pass I try to use the texture from the previous pass and color the scene (image 2):
Here is the code in vertex shader:
vec4 shadow_coord = projection * view * gl_Vertex;
shadow_coord = shadow_coord / shadow_coord.w;
shadow_coord = shadow_coord / 2.0 + 0.5;
gl_FrontColor = texture2D(light_depth_texture, shadow_coord.xy);
The scene consists of a quad in front of a cone. In both cases the fragment shader is gl_FragColor = gl_Color; and the view and projection matrices are exactly the same, defined at the start. The problem is that there is a deviation in shadow_coord.xy.
As long as the view and projection values are exactly the same, shouldn't I get same result?
What can I do to fix it?
What resolution do you use for the texture you render into? And what kind of filtering? (It seems to be linear; it should be nearest.) Also try to offset the coordinate you read from, like this:
// offset should be 0.5 / texture_resolution
gl_FrontColor = texture2D(light_depth_texture, shadow_coord.xy + offset);
Also, as the other commenters mentioned, 8 bits are not enough to store depth values; consider using a depth texture or a floating-point format (like GL_R32F from ARB_texture_rg).
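For reference, the half-texel offset could be computed from a resolution uniform, along these lines (the light_depth_texture_size uniform is a made-up name):
uniform vec2 light_depth_texture_size; // texture resolution in texels, e.g. vec2(1024.0, 1024.0)
// half-texel offset for a point-sampled lookup
vec2 offset = 0.5 / light_depth_texture_size;
gl_FrontColor = texture2D(light_depth_texture, shadow_coord.xy + offset);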
How do I compute an eye-space coordinate from window-space coordinates (a pixel in the framebuffer) plus the pixel's depth value in GLSL (gluUnProject in GLSL, so to speak)?
Looks to be a duplicate of GLSL convert gl_FragCoord.z into eye-space z.
Edit (complete answer):
// input: x_coord, y_coord, samplerDepth
vec2 xy = vec2(x_coord,y_coord); //in [0,1] range
vec4 v_screen = vec4(xy, texture(samplerDepth,xy), 1.0 );
vec4 v_homo = inverse(gl_ProjectionMatrix) * 2.0*(v_screen-vec4(0.5));
vec3 v_eye = v_homo.xyz / v_homo.w; //transfer from homogeneous coordinates
Assuming you've stuck with a fixed pipeline-style model, view and projection, you can just implement exactly the formula given in the gluUnProject man page.
There's no matrix inversion built into GLSL (at least not in older versions), so ideally you'd do that on the CPU. So you need to supply a uniform containing the inverse of your composed modelViewProjection matrix. gl_FragCoord is in window coordinates, so you also need to supply the view dimensions.
So, you'd probably end up with something like (coding extemporaneously):
vec4 unProjectedPosition = invertedModelViewProjection * vec4(
2.0 * (gl_FragCoord.x - view[0]) / view[2] - 1.0,
2.0 * (gl_FragCoord.y - view[1]) / view[3] - 1.0,
2.0 * gl_FragCoord.z - 1.0,
1.0);
If you've implemented your own analogue of the old matrix stack then you're probably fine inverting a matrix. Otherwise, it's possibly a more daunting topic than you had anticipated and you might be better off using MESA's open source implementation (see invert_matrix, the third function in that file), just because it's well tested if nothing else.
Well, a guy on opengl.org has pointed out that the clip-space coordinates the projection produces are divided by clipPos.w to compute the normalized device coordinates. When reversing the steps from fragment over NDC to clip-space coordinates, you need to reconstruct that w (which happens to be -z of the corresponding view-space (camera) coordinate) and multiply the NDC coordinate by it to get the proper clip-space coordinate (which you can then turn into a view-space coordinate by multiplying with the inverse projection matrix).
The following code assumes that you are processing the frame buffer in a post process. When processing it while rendering geometry, you can use gl_FragCoord.z instead of texture2D (sceneDepth, ndcPos.xy).r.
Here is the code:
uniform sampler2D sceneDepth;
uniform mat4 projectionInverse;
uniform vec2 clipPlanes; // zNear, zFar
uniform vec2 windowSize; // window width, height
#define ZNEAR clipPlanes.x
#define ZFAR clipPlanes.y
#define A (ZNEAR + ZFAR)
#define B (ZNEAR - ZFAR)
#define C (2.0 * ZNEAR * ZFAR)
#define D (ndcPos.z * B)
#define ZEYE -(C / (A + D))
void main()
{
vec3 ndcPos;
ndcPos.xy = gl_FragCoord.xy / windowSize;
ndcPos.z = texture2D (sceneDepth, ndcPos.xy).r; // or gl_FragCoord.z
ndcPos -= 0.5;
ndcPos *= 2.0;
vec4 clipPos;
clipPos.w = -ZEYE;
clipPos.xyz = ndcPos * clipPos.w;
vec4 eyePos = projectionInverse * clipPos;
}
Basically this is a GLSL version of gluUnproject.
I just realized that it's unnecessary to do these computations in the fragment shader. You can save a couple of operations by building the viewport transform on the CPU and multiplying it with the MVP inverse (assuming glDepthRange(0, 1); feel free to edit):
glm::vec4 vp(left, bottom, width, height); // the glViewport parameters
glm::mat4 viewportMat = glm::translate(
    glm::vec3(-2.0 * vp.x / vp.z - 1.0, -2.0 * vp.y / vp.w - 1.0, -1.0))
    * glm::scale(glm::vec3(2.0 / vp.z, 2.0 / vp.w, 2.0));
glm::mat4 mvpInv = inverse(mvp);
glm::mat4 vmvpInv = mvpInv * viewportMat;
shader->uniform("vmvpInv", vmvpInv);
In the shader:
vec4 eyePos = vmvpInv * vec4(gl_FragCoord.xyz, 1);
vec3 pos = eyePos.xyz / eyePos.w;
I think all the available answers touch on the problem from one angle or another, and khronos.org has a wiki page that lists and explains a few different cases with shader code, so it's worth posting here:
Compute eye space from window space.