GLSL: Convert position from view space to world space from screen-quad depth (OpenGL)

I have a deferred renderer that I have created. It writes the normal and depth values to a floating-point texture. From that I can get a specific fragment's position in view space, but I want the pixel's position in world space.
I thought that to get the pixel from VS to WS I would have to multiply it by the camera's inverse world matrix. That doesn't seem to be right though...
depthMap is the depth texture; its w component is clipPos.z / clipPos.w (passed down from the vertex shader as clipPos = gl_Position).
Then in my screen-quad shader I do this:
vec2 texCoord = gl_FragCoord.xy / vec2( viewWidth, viewHeight );
vec2 xy = texCoord * 2.0 - 1.0;
vec4 vertexPositionProjected = vec4( xy, depthMap.w, 1.0 );
vec4 vertexPositionVS = projectionInverseMatrix * vertexPositionProjected;
vertexPositionVS.xyz /= vertexPositionVS.w;
vertexPositionVS.w = 1.0;
// This next line I don't think is correct?
vec3 worldPosition = (camWorldInv * vec4( vertexPositionVS.xyz, 1.0 )).rgb;
The end goal here is to create a fog algorithm that bases the fog calculation both on the distance away from the camera as well as the fragment's height (in world space).
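Once a correct world-space position is available, the fog term itself is the easy part. Here is a minimal sketch of a distance-plus-height fog factor; the uniform names (camPosWorld, fogDensity, fogHeightFalloff) are illustrative assumptions, not part of the original code:
uniform vec3 camPosWorld;       // camera position in world space (assumed)
uniform float fogDensity;       // distance-fog density (assumed)
uniform float fogHeightFalloff; // how quickly fog thins with height (assumed)

float computeFog(vec3 worldPosition)
{
    float dist = length(worldPosition - camPosWorld);             // distance from camera
    float heightAtten = exp(-fogHeightFalloff * worldPosition.y); // fog thins with height
    return clamp(1.0 - exp(-fogDensity * dist * heightAtten), 0.0, 1.0);
}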

Related

How do I convert gl_FragCoord to a world space point in a fragment shader?

My understanding is that you can convert gl_FragCoord to a point in world coordinates in the fragment shader if you have the inverse of the view-projection matrix, the screen width, and the screen height. First, convert gl_FragCoord.x and gl_FragCoord.y from screen space to normalized device coordinates by dividing by the width and height respectively, then scaling and offsetting them into the range [-1, 1]. Next, transform by the inverse view-projection matrix; the result is usable as a world-space point only after dividing by its w component.
Below is the fragment shader code I have that isn't working. Note that inverse_proj is actually set to the inverse view-projection matrix:
#version 450
uniform mat4 inverse_proj;
uniform float screen_width;
uniform float screen_height;
out vec4 fragment;

void main()
{
    // Convert screen coordinates to normalized device coordinates (NDC)
    vec4 ndc = vec4(
        (gl_FragCoord.x / screen_width - 0.5) * 2,
        (gl_FragCoord.y / screen_height - 0.5) * 2,
        0,
        1);

    // Convert NDC through inverse clip coordinates to view coordinates
    vec4 clip = inverse_proj * ndc;
    vec3 view = (1 / ndc.w * clip).xyz;
    // ...
}
First, you convert gl_FragCoord.x and gl_FragCoord.y from screen space to normalized device coordinates
While simultaneously ignoring the fact that NDC space is three-dimensional (as is window space). You also forgot that the transformation from clip-space to NDC space involved a division, which you did not undo. Well, you did kinda try to undo it, but after transforming by the inverse clip transformation.
Undoing the vertex post-processing transformations uses all four components of gl_FragCoord (though you could make do with just three). The first step is undoing the viewport transform, which requires access to the parameters given to glDepthRange.
That gives you the NDC coordinate. Then you have to undo the perspective divide. gl_FragCoord.w is given the value 1/clipW. And clipW was the divisor in that operation. So you divide by gl_FragCoord.w to get back into clip space.
From there, you can multiply by the inverse of the projection matrix. Though if you want world-space, the projection matrix you invert must be a world-to-projection, rather than just pure projection (which is normally camera-to-projection).
In-code:
vec4 ndcPos;
// Undo the viewport transform: window-space xy -> NDC xy in [-1, 1]
ndcPos.xy = ((2.0 * gl_FragCoord.xy) - (2.0 * viewport.xy)) / (viewport.zw) - 1;
// Undo the depth-range transform: window-space z -> NDC z in [-1, 1]
ndcPos.z = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) /
    (gl_DepthRange.far - gl_DepthRange.near);
ndcPos.w = 1.0;

// Undo the perspective divide: gl_FragCoord.w holds 1/clipW
vec4 clipPos = ndcPos / gl_FragCoord.w;
vec4 eyePos = invPersMatrix * clipPos;
Where viewport is a uniform containing the four parameters specified by the glViewport function, in the same order as given to that function.
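If you want a world-space position rather than an eye-space one, you can either invert the composed world-to-projection matrix as described above, or apply the camera's inverse view matrix as a second step. A minimal sketch of the latter, assuming an invViewMatrix uniform (camera-to-world) that is not in the original code:
uniform mat4 invViewMatrix; // assumed: inverse of the view matrix (camera-to-world)

vec4 worldPos = invViewMatrix * eyePos; // eyePos from the snippet above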
I figured out the problems with my code. First, as Nicol pointed out, gl_FragCoord.z (depth) needs to be converted from window coordinates in [0, 1] to NDC in [-1, 1]. Also, there is a mistake in the original code where I wrote 1 / ndc.w * clip instead of clip / clip.w.
As noted by BDL, however, it would be more efficient to pass the world position from the vertex shader to the fragment shader as a varying. Still, the code below is a short way to achieve the desired result entirely in the fragment shader (e.g. for screen-space techniques that don't have a world position per fragment and you want the view vector per fragment).
#version 450
uniform mat4 inverse_view_proj;
uniform float screen_width;
uniform float screen_height;
out vec4 fragment;

void main()
{
    // Convert screen coordinates to normalized device coordinates (NDC)
    vec4 ndc = vec4(
        (gl_FragCoord.x / screen_width - 0.5) * 2.0,
        (gl_FragCoord.y / screen_height - 0.5) * 2.0,
        (gl_FragCoord.z - 0.5) * 2.0,
        1.0);

    // Convert NDC through inverse clip coordinates to world coordinates
    vec4 clip = inverse_view_proj * ndc;
    vec3 vertex = (clip / clip.w).xyz;
    // ...
}
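As a usage note: with the world-space position reconstructed this way, the per-fragment view vector mentioned above is one subtraction away. A short sketch, assuming a camera_position uniform (world-space camera position, not in the original code):
uniform vec3 camera_position; // assumed: camera position in world space

vec3 viewDir = normalize(camera_position - vertex); // per-fragment view vector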

Getting World Position from Depth Buffer Value

I've been working on a deferred renderer to do lighting with, and it works quite well, albeit using a position buffer in my G-buffer. Lighting is done in world space.
I have tried to implement an algorithm to recreate the world-space positions from the depth buffer and the texture coordinates, so far with no luck.
My vertex shader is nothing particularly special, but this is the part of my fragment shader in which I (attempt to) calculate the world space position:
// Inverse projection matrix
uniform mat4 projMatrixInv;
// Inverse view matrix
uniform mat4 viewMatrixInv;
// texture position from vertex shader
in vec2 TexCoord;
... other uniforms ...

void main() {
    // Recalculate the fragment position from the depth buffer
    float Depth = texture(gDepth, TexCoord).x;
    vec3 FragWorldPos = WorldPosFromDepth(Depth);

    ... fun lighting code ...
}
// Linearizes a Z buffer value
float CalcLinearZ(float depth) {
    const float zFar = 100.0;
    const float zNear = 0.1;

    float linear = zNear / (zFar - depth * (zFar - zNear)) * zFar;

    // bias it from [0, 1] to [-1, 1]
    return (linear * 2.0) - 1.0;
}
// this is supposed to get the world position from the depth buffer
vec3 WorldPosFromDepth(float depth) {
    float ViewZ = CalcLinearZ(depth);

    // Get clip space
    vec4 clipSpacePosition = vec4(TexCoord * 2.0 - 1.0, ViewZ, 1);

    // Clip space -> View space
    vec4 viewSpacePosition = projMatrixInv * clipSpacePosition;

    // Perspective division
    viewSpacePosition /= viewSpacePosition.w;

    // View space -> World space
    vec4 worldSpacePosition = viewMatrixInv * viewSpacePosition;

    return worldSpacePosition.xyz;
}
I still have my position buffer, and I sample it to compare it against the calculated position, so everything should come out black:
vec3 actualPosition = texture(gPosition, TexCoord).rgb;
vec3 difference = abs(FragWorldPos - actualPosition);
FragColour = vec4(difference, 0.0);
However, what I get is nowhere near the expected result, and of course, lighting doesn't work. (Try to ignore the blur around the boxes in the screenshot; I was messing around with something else at the time.)
What could cause these issues, and how could I get the position reconstruction from depth working successfully? Thanks.
You are on the right track, but you have not applied the transformations in the correct order.
A quick recap of what you need to accomplish here might help:

1. Given texture coordinates in [0, 1] and depth in [0, 1], calculate the clip-space position
   - Do not linearize the depth buffer
   - Output: w = 1.0 and x, y, z in [-w, w]
2. Transform from clip space to view space (reverse projection)
   - Use the inverse projection matrix
   - Perform the perspective divide
3. Transform from view space to world space (reverse viewing transform)
   - Use the inverse view matrix
The following changes should accomplish that:
// this is supposed to get the world position from the depth buffer
vec3 WorldPosFromDepth(float depth) {
    float z = depth * 2.0 - 1.0;

    vec4 clipSpacePosition = vec4(TexCoord * 2.0 - 1.0, z, 1.0);
    vec4 viewSpacePosition = projMatrixInv * clipSpacePosition;

    // Perspective division
    viewSpacePosition /= viewSpacePosition.w;

    vec4 worldSpacePosition = viewMatrixInv * viewSpacePosition;

    return worldSpacePosition.xyz;
}
As an aside, be careful with the naming of that depth helper: CalcLinearZ (...), as used above, is an appropriate name; calling it something like CalcViewZ (...) would be very misleading, since the function returns a linearized depth value, not a view-space Z.

SSAO not displaying correct results, mostly no visible occlusion

I'm following the tutorial by John Chapman (http://john-chapman-graphics.blogspot.nl/2013/01/ssao-tutorial.html) to implement SSAO in a deferred renderer. The input buffers to the SSAO shaders are:
- World-space positions, with linearized depth as the w component
- World-space normal vectors
- A 4x4 noise texture
I'll first list the complete shader and then briefly walk through the steps:
#version 330 core
in VS_OUT {
    vec2 TexCoords;
} fs_in;

uniform sampler2D texPosDepth;
uniform sampler2D texNormalSpec;
uniform sampler2D texNoise;

uniform vec3 samples[64];
uniform mat4 projection;
uniform mat4 view;
uniform mat3 viewNormal; // transpose(inverse(mat3(view)))

const vec2 noiseScale = vec2(800.0f/4.0f, 600.0f/4.0f);
const float radius = 5.0;

void main( void )
{
    float linearDepth = texture(texPosDepth, fs_in.TexCoords).w;

    // Fragment's view space position and normal
    vec3 fragPos_World = texture(texPosDepth, fs_in.TexCoords).xyz;
    vec3 origin = vec3(view * vec4(fragPos_World, 1.0));
    vec3 normal = texture(texNormalSpec, fs_in.TexCoords).xyz;
    normal = normalize(normal * 2.0 - 1.0);
    normal = normalize(viewNormal * normal); // Normal from world to view-space

    // Use change-of-basis matrix to reorient sample kernel around origin's normal
    vec3 rvec = texture(texNoise, fs_in.TexCoords * noiseScale).xyz;
    vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
    vec3 bitangent = cross(normal, tangent);
    mat3 tbn = mat3(tangent, bitangent, normal);

    // Loop through the sample kernel
    float occlusion = 0.0;
    for (int i = 0; i < 64; ++i)
    {
        // get sample position
        vec3 sample = tbn * samples[i]; // From tangent to view-space
        sample = sample * radius + origin;

        // project sample position to get its position on screen/texture
        vec4 offset = vec4(sample, 1.0);
        offset = projection * offset;
        offset.xy /= offset.w;
        offset.xy = offset.xy * 0.5 + 0.5;

        // get sample depth
        float sampleDepth = texture(texPosDepth, offset.xy).w;

        // range check & accumulate
        // float rangeCheck = abs(origin.z - sampleDepth) < radius ? 1.0 : 0.0;
        occlusion += (sampleDepth <= sample.z ? 1.0 : 0.0);
    }

    occlusion = 1.0 - (occlusion / 64.0f);
    gl_FragColor = vec4(vec3(occlusion), 1.0);
}
The result, however, is not pleasing: the occlusion buffer is mostly all white and doesn't show any occlusion. If I move really close to an object I can see some weird noise-like results.
This is obviously not correct. I've done a fair share of debugging and believe all the relevant variables are correctly passed around (they all visualize as colors). I do the calculations in view-space.
I'll briefly walk through the steps (and choices) I've taken in case any of you figure something goes wrong in one of the steps.
view-space positions/normals
John Chapman retrieves the view-space position using a view ray and a linearized depth value. Since I use a deferred renderer that already has the world-space positions per fragment I simply take those and multiply them with the view matrix to get them to view-space.
I take a similar approach for the normal vectors. I take the world-space normal vectors from a buffer texture, remap them from [0, 1] to the [-1, 1] range, and multiply them by transpose(inverse(mat3(view))).
The view-space positions and normals, visualized as colors, look correct to me.
Orient hemisphere around normal
The steps to create the tbn matrix are the same as described in John Chapman's tutorial. I create the noise texture as follows:
std::vector<glm::vec3> ssaoNoise;
for (GLuint i = 0; i < noise_size; i++)
{
    glm::vec3 noise(randomFloats(generator) * 2.0 - 1.0,
                    randomFloats(generator) * 2.0 - 1.0,
                    0.0f);
    noise = glm::normalize(noise);
    ssaoNoise.push_back(noise);
}
...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, 4, 4, 0, GL_RGB, GL_FLOAT, &ssaoNoise[0]);
I can visualize the noise in the fragment shader so that seems to work.
sample depths
I transform all samples from tangent space to view space (the samples are random in [-1, 1] on the x and y axes and [0, 1] on the z axis) and translate them to the fragment's current view-space position (origin).
I then sample from the linearized depth buffer, which visualizes plausibly when looking close to an object.
and finally compare sampled depth values to current fragment's depth value and add occlusion values. Note that I do not perform a range-check since I don't believe that is the cause of this behavior and I'd rather keep it as minimal as possible for now.
I don't know what is causing this behavior. I believe it is somewhere in sampling the depth values. As far as I can tell I am working in the right coordinate system, linearized depth values are in view-space as well and all variables are set somewhat properly.
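One thing worth double-checking in exactly this situation is the sign convention of the depth comparison: in view space the camera looks down -z, so sample.z is negative, while a linearized depth stored in a buffer is usually a positive distance; with those conventions, sampleDepth <= sample.z can never be true and the buffer stays white. If the buffers here follow those conventions, a sign-consistent comparison plus the usual range check might look like this sketch (not a drop-in fix; the bias value is an assumption, not from the original code):
float bias = 0.025; // assumed small depth bias
// Compare positive linear distances: -sample.z is the sample's distance from the camera
float rangeCheck = smoothstep(0.0, 1.0, radius / abs(-origin.z - sampleDepth));
occlusion += (sampleDepth <= -sample.z - bias ? 1.0 : 0.0) * rangeCheck;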

get vertex world position in glsl

How can I get a vec3 with the the world position of a vertex?
Let's say I want to get white pixels for positions of a cube at Y = 1 in world space and black pixels at Y = 0…
I tried
(vertex shader)
[...]
varying float whiteness;
[...]
vec4 posWorld = gl_ProjectionMatrix * gl_Vertex;
whiteness = clamp(posWorld.y,0.0,1.0);
[...]
(fragment shader)
[...]
varying float whiteness;
[...]
gl_FragColor.rgb = vec3(whiteness);
[...]
But that gives me weird results where the surface still depends on the camera angle and height.
How can I just get the vertex position in world space x,y,z?
Read into how points are transformed from their local space into the coordinates of your screen.
worldMatrix * vertex = worldSpace
viewMatrix * worldSpace = viewSpace
projectionMatrix * viewSpace = clipSpace (screen space after the perspective divide and viewport transform)
You should be passing the World Matrix into the shader and multiplying the vertex by that if you wish to get the position of the vertex.
vec4 posWorld = worldMatrix * gl_Vertex;
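A minimal sketch of that approach, in the same old-style GLSL the question uses (worldMatrix is an assumed uniform name; engines often call this the model matrix):
uniform mat4 worldMatrix; // assumed: model-to-world matrix, supplied by the application
varying float whiteness;

void main()
{
    vec4 posWorld = worldMatrix * gl_Vertex;  // local space -> world space
    whiteness = clamp(posWorld.y, 0.0, 1.0);  // white at Y = 1, black at Y = 0
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
The question's fragment shader can then stay exactly as it is.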

OpenGL: Compute eye space coord from window space coord in GLSL?

How do I compute an eye space coordinate from window space (pixel in the frame buffer) coordinates + pixel depth value in GLSL please (gluUnproject in GLSL so to speak)?
Looks to be duplicate of GLSL convert gl_FragCoord.z into eye-space z.
Edit (complete answer):
// input: x_coord, y_coord, samplerDepth
vec2 xy = vec2(x_coord, y_coord); // in [0, 1] range
vec4 v_screen = vec4(xy, texture(samplerDepth, xy).r, 1.0);
vec4 v_homo = inverse(gl_ProjectionMatrix) * (2.0 * (v_screen - vec4(0.5)));
vec3 v_eye = v_homo.xyz / v_homo.w; // convert from homogeneous coordinates
Assuming you've stuck with a fixed pipeline-style model, view and projection, you can just implement exactly the formula given in the gluUnProject man page.
GLSL only gained a built-in inverse() in version 1.40, and inverting a matrix per fragment is wasteful anyway, so ideally you'd do that on the CPU. You need to supply a uniform containing the inverse of your composed modelViewProjection matrix. gl_FragCoord is in window coordinates, so you also need to supply the view dimensions.
So, you'd probably end up with something like (coding extemporaneously):
vec4 unProjectedPosition = invertedModelViewProjection * vec4(
2.0 * (gl_FragCoord.x - view[0]) / view[2] - 1.0,
2.0 * (gl_FragCoord.y - view[1]) / view[3] - 1.0,
2.0 * gl_FragCoord.z - 1.0,
1.0);
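One step that snippet leaves implicit: the result is a homogeneous coordinate, so, just as with gluUnProject, you still need the divide by w to get an actual position:
vec3 position = unProjectedPosition.xyz / unProjectedPosition.w;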
If you've implemented your own analogue of the old matrix stack then you're probably fine inverting a matrix. Otherwise, it's possibly a more daunting topic than you had anticipated and you might be better off using MESA's open source implementation (see invert_matrix, the third function in that file), just because it's well tested if nothing else.
Well, a guy on opengl.org has pointed out that the clip-space coordinates the projection produces are divided by clipPos.w to compute the normalized device coordinates. When reversing the steps from fragment over NDC to clip-space coordinates, you need to reconstruct that w (which happens to be -z of the corresponding view-space (camera) coordinate) and multiply the NDC coordinate by that value to compute the proper clip-space coordinate (which you can turn into a view-space coordinate by multiplying it by the inverse projection matrix).
The following code assumes that you are processing the frame buffer in a post process. When processing it while rendering geometry, you can use gl_FragCoord.z instead of texture2D (sceneDepth, ndcPos.xy).r.
Here is the code:
uniform sampler2D sceneDepth;
uniform mat4 projectionInverse;
uniform vec2 clipPlanes; // zNear, zFar
uniform vec2 windowSize; // window width, height

#define ZNEAR clipPlanes.x
#define ZFAR clipPlanes.y

#define A (ZNEAR + ZFAR)
#define B (ZNEAR - ZFAR)
#define C (2.0 * ZNEAR * ZFAR)
#define D (ndcPos.z * B)
#define ZEYE -(C / (A + D))

void main()
{
    vec3 ndcPos;
    ndcPos.xy = gl_FragCoord.xy / windowSize;
    ndcPos.z = texture2D (sceneDepth, ndcPos.xy).r; // or gl_FragCoord.z
    ndcPos -= 0.5;
    ndcPos *= 2.0;

    vec4 clipPos;
    clipPos.w = -ZEYE;
    clipPos.xyz = ndcPos * clipPos.w;

    vec4 eyePos = projectionInverse * clipPos;
}
Basically this is a GLSL version of gluUnproject.
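As a sanity check of the macros, assuming the standard OpenGL perspective projection (an assumption about the projection in use), the depth mapping is

$$z_{ndc} = \frac{(f + n)\,z_{eye} + 2fn}{(f - n)\,z_{eye}},$$

and solving for the eye-space z with n = ZNEAR and f = ZFAR gives

$$z_{eye} = \frac{2fn}{z_{ndc}(f - n) - (f + n)} = -\frac{C}{A + D},$$

which is exactly the ZEYE macro above.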
I just realized that it's unnecessary to do these computations in the fragment shader. You can save a couple operations by doing this on the CPU and multiplying it with the MVP inverse (assuming glDepthRange(0, 1), feel free to edit):
glm::vec4 vp(left, bottom, width, height); // viewport rectangle, as passed to glViewport
glm::mat4 viewportMat = glm::translate(
    glm::vec3(-2.0 * vp.x / vp.z - 1.0, -2.0 * vp.y / vp.w - 1.0, -1.0))
    * glm::scale(glm::vec3(2.0 / vp.z, 2.0 / vp.w, 2.0));
glm::mat4 mvpInv = glm::inverse(mvp);
glm::mat4 vmvpInv = mvpInv * viewportMat;
shader->uniform("vmvpInv", vmvpInv);
In the shader:
vec4 eyePos = vmvpInv * vec4(gl_FragCoord.xyz, 1);
vec3 pos = eyePos.xyz / eyePos.w;
I think all the available answers touch the problem from one angle only, and khronos.org has a wiki page with a few different cases listed and explained with shader code, so it's worth posting here:
Compute eye space from window space.