GLSL: convert gl_FragCoord.z into eye-space z

This is a simple question, and I'm sick of searching the web for the right equation.
The main problem is that everyone suggests doing something like this in the vertex shader:
varying float depth;
depth = ( gl_ModelViewMatrix * gl_Vertex ).z;
But I can't, because the depth is stored in a texture.
So anyway, I know the depth value and the projection matrix used to create it from the eye-space coordinates.
If you don't quite understand, tell me and I'll try to word it better.
Thanks in advance. :)

If you extract a depth value from the texture, it's in the [0,1] range. First you'll need to scale it into the [-1,1] range and then apply the inverse projection to get the view-space (eye-space) depth:
vec2 xy = vec2(x_coord, y_coord);                    // in [0,1] range
float depth = texture(samplerDepth, xy).r;           // in [0,1] range
vec4 ndc = vec4(vec3(xy, depth) * 2.0 - 1.0, 1.0);   // scale to [-1,1]
vec4 v_view = inverse(gl_ProjectionMatrix) * ndc;    // back to eye space
float view_depth = v_view.z / v_view.w;              // undo the homogeneous divide
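If the projection is a standard perspective one and you know the near and far planes, a direct formula avoids the full matrix inverse. A minimal sketch, assuming hypothetical zNear and zFar uniforms (not part of the original answer):
float z_ndc = texture(samplerDepth, xy).r * 2.0 - 1.0;                        // depth from [0,1] to [-1,1]
float z_eye = -2.0 * zNear * zFar / (zFar + zNear - z_ndc * (zFar - zNear)); // negative in front of the camera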

Related

OpenGL Normal Mapping

I'm trying to implement normal mapping, using a simple cube that I created. I followed this tutorial https://learnopengl.com/Advanced-Lighting/Normal-Mapping but I can't really work out how normal mapping should be done when drawing 3D objects, since the tutorial uses a 2D object.
In particular, my cube seems almost correctly lit, but I think something isn't working the way it should. To help me debug, I'm using a geometry shader that outputs the normals as green vectors and the tangents as red vectors. Here I post three screenshots of my work.
Directly lit
Lit from the side
Here I actually tried calculating my normals and tangents in a different way (quite wrong).
In the first image I calculate my cube's normals and tangents one face at a time. This seems to work for that face, but if I rotate my cube I think the lighting on the adjacent face is wrong. As you can see in the second image, it's not totally absent.
In the third image, I tried summing all normals and tangents per vertex, as I think it should be done, but the result seems quite wrong, since there is too little lighting.
In the end, my question is how I should calculate normals and tangents.
Should I do per-face calculations, sum the vectors per vertex across all the faces that share it, or something else?
EDIT --
I'm passing the normal and tangent to the vertex shader and setting up my TBN matrix. But as you can see in the first image, drawing my cube face by face, the faces adjacent to the one I'm looking at directly (which is well lit) are not correctly lit, and I don't know why. I thought I wasn't correctly calculating my per-face normal and tangent, and that calculating a normal and tangent that take the object as a whole into account could be the right way.
If it's right to calculate the normal and tangent as shown in the second image (green normal, red tangent) to set up the TBN matrix, why does the right face seem poorly lit?
EDIT 2 --
Vertex shader:
void main(){
    texture_coordinates = textcoord;
    fragment_position = vec3(model * vec4(position, 1.0));
    mat3 normalMatrix = transpose(inverse(mat3(model)));
    vec3 T = normalize(normalMatrix * tangent);
    vec3 N = normalize(normalMatrix * normal);
    T = normalize(T - dot(T, N) * N);      // Gram-Schmidt re-orthogonalization
    vec3 B = cross(N, T);
    mat3 TBN = transpose(mat3(T, B, N));   // world -> tangent space
    view_position = TBN * viewPos;         // camera position
    light_position = TBN * lightPos;       // light position
    fragment_position = TBN * fragment_position;
    gl_Position = projection * view * model * vec4(position, 1.0);
}
In the vertex shader I set up my TBN matrix and transform the light, fragment and view vectors to tangent space; this way I won't have to do any extra calculations in the fragment shader.
Fragment shader:
void main() {
    vec3 Normal = texture(TextSamplerNormals, texture_coordinates).rgb; // extract normal
    Normal = normalize(Normal * 2.0 - 1.0);                             // correct range
    material_color = texture2D(TextSampler, texture_coordinates.st);    // diffuse map
    vec3 I_amb = AmbientLight.color * AmbientLight.intensity;
    vec3 lightDir = normalize(light_position - fragment_position);
    vec3 I_dif = vec3(0, 0, 0);
    float DiffusiveFactor = max(dot(lightDir, Normal), 0.0);
    vec3 I_spe = vec3(0, 0, 0);
    float SpecularFactor = 0.0;
    if (DiffusiveFactor > 0.0) {
        I_dif = DiffusiveLight.color * DiffusiveLight.intensity * DiffusiveFactor;
        vec3 vertex_to_eye = normalize(view_position - fragment_position);
        vec3 light_reflect = reflect(-lightDir, Normal);
        light_reflect = normalize(light_reflect);
        SpecularFactor = pow(max(dot(vertex_to_eye, light_reflect), 0.0), SpecularLight.power);
        if (SpecularFactor > 0.0) {
            I_spe = DiffusiveLight.color * SpecularLight.intensity * SpecularFactor;
        }
    }
    color = vec4(material_color.rgb * (I_amb + I_dif + I_spe), material_color.a);
}
Handling discontinuity vs continuity
You are thinking about this the wrong way.
Depending on the use case, your normal map may be continuous or discontinuous. For example, in your cube, imagine each face had a different surface type; then the normals would be different depending on which face you are currently on.
Which normal you use is determined by the texture itself and not by any blending in the fragment shader.
The actual algorithm is:
1. Load the RGB values of the normal.
2. Convert them to the [-1, 1] range.
3. Rotate the result by the model matrix.
4. Use the new value in the shading calculations.
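A minimal fragment-shader sketch of these steps, assuming a uNormalMap sampler and a vTBN matrix built from the model matrix in the vertex shader (the names are illustrative, not from the original answer):
vec3 n = texture(uNormalMap, texture_coordinates).rgb;  // 1. load the RGB values
n = n * 2.0 - 1.0;                                      // 2. convert to the [-1, 1] range
n = normalize(vTBN * n);                                // 3. rotate into world space
// 4. use n in the shading calculations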
If you want continuous normals, then you need to make sure that the charts in texture space that you use agree at their boundaries, i.e. the limits of the texture-coordinate mapping match.
Mathematically, if U and V are regions of R^2 that map onto your shape via S, and f is the mapping into the normal field N, then it should hold that:
If lim S(x_1, x_2) = lim S(y_1, y_2), where {x_1, x_2} ⊂ U and {y_1, y_2} ⊂ V, then lim f(x_1, x_2) = lim f(y_1, y_2).
In plain English, if the coordinates in your charts map to positions that are close on the shape, then the normals they map to should also be close in normal space.
TL;DR: do not blend in the fragment shader. This is something that should be done by the normal map itself when it's baked, not by you when rendering.
Handling the tangent space
You have two options. Option 1: you pass the tangent T and the normal N to the shader, in which case the binormal B is T × N and the basis {T, N, B} gives you the true space in which the normals need to be expressed.
Assume that in tangent space x is sideways, y is forward, and z is up. Your transformed normal then becomes xB + yT + zN.
Option 2: if you do not pass the tangent, you must first create an arbitrary vector that is orthogonal to the normal and use that as the tangent.
(Note: N is the model normal, while (x, y, z) is the normal-map normal.)
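If no tangent attribute is available, a sketch of deriving an arbitrary orthogonal tangent could look like this (the helper-axis trick; the function name is illustrative, not from the original answer):
vec3 makeTangent(vec3 N)
{
    // pick a helper axis that is not parallel to N, then build an orthogonal vector
    vec3 helper = abs(N.y) < 0.99 ? vec3(0.0, 1.0, 0.0) : vec3(1.0, 0.0, 0.0);
    return normalize(cross(helper, N));
}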

Omnidirectional Lighting in OpenGL/GLSL 4.1

I've gotten shadows working properly for my Directional Lights, but I'm a little stumped when it comes to Point Lights. My idea is to use a cube map to render the depth from all six sides surrounding the light. So far, that's all working and good. I have verified this step by rendering each face of my cube to a 2D image, and it appears to be correct.
Now I'm trying to get the shadows to show up in the world. To do so, I am using GLSL's samplerCubeShadow data type. With it, I do:
vec3 lightToFrag = light.position - fragPos;
float lenLightToFrag = length(lightToFrag);
vec3 normLightToFrag = normalize(lightToFrag);
float shadow = texture(depthTexture, vec4(normLightToFrag, lenLightToFrag));
I've tried a number of configurations, and this always renders my scene in black. Any ideas? My fragPos is just the model matrix times the vertex position. Should I be applying the light's model-view matrix to it? Or, similarly, should I be applying the world's model-view matrix to the light? Any feedback is really appreciated!
Assuming you are storing depth values in the cubemap:
AFAIK a cubemap is an AABB in world space, so you need to do the calculations in world space. In your case light.position and fragPos must be in world space, or you should provide alternative variables/members if you use these names in view space somewhere else, e.g. for per-fragment light calculations.
You also need to convert lightToFrag to a depth value before passing it to texture().
This answer shows how to convert lightToFrag to a depth value: Omnidirectional shadow mapping with depth cubemap
Here is my implementation (I removed the #ifdef SHAD_CUBE because others use the same name):
uniform samplerCubeShadow uShadMap;
uniform vec2 uFarNear;

float depthValue(const in vec3 v) {
    vec3 absv = abs(v);
    float z = max(absv.x, max(absv.y, absv.z));
    return uFarNear.x + uFarNear.y / z;
}

float shadowCoef() {
    vec3 L;
    float d;
    L = vPosWS - light.position_ws;
    d = depthValue(L);
    return texture(uShadMap, vec4(L, d));
}
This may require a uniform model matrix if you only have the ModelViewProjection (MVP) matrix.
Here is how to calculate uFarNear on the client side:
float n, f, nfsub, nf[2];
n = sm->near;
f = sm->far;
nfsub = f - n;
nf[0] = (f + n) / nfsub * 0.5f + 0.5f;
nf[1] =-(f * n) / nfsub;
glUniform2f(gkUniformLoc(prog, "uFarNear"), nf[0], nf[1]);
This is just an optimization; you don't have to use it and can instead follow the link I mentioned before.
You may need a bias value; the related answer uses one, but I'm not sure how to apply it to a cubemap correctly, or whether d ± 0.0001 is the right way.
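If you do experiment with that bias, a sketch of how it could be applied inside shadowCoef() (the 0.0001 value is only the one mentioned above, not a verified constant):
return texture(uShadMap, vec4(L, d - 0.0001)); // biased compare value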
If you want to store world distances in the cubemap then this tutorial seems a good one: https://learnopengl.com/Advanced-Lighting/Shadows/Point-Shadows

Smoothly transition from orthographic projection to perspective projection?

I'm developing a game that consists of 2 stages, one of these has an orthographic projection, and the other stage has a perspective projection.
Currently when we go between modes we fade to black, and then come back in the new camera mode.
How would I go about smoothly transitioning between the two?
There are probably a handful of ways of accomplishing this; the two I found that seemed like they would work best were:
Lerping all the matrix elements from one matrix to the other (a rough sketch follows after this list). Apparently this works pretty well, all things considered. I don't believe this transition will appear linear, though; you could try to give it an easing function instead of doing the interpolation linearly.
A dolly zoom on the perspective matrix going to/from a near 0 field of view. You would pop from the orthographic matrix to the near 0 perspective matrix and lerp the fov out to your target, and probably be heavily tweaking the near/far planes as you go. In reverse you would lerp to 0 and then pop to the orthographic matrix. The idea behind this being that things appear flatter with a lower fov and that a fov of 0 is essentially an orthographic projection. This is more complex but can also be tweaked a whole lot more.
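A rough sketch of the first option, assuming orthoProj and perspProj are the two projection matrices and t is an eased blend factor in [0, 1] (the names are illustrative, not from the original answer):
mat4 blended = mat4(
    mix(orthoProj[0], perspProj[0], t),  // lerp each column (a vec4) separately,
    mix(orthoProj[1], perspProj[1], t),  // since GLSL's mix() does not accept matrices
    mix(orthoProj[2], perspProj[2], t),
    mix(orthoProj[3], perspProj[3], t)
);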
If you have access to a programmable pipeline (a.k.a. shaders), you can do the transition in the vertex shader. I have found that this works very well and does not introduce artifacts. Here's a GLSL code snippet:
#version 150
uniform mat4 uModelMatrix;
uniform mat4 uViewMatrix;
uniform mat4 uProjectionMatrix;
uniform float uNearClipPlane = 1.0;
uniform vec2 uPerspToOrtho = vec2( 0.0 );
in vec4 inPosition;
void main( void )
{
    // Calculate view space position.
    vec4 view = uViewMatrix * uModelMatrix * inPosition;
    // Scale x&y to 'undo' perspective projection.
    view.x = mix( view.x, view.x * ( -view.z / uNearClipPlane ), uPerspToOrtho.x );
    view.y = mix( view.y, view.y * ( -view.z / uNearClipPlane ), uPerspToOrtho.y );
    // Output clip space coordinate.
    gl_Position = uProjectionMatrix * view;
}
In the code, uPerspToOrtho is a vec2 (e.g. a float2) that contains a value in the range [0..1]. When set to 0, your coordinates will use perspective projection (assuming your projection matrix is a perspective one). When set to 1, your coordinates will behave as if projected by an orthographic projection matrix. You can do this separately for the X- and Y-axes.
'uNearClipPlane' is the near plane distance, which is the value you used to create the perspective projection matrix.
When converting this to HLSL, you may need to use view.z instead of -view.z, but I could be wrong.
I hope you find this useful.
Edit: instead of passing in the near clip plane distance, you can also extract it from the projection matrix. For OpenGL, this is how:
float zNear = 2.0 * uProjectionMatrix[3][2] / ( 2.0 * uProjectionMatrix[2][2] - 2.0 );
Edit 2: you can optimize the code by doing the scaling on x and y at the same time:
view.xy = mix( view.xy, view.xy * ( -view.z / uNearClipPlane ), uPerspToOrtho.xy );
To get rid of the division, you could multiply by the inverse near plane distance:
uniform float uInvNearClipPlane; // = 1.0 / zNear
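With that uniform, the scaling line from above would presumably become:
view.xy = mix( view.xy, view.xy * ( -view.z * uInvNearClipPlane ), uPerspToOrtho.xy );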
I managed to do this without the explicit use of matrices. I used Java so the syntax is different but comparable. One of the things I used was this mix() function. It returns value1 when factor is 1 and value2 when factor is 0, and has a linear transition for every value in between.
private double mix(double value1, double value2, double factor)
{
    return (value1 * factor) + (value2 * (1 - factor));
}
When I call this function, I use value1 for perspective and value2 for orthographic, like so: mix(focalLength / voxel.z, orthoZoom, factor)
When determining your focal length and orthographic zoom factor, it is helpful to know that anything at distance focalLength/orthoZoom away from the camera will project to the same point throughout the transition.
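For example, to keep objects at a chosen pivot distance the same size throughout the transition, the orthographic zoom could be derived from that distance (pivotDistance is an illustrative name, not from the original answer):
float orthoZoom = focalLength / pivotDistance; // objects at pivotDistance keep their projected size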
Hope this helps. You can download my program to see how it looks at https://github.com/npetrangelo/3rd-Dimension/releases.

shadow mapping - transforming a view space position to the shadow map space

I use deferred rendering and I store a fragment position in the camera view space. When I perform a shadow calculation I need to transform a camera view space to the shadow map space. I build a shadow matrix this way:
shadowMatrix = shadowBiasMatrix * lightProjectionMatrix * lightViewMatrix * inverseCameraViewMatrix;
shadowBiasMatrix shifts values from [-1,1] to [0,1] range.
lightProjectionMatrix is the orthographic projection matrix for a directional light. lightViewMatrix looks at the frustum center and contains the light direction.
inverseCameraViewMatrix transforms a fragment position from a camera view space to the world space.
I wonder if it is correct to multiply the inverse camera view matrix with the other matrices? Maybe I should use the inverse camera view matrix separately?
First scenario:
vec4 shadowCoord = shadowMatrix * vec4(cameraViewSpacePosition, 1.0);
Second scenario, the inverse camera view matrix is used separately:
vec4 worldSpacePosition = inverseCameraViewSpaceMatrix * vec4(cameraViewSpacePosition, 1.0);
vec4 shadowCoord = shadowMatrix * worldSpacePosition;
Precomputing the shadow matrix in the described way is the correct approach and should work as expected.
Because of the associativity of matrix multiplication, the results of both scenarios should be identical (ignoring floating point precision) and are thus interchangeable.
But because these calculations are done in the fragment shader, it is best to premultiply the matrices in the main program to do as few operations as possible per fragment.
I'm also writing a deferred renderer currently and calculate my matrices in the same way without any problems.
// precomputed: lightspace_mat = light_projection * light_view * inverse_cam_view
// calculate position in clip-space of the lightsource
vec4 lightspace_pos = lightspace_mat * vec4(viewspace_pos, 1.0);
// perspective divide
lightspace_pos/=lightspace_pos.w;
// move range from [-1.0, 1.0] to [0.0, 1.0]
lightspace_pos = lightspace_pos * vec4(0.5) + vec4(0.5);
// sample shadowmap
float shadowmap_depth = texture(shadowmap, lightspace_pos.xy).r;
float fragment_depth = lightspace_pos.z;
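The actual shadow test is then just a comparison of the two depths; one possible final step (the bias value is illustrative, not from the original answer):
float bias = 0.0005;                                                  // small offset against self-shadowing
float shadow = (fragment_depth - bias) > shadowmap_depth ? 0.0 : 1.0; // 0 = shadowed, 1 = lit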
I also found this tutorial using a similar approach, which could be helpful: http://www.codinglabs.net/tutorial_opengl_deferred_rendering_shadow_mapping.aspx
float readShadowMap(vec3 eyeDir)
{
    mat4 cameraViewToWorldMatrix = inverse(worldToCameraViewMatrix);
    mat4 cameraViewToProjectedLightSpace = lightViewToProjectionMatrix * worldToLightViewMatrix * cameraViewToWorldMatrix;
    vec4 projectedEyeDir = cameraViewToProjectedLightSpace * vec4(eyeDir, 1.0);
    projectedEyeDir = projectedEyeDir / projectedEyeDir.w;
    vec2 textureCoordinates = projectedEyeDir.xy * vec2(0.5, 0.5) + vec2(0.5, 0.5);
    const float bias = 0.0001;
    float depthValue = texture2D(tShadowMap, textureCoordinates).r - bias;
    return (projectedEyeDir.z * 0.5 + 0.5 < depthValue) ? 1.0 : 0.0;
}
The eyeDir that comes in as input is in View Space. To find the pixel in the shadow map we need to take that point and convert it into the light's clip space, which means going from Camera View Space into World Space, then into Light View Space, then into Light Projection Space/Clip Space. All these transformations are done using matrices; if you are not familiar with space changes you may want to read my article about spaces and transformations.
Once we are in the right space we calculate the texture coordinates and we are finally ready to read from the shadow map. Bias is a small offset that we apply to the values in the map to avoid a point ending up shading itself because of rounding errors! So we shift the whole map back a bit, so that all the values in the map are slightly smaller than they should be.

reconstructed world position from depth is wrong

I'm trying to implement deferred shading/lighting. In order to reduce the number/size of the buffers I use I wanted to use the depth texture to reconstruct world position later on.
I do this by multiplying the pixel's coordinates with the inverse of the projection matrix and the inverse of the camera matrix. This sort of works, but the position is a bit off. Here's the absolute difference with a sampled world position texture:
For reference, this is the code I use in the second pass fragment shader:
vec2 screenPosition_texture = vec2(gl_FragCoord.x / WIDTH, gl_FragCoord.y / HEIGHT);
float pixelDepth = texture2D(depth, screenPosition_texture).x;
vec4 worldPosition = pMatInverse * vec4(VertexIn.position, pixelDepth, 1.0);
worldPosition = vec4(worldPosition.xyz / worldPosition.w, 1.0);
//worldPosition /= 1.85;
worldPosition = cMatInverse * worldPosition;
If I uncomment worldPosition /= 1.85, the position is reconstructed a lot better (on my geometry/range of depth values). I just got this value by messing around after comparing my output with what it should be (stored in a third texture).
I'm using 0.1 for the near plane, 100.0 for the far plane, and my geometry is up to about 15 units away.
I know there may be precision errors, but this seems too big an error this close to the camera.
Did I miss anything here?
As mentioned in a comment:
I didn't convert the depth value from window space ([0,1]) to NDC space ([-1,1]).
I should have added this line:
pixelDepth = pixelDepth * 2.0 - 1.0;
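Putting that fix together with the original snippet, the full reconstruction could look like this (a sketch reusing the question's variable names; VertexIn.position.xy is assumed to already be in [-1,1]):
vec2 screenPosition_texture = gl_FragCoord.xy / vec2(WIDTH, HEIGHT);
float pixelDepth = texture2D(depth, screenPosition_texture).x;
pixelDepth = pixelDepth * 2.0 - 1.0;                          // [0,1] window depth -> NDC [-1,1]
vec4 viewPosition = pMatInverse * vec4(VertexIn.position, pixelDepth, 1.0);
viewPosition /= viewPosition.w;                               // undo the perspective divide
vec4 worldPosition = cMatInverse * viewPosition;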