Drawing a sphere normal map in the fragment shader - glsl

I'm trying to draw a simple sphere with normal mapping in the fragment shader with GL_POINTS. At present, I simply draw one point on the screen and apply a fragment shader to "spherify" it.
However, I'm having trouble colouring the sphere correctly (or at least I think I am). It seems that I'm calculating the z correctly but when I apply the 'normal' colours to gl_FragColor it just doesn't look quite right (or is this what one would expect from a normal map?). I'm assuming there is some inconsistency between gl_PointCoord and the fragment coord, but I can't quite figure it out.
Vertex shader
precision mediump float;
attribute vec3 position;
void main() {
    gl_PointSize = 500.0;
    gl_Position = vec4(position.xyz, 1.0);
}
fragment shader
precision mediump float;
void main() {
    float x = gl_PointCoord.x * 2.0 - 1.0;
    float y = gl_PointCoord.y * 2.0 - 1.0;
    float z = sqrt(1.0 - (pow(x, 2.0) + pow(y, 2.0)));
    vec3 position = vec3(x, y, z);
    float mag = dot(position.xy, position.xy);
    if (mag > 1.0) discard;
    vec3 normal = normalize(position);
    gl_FragColor = vec4(normal, 1.0);
}
Actual output:
Expected output:

The color channels are clamped to the range [0, 1]. (0, 0, 0) is black and (1, 1, 1) is completely white.
Since the normal vector is normalized, its components are in the range [-1, 1].
To get the expected result you have to map the normal vector from the range [-1, 1] to [0, 1]:
vec3 normal_col = normalize(position) * 0.5 + 0.5;
gl_FragColor = vec4(normal_col, 1.0);
If you use the absolute value instead, then positive and negative components of the same magnitude get the same color representation; the intensity of the color increases with the magnitude of the value:
vec3 normal_col = abs(normalize(position));
gl_FragColor = vec4(normal_col, 1.0);
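For reference, the same mapping dropped into the question's shader gives a complete corrected fragment shader. This is a minimal sketch; the only change beyond the answer is moving the discard before the square root, so sqrt is never fed a negative number:
precision mediump float;
void main() {
    // map gl_PointCoord from [0, 1] to [-1, 1]
    vec2 p = gl_PointCoord * 2.0 - 1.0;
    // reject fragments outside the unit circle before taking the square root
    float mag = dot(p, p);
    if (mag > 1.0) discard;
    float z = sqrt(1.0 - mag);
    vec3 normal = normalize(vec3(p, z));
    // map the normal from [-1, 1] to [0, 1] so it can be shown as a color
    vec3 normal_col = normal * 0.5 + 0.5;
    gl_FragColor = vec4(normal_col, 1.0);
}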

First of all, the normal facing the camera, [0, 0, -1], should map to the RGB values [0.5, 0.5, 1.0]. You have to rescale things to move those negative values into the range 0 to 1.
Second, the normals of a sphere do not change linearly, but in a sine wave, so you need some trigonometry here. It makes sense to me to start with the perpendicular normal [0, 0, -1] and then rotate that normal by an angle, because that angle is what changes linearly.
As a result of playing around with this I came up with the following:
http://glslsandbox.com/e#50268.3
which uses some rotation function from here: https://github.com/yuichiroharai/glsl-y-rotate

Related

Adaptive Depth Bias for Shadow Maps Ray Casting

I have found this paper dealing with how to compute the perfect bias when dealing with a shadow map.
The idea is to:
get the texel used when sampling the shadow map
project the texel location back to eye space (ray tracing)
get the difference between your fragment.z and the intersection with the fragment's face
This way you have calculated the error, which serves as the appropriate bias against z-fighting.
Now I am trying to implement it, but I am running into some problems:
I am using an orthographic projection matrix, so I think I don't need to divide by w back and forth.
I am fine up until computing the ray intersection with the face.
I have a lot of faces failing the test, and my bias is way too large.
This is my fragment shader code:
float getBias(float depthFromTexture)
{
    vec3 n = lightFragNormal.xyz;
    // no need to divide by w, we have an ortho projection
    // we are in NDC [-1,1]; we go to [0,1]
    //vec4 smTexCoord = 0.5 * shadowCoord + vec4(0.5, 0.5, 0.5, 0.0);
    vec4 smTexCoord = shadowCoord;
    // we are in [0,1]; we go to texture space [0,1] -> [0,shadowMap.dimension] : [0,1024]
    // get the nearest index in the shadow map, the texel corresponding to our fragment; we use floor: (125.6,237.9) -> (125,237)
    vec2 delta = vec2(xPixelOffset, yPixelOffset);
    vec2 textureDim = vec2(1.0 / xPixelOffset, 1.0 / yPixelOffset);
    vec2 index = floor(smTexCoord.xy * textureDim);
    // we take the center of the current texel, adding 0.5 to put us in the middle: (125,237) -> (125.5,237.5)
    // then go back from [0,1024] to [0,1]: (125.5,237.5) -> (0.12, 0.23)
    vec2 nlsGridCenter = delta * (index + vec2(0.5, 0.5));
    // go back to NDC: [0,1] -> [-1,1]
    vec2 lsGridCenter = 2.0 * nlsGridCenter - vec2(1.0);
    // compute the light-space grid direction, multiplying by the inverse projection matrix
    vec4 lsGridCenter4 = inverse(lightProjectionMatrix) * vec4(lsGridCenter, -frustrumNear, 0.0);
    vec3 lsGridLineDir = vec3(normalize(lsGridCenter4));
    /** Plane/ray intersection **/
    // Locate the potential occluder for the shading fragment:
    // compute the distance t we need to travel along the grid direction; the hit point is "t" far away
    float ls_t_hit = dot(n, lightFragmentCoord.xyz) / dot(n, lsGridLineDir);
    if (ls_t_hit <= 0.0) {
        return 0.0; // I get a lot of negative values; that shouldn't be the case
    }
    // compute the intersection point p with the face
    vec3 ls_hit_p = ls_t_hit * lsGridLineDir;
    float intersectionDepth = (lightProjectionMatrix * vec4(ls_hit_p, 1.0)).z / 2.0 + 0.5;
    float fragmentDepth = (lightProjectionMatrix * vec4(lightFragmentCoord.xyz, 1.0)).z / 2.0 + 0.5;
    float result = abs(intersectionDepth - fragmentDepth);
    return result;
}
I am struggling with this line:
vec4 lsGridCenter4 = inverse(lightProjectionMatrix) * vec4(lsGridCenter, -frustrumNear, 0);
I don't know if I am correct; maybe it should be:
vec4(lsGridCenter, -frustrumNear, 1);
And of course there is the plane intersection. From Wikipedia (line-plane intersection), the distance along the ray is:
d = dot(p0 - l0, n) / dot(l, n)
where:
l = my normalized direction vector
p0 = a point belonging to the plane
l0 = the origin of the ray; I think it's the origin, so in eye space it should be (0,0,0) (I might be wrong here)
n = the normal of the plane, i.e. the normal of my fragment in eye space
in my code:
float ls_t_hit = dot(n, lightFragmentCoord.xyz) / dot(n, lsGridLineDir);
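For what it's worth, a direct GLSL transcription of that formula might look like the sketch below; rayOrigin, rayDir, planePoint and planeNormal are illustrative names, not variables from the code above:
// distance t along the ray at which it hits the plane:
// t = dot(p0 - l0, n) / dot(l, n)
float rayPlaneIntersect(vec3 rayOrigin, vec3 rayDir, vec3 planePoint, vec3 planeNormal)
{
    float denom = dot(planeNormal, rayDir);
    if (abs(denom) < 1e-6) return -1.0; // ray is (nearly) parallel to the plane
    return dot(planeNormal, planePoint - rayOrigin) / denom;
}
With the ray origin at (0, 0, 0) this collapses to the dot(n, lightFragmentCoord.xyz) / dot(n, lsGridLineDir) expression used above.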

Getting World Position from Depth Buffer Value

I've been working on a deferred renderer to do lighting with, and it works quite well, albeit using a position buffer in my G-buffer. Lighting is done in world space.
I have tried to implement an algorithm to recreate the world space positions from the depth buffer and the texture coordinates, but with no luck.
My vertex shader is nothing particularly special, but this is the part of my fragment shader in which I (attempt to) calculate the world space position:
// Inverse projection matrix
uniform mat4 projMatrixInv;
// Inverse view matrix
uniform mat4 viewMatrixInv;
// texture position from vertex shader
in vec2 TexCoord;
... other uniforms ...
void main() {
    // Recalculate the fragment position from the depth buffer
    float Depth = texture(gDepth, TexCoord).x;
    vec3 FragWorldPos = WorldPosFromDepth(Depth);
    ... fun lighting code ...
}
// Linearizes a Z buffer value
float CalcLinearZ(float depth) {
    const float zFar = 100.0;
    const float zNear = 0.1;
    // bias it from [0, 1] to [-1, 1]
    float linear = zNear / (zFar - depth * (zFar - zNear)) * zFar;
    return (linear * 2.0) - 1.0;
}
// this is supposed to get the world position from the depth buffer
vec3 WorldPosFromDepth(float depth) {
    float ViewZ = CalcLinearZ(depth);
    // Get clip space
    vec4 clipSpacePosition = vec4(TexCoord * 2.0 - 1.0, ViewZ, 1.0);
    // Clip space -> View space
    vec4 viewSpacePosition = projMatrixInv * clipSpacePosition;
    // Perspective division
    viewSpacePosition /= viewSpacePosition.w;
    // View space -> World space
    vec4 worldSpacePosition = viewMatrixInv * viewSpacePosition;
    return worldSpacePosition.xyz;
}
I still have my position buffer, and I sample it to compare it against the calculated position later, so everything should be black:
vec3 actualPosition = texture(gPosition, TexCoord).rgb;
vec3 difference = abs(FragWorldPos - actualPosition);
FragColour = vec4(difference, 0.0);
However, what I get is nowhere near the expected result, and of course, lighting doesn't work:
(Try to ignore the blur around the boxes, I was messing around with something else at the time.)
What could cause these issues, and how could I get the position reconstruction from depth working successfully? Thanks.
You are on the right track, but you have not applied the transformations in the correct order.
A quick recap of what you need to accomplish here might help:
1. Given texture coordinates in [0,1] and depth in [0,1], calculate the clip-space position
   - Do not linearize the depth buffer
   - Output: w = 1.0 and x, y, z in [-w, w]
2. Transform from clip-space to view-space (reverse projection)
   - Use the inverse projection matrix
   - Perform the perspective divide
3. Transform from view-space to world-space (reverse viewing transform)
   - Use the inverse view matrix
The following changes should accomplish that:
// this is supposed to get the world position from the depth buffer
vec3 WorldPosFromDepth(float depth) {
    float z = depth * 2.0 - 1.0;
    vec4 clipSpacePosition = vec4(TexCoord * 2.0 - 1.0, z, 1.0);
    vec4 viewSpacePosition = projMatrixInv * clipSpacePosition;
    // Perspective division
    viewSpacePosition /= viewSpacePosition.w;
    vec4 worldSpacePosition = viewMatrixInv * viewSpacePosition;
    return worldSpacePosition.xyz;
}
I would consider changing the name of CalcViewZ (...) though; that name is very misleading. Consider calling it something more appropriate like CalcLinearZ (...).
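As an aside, if a linearized depth is still wanted somewhere (for example to visualize the depth buffer), a minimal sketch of such a function, assuming a standard perspective projection and the question's near/far values, would be:
// linear eye-space depth from a [0, 1] depth buffer value
float CalcLinearZ(float depth) {
    const float zNear = 0.1;   // taken from the question
    const float zFar  = 100.0; // taken from the question
    float ndcZ = depth * 2.0 - 1.0; // [0, 1] -> [-1, 1]
    return (2.0 * zNear * zFar) / (zFar + zNear - ndcZ * (zFar - zNear));
}
Just don't feed that linearized value into the reconstruction above; as noted in step 1, the raw depth buffer value is what belongs in the clip-space position.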

Texture Warping Shader: Polar to Rectangular Coordinates

I am writing a 2D game using OpenGL and I have planned a shadow casting algorithm which needs a transformation of a texture from Polar Coordinates to Rectangular Coordinates. The desired effect is the following:
From this:
To this:
I know the formulas for converting coordinates between both Polar and Rectangular systems but I am having problems on writing the shader to achieve the desired effect.
My shader receives a texture as an input and should draw the warped texture to the screen. I planned the following (knowing that the fragment shader acts upon one fragment at a time):
1. Find the coordinates of the current fragment using gl_FragCoord.xy
2. Determine the r and theta that correspond to the point (x, y)
3. Transform r and theta into texture_x and texture_y (which will be used to sample the texture)
4. Transfer the sampled pixel to the current fragment
My final result is the same input texture rotated 90 degrees clockwise. I think that I'm missing something in step 3. I might just be getting back the same x and y of the current fragment, because I'm simply applying both the transform and the inverse transform formulas.
How should I proceed to get the expected result?
Here is my shader:
#version 120
uniform sampler2D tex;
void main() {
    // shift the coordinates so that (0, 0) is in the center of the screen (the final texture is 256 * 256)
    vec2 fragCoords = gl_FragCoord.xy - vec2(128, 128);
    fragCoords /= vec2(256, 256);
    float r = sqrt(pow(fragCoords.x, 2.0) + pow(fragCoords.y, 2.0));
    float theta = atan(fragCoords.y, fragCoords.x);
    if (fragCoords.y / fragCoords.x <= 0.5 && fragCoords.y / fragCoords.x >= -0.5) {
        r *= 1.0 / (256.0 * sin(theta));
    } else {
        r *= 1.0 / (0.5 * 256.0 * cos(theta));
    }
    vec2 texCoords = vec2(r, theta);
    vec4 texFrag = texture2D(tex, texCoords);
    gl_FragColor = texFrag * vec4(1.0, 0.0, 0.0, 1.0);
}
In your shader you're first translating into polar coordinates
float r = sqrt(pow(fragCoords.x, 2.0) + pow(fragCoords.y, 2.0));
float theta = atan(fragCoords.y, fragCoords.x);
and then you're translating them back into Cartesian coordinates
float tX = r * sin(theta);
float tY = r * cos(theta);
You want to stay in polar coordinates, so just plug r and theta into the texture coordinates
vec2 texCoords = vec2(r , theta);
vec4 texFrag = texture2D(tex, texCoords);
However, by the looks of the images you pasted, there's some renormalization step involved, so that (r, theta) will cover a rectangular area. If I'm not entirely mistaken, r is scaled by the distance it takes a ray from the center-bottom to intersect with the rectangular area. If we assume theta = 0 to be straight up, then for the range [-atan(0.5) … atan(0.5)] it's scaled by 1/(height*sin(theta)), and outside that range by 1/(0.5*width*cos(theta)).
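For completeness, here is a minimal sketch of the "stay in polar coordinates" idea without the extra renormalization step; the 256x256 size and the centering are taken from the question, and the remap of theta into [0, 1] is an assumption so that the pair is valid for texture2D:
#version 120
uniform sampler2D tex;
void main() {
    // shift so (0, 0) is the center of the 256 * 256 target, then scale to [-1, 1]
    vec2 fragCoords = (gl_FragCoord.xy - vec2(128.0, 128.0)) / vec2(128.0, 128.0);
    float r = length(fragCoords);                    // radius, 0 at the center
    float theta = atan(fragCoords.y, fragCoords.x);  // angle in [-pi, pi]
    // use (r, theta) directly as texture coordinates, remapped into [0, 1]
    vec2 texCoords = vec2(r, theta / (2.0 * 3.14159265) + 0.5);
    gl_FragColor = texture2D(tex, texCoords);
}
The per-angle scaling described above can then be applied to r before sampling if the rectangular coverage is needed.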

Why does this glsl shader not respect the depth index?

This is a vertex shader I'm currently working with:
attribute vec3 v_pos;
attribute vec4 v_color;
attribute vec2 v_uv;
attribute vec3 v_rotation; // [angle, x, y]
uniform mat4 modelview_mat;
uniform mat4 projection_mat;
varying vec4 frag_color;
varying vec2 uv_vec;
varying mat4 v_rotationMatrix;
void main (void) {
    float cos = cos(v_rotation[0]);
    float sin = sin(v_rotation[0]);
    mat2 trans_rotate = mat2(
        cos, -sin,
        sin, cos
    );
    vec2 rotated = trans_rotate * vec2(v_pos[0] - v_rotation[1], v_pos[1] - v_rotation[2]);
    gl_Position = projection_mat * modelview_mat * vec4(rotated[0] + v_rotation[1], rotated[1] + v_rotation[2], 1.0, 1.0);
    gl_Position[2] = 1.0 - v_pos[2] / 100.0; // Arbitrary maximum depth for this shader.
    frag_color = vec4(gl_Position[2], 0.0, 1.0, 1.0); // <----------- !!
    uv_vec = v_uv;
}
and the fragment:
varying vec4 frag_color;
varying vec2 uv_vec;
uniform sampler2D tex;
void main (void){
    vec4 color = texture2D(tex, uv_vec) * frag_color;
    gl_FragColor = color;
}
Notice how I'm manually setting the Z component of the gl_Position variable to a bounded value in the range 0.0 -> 1.0 (the upper bound is enforced in code; safe to say, no vertex has a z value < 0 or > 100).
It works... mostly. The problem is that when I render it, I get this:
That's not the correct depth sorting for these elements, which have z values of 15, 50 and 80 respectively, as you can see from the red value of each sprite.
The correct render order would be blue on top, purple in the middle, and pink at the bottom; but instead these sprites are being drawn in submission order.
ie. They are being drawn via:
glDrawArrays() <--- Pink, first geometry batch
glDrawArrays() <--- Blue, second geometry batch
glDrawArrays() <--- Purple, third geometry batch
What's going on?
Surely it's irrelevant how many times I call GL draw functions before flushing; the depth testing should sort this all out, right?
Do you have to manually invoke depth testing inside the fragment shader somehow?
You say you're normalizing the output Z value into the range: 0.0 - 1.0?
It should really be the range -W to +W. Given an orthographic projection, this means the clip-space Z should range from -1.0 to +1.0. You are only using half of your depth range, which reduces the resolving capability of the depth buffer significantly.
To make matters worse (and I am pretty sure this is where your actual problem comes from) it looks like you are inverting your depth buffer by giving the farthest points a value of 0.0 and the nearest points 1.0. In actuality, -1.0 corresponds to the near plane and 1.0 corresponds to the far plane in OpenGL.
gl_Position[2] = 1.0 - v_pos[2] / 100.0;
~~~~~~~~~~~~~~
// This needs to be changed, your depth is completely backwards.
gl_Position.z = 2.0 * (v_pos.z / 100.0) - 1.0;
~~~~~~~~~~~~~
// This should fix both the direction, and use the full depth range.
However, it is worth mentioning that the value of gl_Position[2] or gl_Position.z now ranges from -1.0 to 1.0, which means it cannot be used as a visible color without some scaling and biasing:
frag_color = vec4 (gl_Position.z * 0.5 + 0.5, 0.0, 1.0, 1.0); // <----------- !!
On a final note, I have been discussing Normalized Device Coordinates this entire time, not window coordinates. In window coordinates the default depth range is 0.0 = near, 1.0 = far; this may have been the source of some confusion. Understand that window coordinates (gl_FragCoord) are not pertinent to the calculations in the vertex shader.
You can use this in your fragment shader to test if your depth range is setup correctly:
vec4 color = texture2D(tex, uv_vec) * vec4 (gl_FragCoord.z, frag_color.yzw);
It should produce the same results as:
vec4 color = texture2D(tex, uv_vec) * frag_color;
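Put together, a minimal sketch of how the end of the question's vertex shader might look with both fixes applied (this only restates the two corrected lines above in context):
gl_Position = projection_mat * modelview_mat * vec4(rotated[0] + v_rotation[1], rotated[1] + v_rotation[2], 1.0, 1.0);
gl_Position.z = 2.0 * (v_pos.z / 100.0) - 1.0;               // full [-1, 1] range, near = -1.0
frag_color = vec4(gl_Position.z * 0.5 + 0.5, 0.0, 1.0, 1.0); // scaled and biased for display
uv_vec = v_uv;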

draw the depth value in opengl using shaders

I want to draw the depth buffer in the fragment shader. I do this:
Vertex shader:
varying vec4 position_;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
position_ = gl_ModelViewProjectionMatrix * gl_Vertex;
Fragment shader:
float depth = ((position_.z / position_.w) + 1.0) * 0.5;
gl_FragColor = vec4(depth, depth, depth, 1.0);
But all I get is white. What am I doing wrong?
In what space do you want to draw the depth? If you want to draw the window-space depth, you can do this:
gl_FragColor = vec4(gl_FragCoord.z);
However, this will not be particularly useful, since most of the numbers will be very close to 1.0. Only extremely close objects will be visible. This is the nature of the distribution of depth values for a depth buffer using a standard perspective projection.
Or, to put it another way, that's why you're getting white.
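To put some (made-up but typical) numbers on it: with a near plane of 0.1 and a far plane of 100.0, a fragment only 1 unit from the camera already ends up with gl_FragCoord.z ≈ 0.90, and one 10 units away lands at ≈ 0.99. Almost the entire scene is squeezed into the last few percent of the depth range, which on screen is indistinguishable from white.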
If you want these values in a linear space, you will need to do something like the following:
float ndcDepth =
    (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) /
    (gl_DepthRange.far - gl_DepthRange.near);
float clipDepth = ndcDepth / gl_FragCoord.w;
gl_FragColor = vec4((clipDepth * 0.5) + 0.5);
Indeed, the "depth" value of a fragment can be read from it's z value in clip space (that is, after all matrix transformations). That much is correct.
However, your problem is in the division by w.
Division by w is called perspective divide. Yes, it is necessary for perspective projection to work correctly.
However, division by w in this case "bunches up" all your values (as you have seen) so that they are very close to 1.0. There is a good reason for this: in a perspective projection, w = (some multiplier) * z. That is, you are dividing the z value (whatever it was computed to be) by (some factor of) the original z. No wonder you always get values near 1.0; you're almost dividing z by itself.
As a very simple fix for this, try dividing z just by the farPlane, and send that to the fragment shader as depth.
Vertex shader
varying float DEPTH;
uniform float FARPLANE; // send this in as a uniform to the shader
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
DEPTH = gl_Position.z / FARPLANE; // do not divide by w
Fragment shader:
varying float DEPTH;
// far things appear white, near things black
gl_FragColor = vec4(DEPTH, DEPTH, DEPTH, 1.0);
The result is a not-bad, very linear-looking fade.