I have a shader which adds lighting to an otherwise 2D scene (lights are slightly above the 2D plane). In my fragment shader, I loop through each light to calculate the direction and distance by applying my ortho matrix to the light's pos.
The problem is, the light's "radius" is affected by the size and aspect ratio of my window. I thought that translating the coordinates using the ortho matrix would ensure that the screen size wouldn't matter, but a wide window produces an oval light, and smaller windows produce smaller ovals than larger windows. Should I be using another matrix of some sort?
Full shader here (change window size to see the unwanted effect): http://glsl.heroku.com/e#14464.0
uniform vec2 resolution;

void main(void)
{
    // orthographic matrix
    mat4 ortho_matrix = mat4(
        2.0/resolution.x, 0, 0, 0,
        0, 2.0/-resolution.y, 0, 0,
        0, 0, -1, 0,
        -1, 1, 0, 1
    );

    // surface normal of the 2D plane (looking straight down)
    vec3 surface_normal = vec3(0.0, 0.0, 1.0);

    // screen position of the light
    vec2 light_screen_pos = vec2(650, 150);

    // translate the light's position to normalized coordinates
    // the z value makes sure it is slightly above the 2D plane
    vec4 light_ortho_pos = ortho_matrix * vec4(light_screen_pos, -0.03, 1.0);

    // calculate the light for this fragment
    vec3 light_direction = light_ortho_pos.xyz - vec3(gl_FragCoord.x / resolution.x, gl_FragCoord.y / resolution.y, 0);
    float dist = length(light_direction);
    light_direction = normalize(light_direction);

    vec3 light = clamp(dot(surface_normal, light_direction), 0.0, 1.0) * vec3(0.5, 0.5, 0.5);
    vec3 cel_light = step(0.15, (light.r + light.g + light.b) / 3.0) * light;

    gl_FragColor = vec4(pow(light + cel_light, vec3(0.4545)), 1.0);
}
Note: I know it's not optimal to do this calculation per light and per pixel - I should probably be passing the light's position in as a uniform.
The light direction needs to be scaled according to screen resolution. I ended up adding the following code to make it work, with an arbitrary brightness of 500 or so:
light_direction *= vec3(resolution.x / brightness, resolution.y / brightness, 1.0);
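In context, the corrected part of the fragment shader might look roughly like this (just a sketch; brightness is the arbitrary constant of around 500 mentioned above and could also be a uniform, and the scaling is applied before the length/normalize step):

vec3 light_direction = light_ortho_pos.xyz
                     - vec3(gl_FragCoord.x / resolution.x, gl_FragCoord.y / resolution.y, 0.0);

// undo the per-axis normalization so one unit covers the same number of
// pixels in x and y; "brightness" (~500.0) controls the size of the falloff
float brightness = 500.0; // assumed constant, could also be a uniform
light_direction *= vec3(resolution.x / brightness, resolution.y / brightness, 1.0);

float dist = length(light_direction);
light_direction = normalize(light_direction);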
I want to move from basic shadow mapping on to adaptive biased shadow mapping.
I found a paper which describes how to do it, but I am not sure how to achieve a certain step in the process:
The idea is to have a plane P (defined by the current fragment's surface normal in the fragment shader stage) and the world space position of the fragment (F1 in the picture above).
In order to calculate the correct bias (to fight shadow acne) I need to find the world space position of F2, which I get by shooting a ray from the light source through the center of the shadow map texel. This ray eventually hits the plane P, which yields the needed point F2.
With F1 and F2 known, I can then calculate the distance between F1 and F2 along the light ray (I guess) and thus get the ideal bias to fight shadow acne.
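In GLSL terms, finding F2 comes down to a standard ray/plane intersection; here is a minimal sketch (the function and parameter names are placeholders of my own, not something taken from the paper or my shader):

// L  = light position, dir = normalized ray direction through the texel center
// F1 = fragment's world space position, N = the fragment's surface normal (plane P)
vec3 intersectRayPlane(vec3 L, vec3 dir, vec3 F1, vec3 N)
{
    float t = dot(N, F1 - L) / dot(N, dir); // distance along the ray to the plane
    return L + t * dir;                     // = F2
}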
Right now my basic shader code looks like this:
Vertex shader:
in vec3 aLocalObjectPos;
out vec4 vShadowCoord;
out vec3 vF1;

uniform mat4 viewProjShadowMap; // light's view-projection matrix
uniform mat4 modelMatrix;       // model-to-world matrix

// to shift the coordinates from [-1;1] to [0;1]
const mat4 biasMatrix = mat4(
    0.5, 0.0, 0.0, 0.0,
    0.0, 0.5, 0.0, 0.0,
    0.0, 0.0, 0.5, 0.0,
    0.5, 0.5, 0.5, 1.0
);

void main()
{
    // get the vertex position in the light's view space:
    vShadowCoord = (biasMatrix * viewProjShadowMap * modelMatrix) * vec4(aLocalObjectPos, 1.0);
    vF1 = (modelMatrix * vec4(aLocalObjectPos, 1.0)).xyz;
}
Helper method in fragment shader:
uniform sampler2DShadow uTextureShadowMap;

in vec4 vShadowCoord;

float calculateShadow(float bias)
{
    // fragment shader inputs can't be written to, so apply the bias to a local copy
    vec4 shadowCoord = vShadowCoord;
    shadowCoord.z -= bias;
    return textureProjOffset(uTextureShadowMap, shadowCoord, ivec2(0, 0));
}
My problem now is:
How do I get the light ray that goes from the light source through the shadow map's texel center?
I already found this topic: Adaptive Depth Bias for Shadow Maps Ray Casting
Unfortunately there is no answer and I don't quite get all the things the author is talking about :-/
So, I think I have figured it out myself. I followed the directions in this paper:
http://cwyman.org/papers/i3d14_adaptiveBias.pdf
Vertex Shader (not much going on there):
const mat4 biasMatrix = mat4(
    0.5, 0.0, 0.0, 0.0,
    0.0, 0.5, 0.0, 0.0,
    0.0, 0.0, 0.5, 0.0,
    0.5, 0.5, 0.5, 1.0
);

in vec4 aPosition; // vertex in model's local space (not modified in any way)

uniform mat4 uVPShadowMap; // light's view-projection matrix
uniform mat4 uModelMatrix; // model-to-world matrix

out vec4 vShadowCoord;

void main()
{
    // ...
    vShadowCoord = (biasMatrix * uVPShadowMap * uModelMatrix) * aPosition;
    // ...
}
Fragment Shader:
#version 450

in vec3 vFragmentWorldSpace; // fragment position in world space
in vec4 vShadowCoord;        // texture coordinates for shadow map lookup (see vertex shader)

uniform sampler2DShadow uTextureShadowMap;
uniform vec4 uLightPosition;  // light's position in world space
uniform vec2 uLightNearFar;   // light's zNear and zFar values
uniform float uK;             // variable offset factor to tweak the computed bias a little bit
uniform mat4 uVPShadowMap;    // light's view-projection matrix

const vec4 corners[2] = vec4[]( // frustum diagonal points in light's view space normalized [-1;+1]
    vec4(-1.0, -1.0, -1.0, 1.0), // left bottom near
    vec4( 1.0,  1.0,  1.0, 1.0)  // right top far
);

float calculateShadowIntensity(vec3 fragmentNormal)
{
    // get fragment's position in light space:
    vec4 fragmentLightSpace = uVPShadowMap * vec4(vFragmentWorldSpace, 1.0);
    vec3 fragmentLightSpaceNormalized = fragmentLightSpace.xyz / fragmentLightSpace.w;              // range [-1;+1]
    vec3 fragmentLightSpaceNormalizedUV = fragmentLightSpaceNormalized * 0.5 + vec3(0.5, 0.5, 0.5); // range [ 0; 1]

    // get shadow map's texture size:
    ivec2 textureDimensions = textureSize(uTextureShadowMap, 0);
    vec2 delta = vec2(textureDimensions.x, textureDimensions.y);

    // get width of every texel:
    vec2 textureStep = vec2(1.0 / textureDimensions.x, 1.0 / textureDimensions.y);

    // get the UV coordinates of the texel center:
    vec2 fragmentLightSpaceUVScaled = fragmentLightSpaceNormalizedUV.xy * delta;
    vec2 texelCenterUV = floor(fragmentLightSpaceUVScaled) * textureStep + textureStep / 2;

    // convert the texel center in light space back to the range [-1;+1]:
    vec2 texelCenterLightSpaceNormalized = 2.0 * texelCenterUV - vec2(1.0, 1.0);

    // recreate the light ray in world space:
    vec4 recreatedVec4 = vec4(texelCenterLightSpaceNormalized.x, texelCenterLightSpaceNormalized.y, -uLightNearFar.x, 1.0);
    mat4 vpShadowMapInversed = inverse(uVPShadowMap);
    vec4 texelCenterWorldSpace = vpShadowMapInversed * recreatedVec4;
    vec3 lightRayNormalized = normalize(texelCenterWorldSpace.xyz - uLightPosition.xyz);

    // compute scene scale for epsilon computation:
    vec4 frustum1 = vpShadowMapInversed * corners[0];
    frustum1 = frustum1 / frustum1.w;
    vec4 frustum2 = vpShadowMapInversed * corners[1];
    frustum2 = frustum2 / frustum2.w;

    float ln = uLightNearFar.x;
    float lf = uLightNearFar.y;

    // compute light ray intersection with the fragment plane:
    float dotLightRayfragmentNormal = dot(fragmentNormal, lightRayNormalized);
    float d = dot(fragmentNormal, vFragmentWorldSpace);
    float x = (d - dot(fragmentNormal, uLightPosition.xyz)) / dotLightRayfragmentNormal;
    vec4 intersectionWorldSpace = vec4(uLightPosition.xyz + lightRayNormalized * x, 1.0);

    // compute bias:
    vec4 texelInLightSpace = uVPShadowMap * intersectionWorldSpace;
    float intersectionDepthTexelCenterUV = (texelInLightSpace.z / texelInLightSpace.w) / 2.0 + 0.5;
    float fragmentDepthLightSpaceUV = fragmentLightSpaceNormalizedUV.z;
    float bias = intersectionDepthTexelCenterUV - fragmentDepthLightSpaceUV;

    float depthCompressionResult = pow(lf - fragmentLightSpaceNormalizedUV.z * (lf - ln), 2.0) / (lf * ln * (lf - ln));
    float epsilon = depthCompressionResult * length(frustum1.xyz - frustum2.xyz) * uK;
    bias += epsilon;

    vec4 shadowCoord = vShadowCoord;
    shadowCoord.z -= bias;
    float shadowValue = textureProj(uTextureShadowMap, shadowCoord);
    return max(shadowValue, 0.0);
}
Please note that this is a very verbose method (you could optimise several steps, I know) to better explain what I did to make it work.
All my shadow casting lights use perspective projection.
I tested the results on the CPU side in a separate project (just C# with the math structs from the OpenTK package) and they seem reasonable. I used several light positions, texture sizes, etc. The bias values looked OK in all my tests. Of course, this is no proof, but I have a good feeling about this.
In the end:
The benefits were very small. The visual results are good (especially for shadow maps with >= 2048 samples per dimension) but I still had to tweak the offset value (uniform float uK in the fragment shader) for each of my scenes. I found values from 0.01 to 0.03 to deliver usable results.
I lost about 10% performance (fps-wise) compared to my previous approach (slope-scaled bias) and gained maybe 1% of visual fidelity when it comes to shadows (acne, peter panning). The 1% is not measured - only felt by me :-)
I wanted this approach to be the "one-solution-to-all-problems", but I guess there is no "fire-and-forget" solution when it comes to shadow mapping ;-/
I'm programming a GUI library in OpenGL and decided to add rounded corners because I feel like it gives a much more professional look to the units.
I've implemented the common
length(max(abs(p) - b, 0.0)) - radius
method and it almost works perfectly, except for the fact that the corners seem as though they are stretched:
My fragment shader:
in vec2 passTexCoords;

out vec4 fragment;

uniform vec4 color;
uniform int width;
uniform int height;
uniform int radius;

void main() {
    fragment = color;
    vec2 pos = (abs(passTexCoords - 0.5) + 0.5) * vec2(width, height);
    float alpha = 1.0 - clamp(length(max(pos - (vec2(width, height) - radius), 0.0)) - radius, 0.0, 1.0);
    fragment.a = alpha;
}
The stretching does make sense to me, but when I replace it with
vec2 pos = (abs(passTexCoords - 0.5) + 0.5) * vec2(width, height) * vec2(scaleX, scaleY);
and
float alpha = 1.0 - clamp(length(max(pos - (vec2(width, height) * vec2(scaleX, scaleY) - radius), 0.0)) - radius, 0.0, 1.0);
(where scaleX and scaleY are scalars between 0.0 and 1.0 that represent the width and height of the rectangle relative to the screen) the rectangle almost completely disappears:
The problem is that the distances are not scaled into screen space, so they get stretched along the larger window axis. You can fix this by multiplying the normalized position by the aspect ratio of the screen, along with the other parameters for the box. I wrote an example on Shadertoy that does this:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Input info
    vec2 boxPos;  // The position of the center of the box (in normalized coordinates)
    vec2 boxBnd;  // The half-bounds (radii) of the box (in normalized coordinates)
    float radius; // Radius

    boxPos = vec2(0.5, 0.5);   // center of the screen
    boxBnd = vec2(0.25, 0.25); // half of the area
    radius = 0.1;

    // Normalize the pixel coordinates (this is "passTexCoords" in your case)
    vec2 uv = fragCoord/iResolution.xy;
    // (Note: iResolution.xy holds the x and y dimensions of the window in pixels)
    vec2 aspectRatio = vec2(iResolution.x/iResolution.y, 1.0);

    // In order to make sure visual distances are preserved, we multiply everything by aspectRatio
    uv *= aspectRatio;
    boxPos *= aspectRatio;
    boxBnd *= aspectRatio;

    // Time varying pixel color
    vec3 col = 0.5 + 0.5*cos(iTime+uv.xyx+vec3(0,2,4));

    // Output to screen
    float alpha = length(max(abs(uv - boxPos) - boxBnd, 0.0)) - radius;

    // Shadertoy doesn't have an alpha in this case
    if(alpha <= 0.0){
        fragColor = vec4(col,1.0);
    }else{
        fragColor = vec4(0.0, 0.0, 0.0, 1.0);
    }
}
There may be a less computationally expensive way to do this, but this was a simple solution I cooked up.
I assume that passTexCoords is a texture coordinate in the range [0, 1], that width and height are the size of the screen, and that scaleX and scaleY are the ratio of the green area to the size of the screen.
Calculate the absolute position (pos) of the current fragment in relation to the center of the green area in pixel units:
vec2 pos = (abs(passTexCoords - 0.5) + 0.5) * vec2(width*scaleX, height*scaleY);
Calculate the distance from the center point of the arc to the current fragment:
vec2 arc_cpt_vec = max(pos - vec2(width*scaleX, height*scaleY) + radius, 0.0);
If the length of the vector is greater than the radius, then the fragment has to be skipped:
float alpha = length(arc_cpt_vec) > radius ? 0.0 : 1.0;
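Putting the three steps together, the fragment shader could look roughly like this (just a sketch; I'm assuming scaleX and scaleY are passed in as uniforms, and I'm reusing the fragment output from the question):

in vec2 passTexCoords;

out vec4 fragment;

uniform vec4  color;
uniform int   width;   // screen width in pixels
uniform int   height;  // screen height in pixels
uniform int   radius;  // corner radius in pixels
uniform float scaleX;  // rectangle width relative to the screen, [0, 1]
uniform float scaleY;  // rectangle height relative to the screen, [0, 1]

void main() {
    fragment = color;

    // fragment position relative to the center of the green area, in pixels
    vec2 pos = (abs(passTexCoords - 0.5) + 0.5) * vec2(width*scaleX, height*scaleY);

    // vector from the center point of the corner arc to the fragment
    vec2 arc_cpt_vec = max(pos - vec2(width*scaleX, height*scaleY) + radius, 0.0);

    // fragments outside the arc radius are skipped (made transparent)
    fragment.a = length(arc_cpt_vec) > float(radius) ? 0.0 : 1.0;
}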
I want to show some fog / aerial view in my application. But I only want to use the x,y world distance from camera to the model to determine the appearance.
I already managed to get the signed z-distance from camera to the models with this calculation.
The red objects have a positive z-distance to the camera, the blue ones a negative one - in contrast to this implementation, where all values seem to be positive.
Vertex shader:
uniform mat4 u_mvp; // Model-View-Projection-Matrix
uniform mat4 u_mv; // Model-View-Matrix
uniform vec4 u_color; // Object color
attribute vec4 a_pos; // Vertex position
varying vec4 color; // Out color

// Fog
const float density = 0.007;
const float gradient = 1.5;

void main() {
    gl_Position = u_mvp * a_pos;

    // Fog
    float distance = -(u_mv * a_pos).z; // Direct distance from camera

    // 4000 is some invented constant to bring distance to ~[-1,1].
    float visibility = clamp((distance / 4000.0), 0.0, 1.0);
    color = mix(vec4(1.0, 0.0, 0.0, 1.0), u_color, visibility);

    if(distance < 0.0){
        color = vec4(0.0, 0.0, 1.0, 1.0);
    }
}
Fragment shader:
varying vec4 color;

void main() {
    gl_FragColor = color;
}
Why can there be a negative z-value? Or is that common?
How can I calculate the x,y world distance to camera?
If you want to get the distance to the camera in the range [-1, 1], then you can use the clip space coordinate. The clip space coordinate can be transformed to a normalized device coordinate by the perspective divide. The normalized device coordinates (x, y and z) are in the range [-1, 1] and can be transformed to the range [0, 1] with ease:
gl_Position = u_mvp * a_pos; // clip space
vec3 ndc = gl_Position.xyz / gl_Position.w; // NDC in [-1, 1] (by perspective divide)
float depth = ndc.z * 0.5 + 0.5; // depth in [0, 1]
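The x,y world distance asked about in the question is not something the depth above gives directly. One way to get it (just a sketch, assuming the camera's world space position and a separate model matrix are available as uniforms, here named u_camPosWorld and u_model) is to compare world space positions in the vertex shader:

uniform mat4 u_model;       // model-to-world matrix (assumed to be available)
uniform vec3 u_camPosWorld; // camera position in world space (assumed to be available)

// ...
vec3 worldPos = (u_model * a_pos).xyz;
float distXY  = length(worldPos.xy - u_camPosWorld.xy); // horizontal (x,y) distance only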
I'm trying to draw a simple sphere with normal mapping in the fragment shader with GL_POINTS. At present, I simply draw one point on the screen and apply a fragment shader to "spherify" it.
However, I'm having trouble colouring the sphere correctly (or at least I think I am). It seems that I'm calculating the z correctly but when I apply the 'normal' colours to gl_FragColor it just doesn't look quite right (or is this what one would expect from a normal map?). I'm assuming there is some inconsistency between gl_PointCoord and the fragment coord, but I can't quite figure it out.
Vertex shader
precision mediump float;

attribute vec3 position;

void main() {
    gl_PointSize = 500.0;
    gl_Position = vec4(position.xyz, 1.0);
}
Fragment shader
precision mediump float;

void main() {
    float x = gl_PointCoord.x * 2.0 - 1.0;
    float y = gl_PointCoord.y * 2.0 - 1.0;
    float z = sqrt(1.0 - (pow(x, 2.0) + pow(y, 2.0)));
    vec3 position = vec3(x, y, z);
    float mag = dot(position.xy, position.xy);
    if(mag > 1.0) discard;
    vec3 normal = normalize(position);
    gl_FragColor = vec4(normal, 1.0);
}
Actual output:
Expected output:
The color channels are clamped to the range [0, 1]. (0, 0, 0) is black and (1, 1, 1) is completely white.
Since the normal vector is normalized, its components are in the range [-1, 1].
To get the expected result you have to map the normal vector from the range [-1, 1] to [0, 1]:
vec3 normal_col = normalize(position) * 0.5 + 0.5;
gl_FragColor = vec4(normal_col, 1.0);
If you use the absolute value, then a positive and a negative value of the same size get the same color representation. The intensity of the color increases with the magnitude of the value:
vec3 normal_col = abs(normalize(position));
gl_FragColor = vec4(normal_col, 1.0);
First of all, the normal facing the camera [0,0,-1] should map to the RGB values [0.5,0.5,1.0]. You have to rescale things to move those negative values to be between 0 and 1.
Second, the normals of a sphere do not change linearly, but follow a sine wave, so you need some trigonometry here. It makes sense to me to start with the perpendicular normal [0,0,-1] and then rotate that normal by an angle, because that angle is what changes linearly.
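A minimal sketch of that idea, without the external rotation helper (this assumes the rotation angles are simply proportional to the point-coordinate offsets; the y sign may need flipping depending on gl_PointCoord's origin):

precision mediump float;

const float HALF_PI = 1.5707963;

void main() {
    vec2 p = gl_PointCoord.xy * 2.0 - 1.0;  // [-1, 1] across the point sprite
    if (dot(p, p) > 1.0) discard;

    // rotate the camera-facing normal by angles proportional to the offsets
    float ax = p.x * HALF_PI;               // rotation around the y axis
    float ay = p.y * HALF_PI;               // rotation around the x axis
    vec3 normal = vec3(sin(ax) * cos(ay), sin(ay), cos(ax) * cos(ay));

    // remap from [-1, 1] to [0, 1] for display
    gl_FragColor = vec4(normal * 0.5 + 0.5, 1.0);
}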
As a result of playing around this I came up with this:
http://glslsandbox.com/e#50268.3
which uses some rotation function from here: https://github.com/yuichiroharai/glsl-y-rotate
I'm following the tutorial by John Chapman (http://john-chapman-graphics.blogspot.nl/2013/01/ssao-tutorial.html) to implement SSAO in a deferred renderer. The input buffers to the SSAO shaders are:
World-space positions with linearized depth as w-component.
World-space normal vectors
Noise 4x4 texture
I'll first list the complete shader and then briefly walk through the steps:
#version 330 core

in VS_OUT {
    vec2 TexCoords;
} fs_in;

out vec4 fragColor; // user-defined output (gl_FragColor is not available in a 330 core shader)

uniform sampler2D texPosDepth;
uniform sampler2D texNormalSpec;
uniform sampler2D texNoise;

uniform vec3 samples[64];
uniform mat4 projection;
uniform mat4 view;
uniform mat3 viewNormal; // transpose(inverse(mat3(view)))

const vec2 noiseScale = vec2(800.0f/4.0f, 600.0f/4.0f);
const float radius = 5.0;

void main( void )
{
    float linearDepth = texture(texPosDepth, fs_in.TexCoords).w;

    // Fragment's view space position and normal
    vec3 fragPos_World = texture(texPosDepth, fs_in.TexCoords).xyz;
    vec3 origin = vec3(view * vec4(fragPos_World, 1.0));
    vec3 normal = texture(texNormalSpec, fs_in.TexCoords).xyz;
    normal = normalize(normal * 2.0 - 1.0);
    normal = normalize(viewNormal * normal); // Normal from world to view-space

    // Use change-of-basis matrix to reorient sample kernel around origin's normal
    vec3 rvec = texture(texNoise, fs_in.TexCoords * noiseScale).xyz;
    vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
    vec3 bitangent = cross(normal, tangent);
    mat3 tbn = mat3(tangent, bitangent, normal);

    // Loop through the sample kernel
    float occlusion = 0.0;
    for(int i = 0; i < 64; ++i)
    {
        // get sample position
        vec3 sample = tbn * samples[i]; // From tangent to view-space
        sample = sample * radius + origin;

        // project sample position (to get its position on screen/texture)
        vec4 offset = vec4(sample, 1.0);
        offset = projection * offset;
        offset.xy /= offset.w;
        offset.xy = offset.xy * 0.5 + 0.5;

        // get sample depth
        float sampleDepth = texture(texPosDepth, offset.xy).w;

        // range check & accumulate
        // float rangeCheck = abs(origin.z - sampleDepth) < radius ? 1.0 : 0.0;
        occlusion += (sampleDepth <= sample.z ? 1.0 : 0.0);
    }
    occlusion = 1.0 - (occlusion / 64.0f);
    fragColor = vec4(vec3(occlusion), 1.0);
}
The result is not pleasing, though. The occlusion buffer is mostly all white and doesn't show any occlusion. However, if I move really close to an object I can see some weird noise-like results, as you can see below:
This is obviously not correct. I've done a fair share of debugging and believe all the relevant variables are correctly passed around (they all visualize as colors). I do the calculations in view-space.
I'll briefly walk through the steps (and choices) I've taken in case any of you figure something goes wrong in one of the steps.
view-space positions/normals
John Chapman retrieves the view-space position using a view ray and a linearized depth value. Since I use a deferred renderer that already has the world-space positions per fragment, I simply take those and multiply them with the view matrix to get them into view-space.
I take a similar approach for the normal vectors. I take the world-space normal vectors from a buffer texture, transform them to the [-1,1] range and multiply them with transpose(inverse(mat3(..))) of the view matrix.
The view-space position and normals are visualized as below:
This looks correct to me.
Orient hemisphere around normal
The steps to create the tbn matrix are the same as described in John Chapman's tutorial. I create the noise texture as follows:
std::vector<glm::vec3> ssaoNoise;
for (GLuint i = 0; i < noise_size; i++)
{
    glm::vec3 noise(randomFloats(generator) * 2.0 - 1.0, randomFloats(generator) * 2.0 - 1.0, 0.0f);
    noise = glm::normalize(noise);
    ssaoNoise.push_back(noise);
}
...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, 4, 4, 0, GL_RGB, GL_FLOAT, &ssaoNoise[0]);
I can visualize the noise in the fragment shader so that seems to work.
sample depths
I transform all samples from tangent to view-space (the samples are random in [-1,1] on the xy-axes and in [0,1] on the z-axis) and translate them to the fragment's current view-space position (origin).
I then sample from linearized depth buffer (which I visualize below when looking close to an object):
and finally compare the sampled depth values to the current fragment's depth value and add up the occlusion. Note that I do not perform a range check since I don't believe that is the cause of this behavior and I'd rather keep it as minimal as possible for now.
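For reference, with the commented-out range check from the tutorial enabled, the accumulation inside the loop would read:

float sampleDepth = texture(texPosDepth, offset.xy).w;
float rangeCheck  = abs(origin.z - sampleDepth) < radius ? 1.0 : 0.0;
occlusion += (sampleDepth <= sample.z ? 1.0 : 0.0) * rangeCheck;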
I don't know what is causing this behavior. I believe it is somewhere in sampling the depth values. As far as I can tell I am working in the right coordinate system, linearized depth values are in view-space as well and all variables are set somewhat properly.