I have been trying to implement deferred rendering for the past two weeks. I have finally come to the spot lighting pass, which uses the stencil buffer and linearized depth. I hold three framebuffer textures: albedo, normal+depth (X, Y, Z, eye-view linear depth), and the lighting texture. So I draw my light (a sphere) and apply this fragment shader:
void main(void)
{
vec2 texCoord = gl_FragCoord.xy * u_inverseScreenSize.xy;
float linearDepth = texture2D(u_normalDepth, texCoord.st).a;
// vector to far plane
vec3 viewRay = vec3(v_vertex.xy * (-farClip/v_vertex.z), -farClip);
// scale viewRay by linear depth to get view space position
vec3 vertex = viewRay * linearDepth;
vec3 normal = texture2D(u_normalDepth, texCoord.st).xyz*2.0 - 1.0;
vec4 ambient = vec4(0.0, 0.0, 0.0, 1.0);
vec4 diffuse = vec4(0.0, 0.0, 0.0, 1.0);
vec4 specular = vec4(0.0, 0.0, 0.0, 1.0);
vec3 lightDir = lightpos - vertex ;
vec3 R = normalize(reflect(lightDir, normal));
vec3 V = normalize(vertex);
float lambert = max(dot(normal, normalize(lightDir)), 0.0);
if (lambert > 0.0) {
float distance = length(lightDir);
if (distance <= u_lightRadius) {
//CLASSICAL LIGHTING COMPUTATION PART
}
}
vec4 final_color = vec4(ambient + diffuse + specular);
gl_FragColor = vec4(final_color.xyz, 1.0);
}
The variables you need to know: v_vertex is the eye-space position of the vertex (of the sphere), lightpos is the position/center of the light in eye space, and linearDepth is generated in the geometry pass, in eye space.
The problem is that the code fails this check: if (distance <= u_lightRadius). The light is never computed unless I remove the distance check. I am sure that I pass these values correctly; the radius is 170.0 and the light position is only about 40-50 units away from the model. There is definitely something wrong, but I can't find it. I have tried many values for the radius and the other variables.
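Note that viewRay * linearDepth only reconstructs the correct eye-space position if linearDepth is stored as eye depth divided by farClip (i.e. normalized to [0, 1]); otherwise the reconstructed position, and with it the distance, is off by a factor of farClip. A minimal debug version of the shader (same uniforms and varyings as above, names unchanged) that visualizes the radius test instead of the lighting can help narrow this down:
void main(void)
{
    vec2 texCoord = gl_FragCoord.xy * u_inverseScreenSize.xy;
    float linearDepth = texture2D(u_normalDepth, texCoord.st).a;
    vec3 viewRay = vec3(v_vertex.xy * (-farClip / v_vertex.z), -farClip);
    vec3 vertex = viewRay * linearDepth;
    float distance = length(lightpos - vertex);
    // green where the radius test passes, red where it fails
    gl_FragColor = (distance <= u_lightRadius) ? vec4(0.0, 1.0, 0.0, 1.0)
                                               : vec4(1.0, 0.0, 0.0, 1.0);
}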
The problem
I'm trying to make shaders for my game. In the fragment shader, I don't want to set the color of the fragment but add to it. How do I do that? I'm new to this, so sorry for any mistakes.
The code
varying vec3 vN;
varying vec3 v;
varying vec4 color;
#define MAX_LIGHTS 1
void main (void)
{
vec3 N = normalize(vN);
vec4 finalColor = vec4(0.0, 0.0, 0.0, 0.0);
for (int i=0;i<MAX_LIGHTS;i++)
{
vec3 L = normalize(gl_LightSource[i].position.xyz - v);
vec3 E = normalize(-v); // we are in Eye Coordinates, so EyePos is (0,0,0)
vec3 R = normalize(-reflect(L,N));
vec4 Iamb = gl_LightSource[i].ambient;
vec4 Idiff = gl_LightSource[i].diffuse * max(dot(N,L), 0.0);
Idiff = clamp(Idiff, 0.0, 1.0);
vec4 Ispec = gl_LightSource[i].specular * pow(max(dot(R,E),0.0),0.3*gl_FrontMaterial.shininess);
Ispec = clamp(Ispec, 0.0, 1.0);
finalColor += Iamb + Idiff + Ispec;
}
gl_FragColor = color * finalColor ;
}
Thanks in advance!
You cannot read the color of the fragment you're writing to. In simple scenarios (like yours) what you can do is enable blending and set the blend function to achieve additive blending.
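For the simple case this is just a bit of application-side state; a minimal sketch, assuming a plain desktop OpenGL context:
// additive blending: what the shader writes is added to the framebuffer
// (dst = src * 1 + dst * 1)
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glBlendEquation(GL_FUNC_ADD); // the default, shown here only for clarity

// ... draw your geometry, e.g. once per light ...

glDisable(GL_BLEND);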
For more complex logic you'd create a framebuffer object, attach a texture to it, render your input (e.g. the scene) to that texture, then switch to another framebuffer (or the default "screen" one) and render with it; this way you can sample your scene from the texture and add the lighting on top. You can read how to do this in detail on WebGLFundamentals.com.
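If you go the framebuffer route, the setup looks roughly like this (a minimal sketch in plain OpenGL; width/height are placeholders and error checking is omitted):
// texture that will hold the rendered scene
GLuint sceneTex, fbo;
glGenTextures(1, &sceneTex);
glBindTexture(GL_TEXTURE_2D, sceneTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// framebuffer that renders into that texture
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, sceneTex, 0);

// pass 1: render the scene into the texture
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// ... draw scene ...

// pass 2: render to the default framebuffer, sample sceneTex in the shader
// and add the lighting on top of the sampled color
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, sceneTex);
// ... draw a fullscreen quad with the lighting shader ...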
I want to move from basic shadow mapping on to adaptive biased shadow mapping.
I found a paper which describes how to do it, but I am not sure how to achieve a certain step in the process:
The idea is to have a plane P (which is basically just defined by the normal of the current fragment's surface in the fragment shader stage) and the world-space position of the fragment (F1 in the picture above).
In order to calculate the correct bias (to fight shadow acne) I need to find the world-space position of F2, which I can get by shooting a ray from the light source through the center of the shadow map texel. This ray eventually hits the plane P, which yields the needed point F2.
With F1 and F2 known, I can then calculate the distance between F1 and F2 along the light ray (I guess) and thus get the ideal bias to fight shadow acne.
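For completeness: once I have the ray, the intersection itself is simple. The plane through F1 with normal n satisfies n · X = n · F1, and the ray is X = O + t·r with the light position O and the normalized ray direction r, so t = (n · F1 - n · O) / (n · r) and F2 = O + t·r. In GLSL that would be something like this (placeholder names):
// n = surface normal at F1, f1 = fragment position in world space
// o = light position, r = normalized ray direction from the light
float t = (dot(n, f1) - dot(n, o)) / dot(n, r);
vec3 f2 = o + t * r;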
Right now my basic shader code looks like this:
Vertex shader:
in vec3 aLocalObjectPos;
uniform mat4 viewProjShadowMap; // light's view-projection matrix
uniform mat4 modelMatrix;       // model matrix of the rendered object
out vec4 vShadowCoord;
out vec3 vF1;
// to shift the coordinates from [-1;1] to [0;1]
const mat4 biasMatrix = mat4(
0.5, 0.0, 0.0, 0.0,
0.0, 0.5, 0.0, 0.0,
0.0, 0.0, 0.5, 0.0,
0.5, 0.5, 0.5, 1.0
);
void main()
{
// get the vertex position in the light's view space:
vShadowCoord = (biasMatrix * viewProjShadowMap * modelMatrix) * vec4(aLocalObjectPos, 1.0);
vF1 = (modelMatrix * vec4(aLocalObjectPos, 1.0)).xyz;
}
Helper method in fragment shader:
uniform sampler2DShadow uTextureShadowMap;
float calculateShadow(float bias)
{
    // work on a copy, since an "in" varying must not be written to
    vec4 shadowCoord = vShadowCoord;
    shadowCoord.z -= bias;
    return textureProjOffset(uTextureShadowMap, shadowCoord, ivec2(0, 0));
}
My problem now is:
How do I get the light ray that goes from the light source through the shadow map's texel center?
I already found this topic: Adaptive Depth Bias for Shadow Maps Ray Casting
Unfortunately there is no answer and I don't quite get all the things the author is talking about :-/
So, I think I have figured it out myself. I followed the directions in this paper:
http://cwyman.org/papers/i3d14_adaptiveBias.pdf
Vertex Shader (not much going on there):
const mat4 biasMatrix = mat4(
0.5, 0.0, 0.0, 0.0,
0.0, 0.5, 0.0, 0.0,
0.0, 0.0, 0.5, 0.0,
0.5, 0.5, 0.5, 1.0
);
in vec4 aPosition; // vertex in model's local space (not modified in any way)
uniform mat4 uVPShadowMap; // light's view-projection matrix
uniform mat4 uModelMatrix; // model matrix of the rendered object
out vec4 vShadowCoord;
void main()
{
// ...
vShadowCoord = (biasMatrix * uVPShadowMap * uModelMatrix) * aPosition;
// ...
}
Fragment Shader:
#version 450
in vec3 vFragmentWorldSpace; // fragment position in World space
in vec4 vShadowCoord; // texture coordinates for shadow map lookup (see vertex shader)
uniform sampler2DShadow uTextureShadowMap;
uniform vec4 uLightPosition; // Light's position in world space
uniform vec2 uLightNearFar; // Light's zNear and zFar values
uniform float uK; // variable offset factor to tweak the computed bias a little bit
uniform mat4 uVPShadowMap; // light's view-projection matrix
const vec4 corners[2] = vec4[]( // frustum diagonal points in light's view space normalized [-1;+1]
vec4(-1.0, -1.0, -1.0, 1.0), // left bottom near
vec4( 1.0, 1.0, 1.0, 1.0) // right top far
);
float calculateShadowIntensity(vec3 fragmentNormal)
{
// get fragment's position in light space:
vec4 fragmentLightSpace = uVPShadowMap * vec4(vFragmentWorldSpace, 1.0);
vec3 fragmentLightSpaceNormalized = fragmentLightSpace.xyz / fragmentLightSpace.w; // range [-1;+1]
vec3 fragmentLightSpaceNormalizedUV = fragmentLightSpaceNormalized * 0.5 + vec3(0.5, 0.5, 0.5); // range [ 0; 1]
// get shadow map's texture size:
ivec2 textureDimensions = textureSize(uTextureShadowMap, 0);
vec2 delta = vec2(textureDimensions.x, textureDimensions.y);
// get width of every texel:
vec2 textureStep = vec2(1.0 / textureDimensions.x, 1.0 / textureDimensions.y);
// get the UV coordinates of the texel center:
vec2 fragmentLightSpaceUVScaled = fragmentLightSpaceNormalizedUV.xy * delta;
vec2 texelCenterUV = floor(fragmentLightSpaceUVScaled) * textureStep + textureStep / 2;
// convert range for texel center in light space in range [-1;+1]:
vec2 texelCenterLightSpaceNormalized = 2.0 * texelCenterUV - vec2(1.0, 1.0);
// recreate light ray in world space:
vec4 recreatedVec4 = vec4(texelCenterLightSpaceNormalized.x, texelCenterLightSpaceNormalized.y, -uLightNearFar.x, 1.0);
mat4 vpShadowMapInversed = inverse(uVPShadowMap);
vec4 texelCenterWorldSpace = vpShadowMapInversed * recreatedVec4;
vec3 lightRayNormalized = normalize(texelCenterWorldSpace.xyz - uLightPosition.xyz);
// compute scene scale for epsilon computation:
vec4 frustum1 = vpShadowMapInversed * corners[0];
frustum1 = frustum1 / frustum1.w;
vec4 frustum2 = vpShadowMapInversed * corners[1];
frustum2 = frustum2 / frustum2.w;
float ln = uLightNearFar.x;
float lf = uLightNearFar.y;
// compute light ray intersection with fragment plane:
float dotLightRayfragmentNormal = dot(fragmentNormal, lightRayNormalized);
float d = dot(fragmentNormal, vFragmentWorldSpace);
float x = (d - dot(fragmentNormal, uLightPosition.xyz)) / dot(fragmentNormal, lightRayNormalized);
vec4 intersectionWorldSpace = vec4(uLightPosition.xyz + lightRayNormalized * x, 1.0);
// compute bias:
vec4 texelInLightSpace = uVPShadowMap * intersectionWorldSpace;
float intersectionDepthTexelCenterUV = (texelInLightSpace.z / texelInLightSpace.w) / 2.0 + 0.5;
float fragmentDepthLightSpaceUV = fragmentLightSpaceNormalizedUV.z;
float bias = intersectionDepthTexelCenterUV - fragmentDepthLightSpaceUV;
float depthCompressionResult = pow(lf - fragmentLightSpaceNormalizedUV.z * (lf - ln), 2.0) / (lf * ln * (lf - ln));
float epsilon = depthCompressionResult * length(frustum1.xyz - frustum2.xyz) * uK;
bias += epsilon;
vec4 shadowCoord = vShadowCoord;
shadowCoord.z -= bias;
float shadowValue = textureProj(uTextureShadowMap, shadowCoord);
return max(shadowValue, 0.0);
}
Please note that this is a very verbose method (you could optimise several steps, I know) to better explain what I did to make it work.
All my shadow casting lights use perspective projection.
I tested the results on the CPU side in a separate project (just C# with the math structs from the OpenTK package) and they seem reasonable. I used several light positions, texture sizes, etc. The bias values looked OK in all my tests. Of course, this is no proof, but I have a good feeling about this.
In the end:
The benefits were very small. The visual results are good (especially for shadow maps with >= 2048 samples per dimension), but I still had to tweak the offset value (uniform float uK in the fragment shader) for each of my scenes. I found values from 0.01 to 0.03 to deliver usable results.
I lost about 10% performance (fps-wise) compared to my previous approach (slope-scaled bias) and gained maybe 1% of visual fidelity when it comes to shadows (acne, peter panning). The 1% is not measured - only felt by me :-)
I wanted this approach to be the "one-solution-to-all-problems". But I guess, there is no "fire-and-forget" solution when it comes to shadow mapping ;-/
I am currently trying to draw a 2D grid on a single quad using only shaders. I am using SFML as the graphics library and sf::View to control the camera. So far I have been able to draw an anti-aliased, multi-level grid. The first level (blue) outlines a chunk and the second level (grey) outlines the tiles within a chunk.
I would now like to fade grid levels based on the distance from the camera. For example, the chunk grid should fade in as the camera zooms in. The same should be done for the tile grid after the chunk grid has been completely faded in.
I am not sure how this could be implemented as I am still new to OpenGL and GLSL. If anybody has any pointers on how to achieve this, please let me know.
Vertex Shader
#version 130
out vec2 texCoords;
void main() {
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
texCoords = (gl_TextureMatrix[0] * gl_MultiTexCoord0).xy;
}
Fragment Shader
#version 130
uniform vec2 chunkSize = vec2(64.0, 64.0);
uniform vec2 tileSize = vec2(16.0, 16.0);
uniform vec3 chunkBorderColor = vec3(0.0, 0.0, 1.0);
uniform vec3 tileBorderColor = vec3(0.5, 0.5, 0.5);
uniform bool drawGrid = true;
in vec2 texCoords;
void main() {
vec2 uv = texCoords.xy * chunkSize;
vec3 color = vec3(1.0, 1.0, 1.0);
if(drawGrid) {
float aa = length(fwidth(uv));
vec2 halfChunkSize = chunkSize / 2.0;
vec2 halfTileSize = tileSize / 2.0;
vec2 a = abs(mod(uv - halfChunkSize, chunkSize) - halfChunkSize);
vec2 b = abs(mod(uv - halfTileSize, tileSize) - halfTileSize);
color = mix(
color,
tileBorderColor,
smoothstep(aa, .0, min(b.x, b.y))
);
color = mix(
color,
chunkBorderColor,
smoothstep(aa, .0, min(a.x, a.y))
);
}
gl_FragColor.rgb = color;
gl_FragColor.a = 1.0;
}
You need to split the multiplication in your vertex shader into two parts:
// have a variable to be interpolated per fragment
out vec2 vertex_coordinate;
...
{
// this will store the coordinates of the vertex
// before it is projected (i.e. its "world" coordinates)
vec4 world_position = gl_ModelViewMatrix * gl_Vertex;
vertex_coordinate = world_position.xy;
// get your projected vertex position as before
gl_Position = gl_ProjectionMatrix * world_position;
...
}
Then in the fragment shader you change the color based on the world vertex coordinate and the camera position:
in vec2 vertex_coordinate;
// have to update this value, every time your camera changes its position
uniform vec2 camera_world_position = vec2(64.0, 64.0);
...
{
...
// calculate the distance from the fragment in world coordinates to the camera
float fade_factor = length(camera_world_position - vertex_coordinate);
// make it 1 near the camera and 0 if it is more than 100 units away.
fade_factor = clamp(1.0 - fade_factor / 100.0, 0.0, 1.0);
// update your final color with this factor
gl_FragColor.rgb = color * fade_factor;
...
}
The second way to do it is to use the projected coordinate's w. I personally prefer to calculate the distance in world-space units. I did not test this code and it might have some trivial syntax errors, but if you understand the idea, you can apply it in any other way.
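A minimal sketch of that w-based variant (placeholder names; for a perspective projection, clip-space w grows linearly with the view-space distance to the camera, so it can drive the fade directly):
// vertex shader
out vec4 projected_position;
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    projected_position = gl_Position;
}

// fragment shader
in vec4 projected_position;
void main() {
    float fade_factor = clamp(1.0 - projected_position.w / 100.0, 0.0, 1.0);
    gl_FragColor = vec4(vec3(fade_factor), 1.0);
}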
I'm trying to do point-source directional lighting in OpenGL using my textbook's examples. I'm showing a rectangle centered at the origin and doing the lighting computations in the shader. The rectangle appears, but it is black even when I try to put colored lights on it. The normals of the rectangle are all (0, 1.0, 0). I'm not doing any non-uniform scaling, so the regular model-view matrix should also transform the normals.
I have code that sets the light parameters (as uniforms) and the material parameters (also as uniforms) for the shader. There is no per-vertex color information.
void InitMaterial()
{
color material_ambient = color(1.0, 0.0, 1.0);
color material_diffuse = color(1.0, 0.8, 0.0);
color material_specular = color(1.0, 0.8, 0.0);
float material_shininess = 100.0;
// set uniforms for current program
glUniform3fv(glGetUniformLocation(Programs[lightingType], "materialAmbient"), 1, material_ambient);
glUniform3fv(glGetUniformLocation(Programs[lightingType], "materialDiffuse"), 1, material_diffuse);
glUniform3fv(glGetUniformLocation(Programs[lightingType], "materialSpecular"), 1, material_specular);
glUniform1f(glGetUniformLocation(Programs[lightingType], "shininess"), material_shininess);
}
For the lights:
void InitLight()
{
// need light direction and light position
point4 light_position = point4(0.0, 0.0, -1.0, 0.0);
color light_ambient = color(0.2, 0.2, 0.2);
color light_diffuse = color(1.0, 1.0, 1.0);
color light_specular = color(1.0, 1.0, 1.0);
glUniform3fv(glGetUniformLocation(Programs[lightingType], "lightPosition"), 1, light_position);
glUniform3fv(glGetUniformLocation(Programs[lightingType], "lightAmbient"), 1, light_ambient);
glUniform3fv(glGetUniformLocation(Programs[lightingType], "lightDiffuse"), 1, light_diffuse);
glUniform3fv(glGetUniformLocation(Programs[lightingType], "lightSpecular"), 1, light_specular);
}
The fragment shader is a simple pass-through shader that sets the color to the one passed in from the vertex shader. Here is the vertex shader:
#version 150
in vec4 vPosition;
in vec3 vNormal;
out vec4 color;
uniform vec4 materialAmbient, materialDiffuse, materialSpecular;
uniform vec4 lightAmbient, lightDiffuse, lightSpecular;
uniform float shininess;
uniform mat4 modelView;
uniform vec4 lightPosition;
uniform mat4 projection;
void main()
{
// Transform vertex position into eye coordinates
vec3 pos = (modelView * vPosition).xyz;
vec3 L = normalize(lightPosition.xyz - pos);
vec3 E = normalize(-pos);
vec3 H = normalize(L + E);
// Transform vertex normal into eye coordinates
vec3 N = normalize(modelView * vec4(vNormal, 0.0)).xyz;
// Compute terms in the illumination equation
vec4 ambient = materialAmbient * lightAmbient;
float Kd = max(dot(L, N), 0.0);
vec4 diffuse = Kd * materialDiffuse * lightDiffuse;
float Ks = pow(max(dot(N, H), 0.0), shininess);
vec4 specular = Ks * materialSpecular * lightSpecular;
if(dot(L, N) < 0.0) specular = vec4(0.0, 0.0, 0.0, 1.0);
gl_Position = projection * modelView * vPosition;
color = ambient + diffuse + specular;
color.a = 1.0;
}
OK, it's working now. The solution was to replace glUniform3fv with glUniform4fv, I guess because the GLSL counterpart is a vec4 instead of a vec3. I thought it would be able to recognize this and simply append a 1.0, but no.
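For reference, the working calls then look something like this (a sketch; note that the client-side data also has to hold four floats per color, because glUniform4fv reads four components per element):
// material colors as 4-component arrays to match the vec4 uniforms
GLfloat material_ambient[4]  = { 1.0f, 0.0f, 1.0f, 1.0f };
GLfloat material_diffuse[4]  = { 1.0f, 0.8f, 0.0f, 1.0f };
GLfloat material_specular[4] = { 1.0f, 0.8f, 0.0f, 1.0f };

glUniform4fv(glGetUniformLocation(Programs[lightingType], "materialAmbient"),  1, material_ambient);
glUniform4fv(glGetUniformLocation(Programs[lightingType], "materialDiffuse"),  1, material_diffuse);
glUniform4fv(glGetUniformLocation(Programs[lightingType], "materialSpecular"), 1, material_specular);
glUniform1f(glGetUniformLocation(Programs[lightingType], "shininess"), 100.0f);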
I know there are a couple of threads on the net about the same problem, but I haven't gotten help from them because my implementation is different.
I'm rendering colors, normals and depth in view space into textures. In a second pass I bind the textures, draw a fullscreen quad and calculate the lighting. The directional light seems to work fine, but the point lights move with the camera.
I'm sharing the corresponding shader code:
Lighting step vertex shader
in vec2 inVertex;
in vec2 inTexCoord;
out vec2 texCoord;
void main() {
gl_Position = vec4(inVertex, 0, 1.0);
texCoord = inTexCoord;
}
Lighting step fragment shader
float depth = texture2D(depthBuffer, texCoord).r;
vec3 normal = texture2D(normalBuffer, texCoord).rgb;
vec3 color = texture2D(colorBuffer, texCoord).rgb;
vec3 position;
position.z = -nearPlane / (farPlane - (depth * (farPlane - nearPlane))) * farPlane;
position.x = ((gl_FragCoord.x / width) * 2.0) - 1.0;
position.y = (((gl_FragCoord.y / height) * 2.0) - 1.0) * (height / width);
position.x *= -position.z;
position.y *= -position.z;
normal = normalize(normal);
vec3 lightVector = lightPosition.xyz - position;
float dist = length(lightVector);
lightVector = normalize(lightVector);
float nDotL = max(dot(normal, lightVector), 0.0);
vec3 halfVector = normalize(lightVector - position);
float nDotHV = max(dot(normal, halfVector), 0.0);
vec3 lightColor = lightAmbient;
vec3 diffuse = lightDiffuse * nDotL;
vec3 specular = lightSpecular * pow(nDotHV, 1.0) * nDotL;
lightColor += diffuse + specular;
float attenuation = clamp(1.0 / (lightAttenuation.x + lightAttenuation.y * dist + lightAttenuation.z * dist * dist), 0.0, 1.0);
gl_FragColor = vec4(vec3(color * lightColor * attenuation), 1.0);
I send light attribues to shader as uniforms:
shader->set("lightPosition", (viewMatrix * modelMatrix).inverse().transpose() * vec4(0, 10, 0, 1.0));
viewMatrix is the camera matrix and modelMatrix is just the identity here.
Why do the point lights translate with the camera instead of with the models?
Any suggestions are welcome!
In addition to Nobody's comment that all the vectors you compute with have to be normalized, you have to make sure that they are all in the same space. If you use the view-space position as the view vector, the normal vector has to be in view space, too (it has to be transformed by the inverse transpose of the modelview matrix before being written into the G-buffer in the first pass). And the light vector has to be in view space, too. Therefore you have to transform the light position by the view matrix (or the modelview matrix, if the light position is not in world space), instead of by its inverse transpose:
shader->set("lightPosition", viewMatrix * modelMatrix * vec4(0, 10, 0, 1.0));
EDIT: For a directional light the inverse transpose is actually a good idea if you specify the light direction as the direction to the light (like vec4(0, 1, 0, 0) for a light shining in the -y direction).
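For completeness, the geometry pass then writes the view-space normal into the G-buffer, e.g. roughly like this (a sketch with placeholder names, not your actual code):
// geometry pass vertex shader (sketch): bring the normal into view space so it
// lives in the same space as the reconstructed position and the light position
// transformed by the view matrix
#version 130
uniform mat4 uModelViewMatrix;
uniform mat4 uProjectionMatrix;
uniform mat3 uNormalMatrix; // transpose(inverse(mat3(uModelViewMatrix)))
in vec3 inVertex;
in vec3 inNormal;
out vec3 viewSpaceNormal;
void main() {
    viewSpaceNormal = normalize(uNormalMatrix * inNormal);
    gl_Position = uProjectionMatrix * uModelViewMatrix * vec4(inVertex, 1.0);
}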