I'm trying to render dynamic shadows cast from a point light onto some cubes. However, there are no shadows at all on the cubes, particularly on the silver cube:
In my program I use the Qt OpenGL API together with native OpenGL. I do two passes:
1. Create a depth texture (glGenTextures()), create a framebuffer (glGenFramebuffers()) and attach the texture to it (glFramebufferTexture()). Then render all the cubes to that framebuffer.
2. Render the whole scene to the screen as usual, using that texture in the shaders.
I use the QOpenGLShaderProgram class to create the shader programs.
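Concretely, the first pass sets up the depth cube map and framebuffer roughly like the sketch below (the sizes and variable names are simplified placeholders, not my exact code):

// Hedged sketch of the first-pass setup: a depth cube map attached to an FBO.
const int SHADOW_W = 1024, SHADOW_H = 1024;  // placeholder resolution

GLuint depthCubemap = 0, depthFBO = 0;
glGenTextures(1, &depthCubemap);
glBindTexture(GL_TEXTURE_CUBE_MAP, depthCubemap);
for (unsigned int face = 0; face < 6; ++face)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_DEPTH_COMPONENT,
                 SHADOW_W, SHADOW_H, 0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

glGenFramebuffers(1, &depthFBO);
glBindFramebuffer(GL_FRAMEBUFFER, depthFBO);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthCubemap, 0); // layered attachment
glDrawBuffer(GL_NONE);  // depth-only pass, no color output
glReadBuffer(GL_NONE);
glBindFramebuffer(GL_FRAMEBUFFER, 0);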
How I calculate the shadows in the fragment shader:
float isFragInShadow(vec3 fragPos)
{
    // Vector from the light to the fragment: its direction is used to sample the
    // depth cube map, its length is the fragment's current distance from the light.
    vec3 relFragPos = fragPos - lightPos;

    // Closest depth seen by the light in that direction, stored normalized and
    // expanded back to [0, far_plane].
    float closestDepth = texture(depth_map, relFragPos).r;
    closestDepth *= far_plane;

    float currentDepth = length(relFragPos);

    // Slope-scaled bias to reduce shadow acne.
    float bias = max(0.05 * (1.0 - dot(normalize(-relFragPos), Normal)), 0.005);
    float shadow = currentDepth - bias > closestDepth ? 1.0 : 0.0;
    return shadow;
}
...
float isShadow = isFragInShadow(FragPos);
vec3 resColor = ambient * obj_material.ambient + (1.0 - isShadow) * (diffuse * obj_material.diffuse + specular * obj_material.specular);
I also tried to render the depth texture to the screen, and the result is the following:
https://i.stack.imgur.com/qM9a1.png
This is how I render it:
vec3 lightToFrag = FragPos - lightPos;
float closestDepth = texture(depth_map, lightToFrag).r;
Color = vec4(vec3(closestDepth), 1.0);
Why can I not see shadows in the scene?
Related
As you can tell from the title, I'm trying to create a mirror reflection while using deferred rendering and ambient occlusion. For ambient occlusion I'm specifically using the SSAO algorithm.
To create the mirror I use the basic idea of reflecting all the models to the other side of the mirror and then rendering only the parts visible through the mirror.
Using deferred rendering I decided to do this during the creation of the gBuffer. In order to achieve correct lighting of the reflected objects, I made sure that the positions and normals of the reflected objects in the gBuffer are the same as those of their 'non-reflected' versions. That way, both the actual models and their images will receive the same lighting.
My problem is now with the ssao algorithm. It seems that the reflected objects are calculated to be highly occluded and this results in black areas which you can see in the mirror:
I've noticed that these black areas appear only in places that are not in my view. Things that I can see without the mirror have no unexpected black spots on them.
Note that the data in the gBuffer are all in view space. So there must be a connection there. Maybe the random samples used during ssao or their normals are not calculated correctly.
So, this is the fragment shader for the ambient occlusion:
void main()
{
    // Fetch view-space position and normal of the current fragment from the gBuffer.
    vec3 fragPos = texture(gPosition, TexCoords).xyz;
    vec3 normal = texture(gNormal, TexCoords).rgb;
    vec3 randomVec = texture(texNoise, TexCoords * noiseScale).xyz;

    // Build a TBN basis that rotates the hemisphere kernel around the normal.
    vec3 tangent = normalize(randomVec - normal * dot(randomVec, normal));
    vec3 bitangent = cross(normal, tangent);
    mat3 TBN = mat3(tangent, bitangent, normal);

    float occlusion = 0.0;
    int kernelSize = 64;
    for (int i = 0; i < kernelSize; ++i)
    {
        // get sample position
        vec3 sample = TBN * samples[i];   // from tangent to view space
        sample = fragPos + sample * radius;

        vec4 offset = vec4(sample, 1.0);
        offset = projection * offset;         // from view to clip space
        offset.xyz /= offset.w;               // perspective divide
        offset.xyz = offset.xyz * 0.5 + 0.5;

        float sampleDepth = texture(gPosition, offset.xy).z;
        float rangeCheck = smoothstep(0.0, 1.0, radius / abs(fragPos.z - sampleDepth));
        occlusion += (sampleDepth >= sample.z + bias ? 1.0 : 0.0) * rangeCheck;
    }
    occlusion = 1.0 - (occlusion / kernelSize);

    //FragColor = vec4(1,1,1,1);
    occl = vec4(occlusion, occlusion, occlusion, 1.0);
}
Any ideas as to why these black areas appear or suggestions to correct them?
I could just ignore the ambient occlusion in the reflection, but I'm not happy with that.
Maybe, if the ambient occlusion shader used the positions and normals of the reflected objects, there would be no problem. But then I would have to store more data in the gBuffer, so I gave up on that idea for now.
I'm implementing directional shadow mapping in deferred shading.
First, I render a depth map from light view (orthogonal projection).
Result:
I intend to do VSM, so the buffer above is R32G32, storing depth and depth * depth.
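For reference, that render target is created roughly like this (a sketch; GL_RG32F is the sized format I mean by R32G32, and shadowW/shadowH are placeholder names):

// Hedged sketch: two-channel float texture for the depth moments, attached as color output 0.
const int shadowW = 2048, shadowH = 2048;  // placeholder shadow-map resolution
GLuint shadowFBO = 0, momentsTex = 0;

glGenFramebuffers(1, &shadowFBO);
glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);

glGenTextures(1, &momentsTex);
glBindTexture(GL_TEXTURE_2D, momentsTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RG32F, shadowW, shadowH, 0, GL_RG, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, momentsTex, 0);
// a separate depth attachment is still needed so depth testing works during this pass
glBindFramebuffer(GL_FRAMEBUFFER, 0);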
Then, for a full-screen shading pass for the shadow (after the lighting pass), I write the following pixel shader:
#version 330
in vec2 texCoord; // screen coordinate
out vec3 fragColor; // output color on the screen
uniform mat4 lightViewProjMat; // lightView * lightProjection (ortho)
uniform sampler2D sceneTexture; // lit scene with one directional light
uniform sampler2D shadowMapTexture;
uniform sampler2D scenePosTexture; // store fragment's 3D position
void main() {
    vec3 fragPos = texture(scenePosTexture, texCoord).xyz;          // get 3D position of the pixel
    vec4 fragPosLightSpace = lightViewProjMat * vec4(fragPos, 1.0); // project it to light-space view: lightView * lightProjection

    // projective texture mapping
    vec3 coord = fragPosLightSpace.xyz / fragPosLightSpace.w;
    coord = coord * 0.5 + 0.5;

    float lightViewDepth; // depth value in the depth buffer - the maximum depth that the light can see
    float currentDepth;   // depth of the screen pixel, maybe not visible to the light - that's how shadow mapping works
    vec2 moments;         // depth and depth * depth for later variance shadow mapping

    moments = texture(shadowMapTexture, coord.xy).xy;
    lightViewDepth = moments.x;
    currentDepth = fragPosLightSpace.z;

    float lit_factor = 0.0;
    if (currentDepth <= lightViewDepth)
        lit_factor = 1.0; // pixel is visible to the light
    else
        lit_factor = 0.0; // the light doesn't see this pixel

    // I don't do VSM yet, just want to see black or full-color pixels
    fragColor = texture(sceneTexture, texCoord).rgb * lit_factor;
}
The rendered result is a black screen, but if I hard-code lit_factor to 1, the result is:
Basically, that's how the sceneTexture looks.
So I think either my depth value is wrong, which is unlikely, or my projection (light space projection in above shader / projective texture mapping) is wrong. Could you validate it for me?
My shadow map generation code is:
// vertex shader
#version 330 compatibility
uniform mat4 lightViewMat; // lightView
uniform mat4 lightViewProjMat; // lightView * lightProj
in vec3 in_vertex;
out float depth;
void main() {
    vec4 vert = vec4(in_vertex, 1.0);
    depth = (lightViewMat * vert).z / (500 * 0.2); // 500 is far value, this line tunes the depth precision
    gl_Position = lightViewProjMat * vert;
}
// pixel shader
#version 330
in float depth;
out vec2 out_depth;
void main() {
    out_depth = vec2(depth, depth * depth);
}
The z component of the fragment shader built-in variable gl_FragCoord contains the depth value in the range [0.0, 1.0]. This is the value which you should store in the depth map:
out_depth = vec2(gl_FragCoord.z, depth * depth);
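Dropped into the question's depth-pass pixel shader, the change looks like this:

#version 330
// pixel shader of the depth pass, storing the window-space depth as the first moment
in float depth;
out vec2 out_depth;
void main() {
    out_depth = vec2(gl_FragCoord.z, depth * depth);
}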
After the calculation
vec3 fragPos = texture(scenePosTexture, texCoord).xyz; // get 3D position of pixel
vec4 fragPosLightSpace = lightViewProjMat * vec4(fragPos, 1.0); // project it to light-space view: lightView * lightProjection
vec3 ndc_coord = fragPosLightSpace.xyz / fragPosLightSpace.w;
the variable ndc_coord contains a normalized device coordinate, where all components are in range [-1.0, 1.0].
The z component of the normalized device coordinate can be converted to the depth value (if the depth range is [0.0, 1.0]) by
float currentDepth = ndc_coord.z * 0.5 + 0.5;
This value can be compared to the value from the depth map, because currentDepth and lightViewDepth are calculated with the same view and projection matrices:
moments = texture(shadowMapTexture, coord.xy).xy;
lightViewDepth = moments.x;
if (currentDepth <= lightViewDepth)
    lit_factor = 1.0; // pixel is visible to the light
else
    lit_factor = 0.0; // the light doesn't see this pixel
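Putting the pieces together, a minimal sketch of the shading-pass lookup (reusing the names from the question's shader) would be:

vec4 fragPosLightSpace = lightViewProjMat * vec4(fragPos, 1.0);
vec3 ndc_coord = fragPosLightSpace.xyz / fragPosLightSpace.w;  // [-1, 1]
vec2 shadowUV = ndc_coord.xy * 0.5 + 0.5;                      // [0, 1] texture coordinates
float currentDepth = ndc_coord.z * 0.5 + 0.5;                  // [0, 1], comparable to gl_FragCoord.z
float lightViewDepth = texture(shadowMapTexture, shadowUV).x;  // first moment written in the depth pass
float lit_factor = (currentDepth <= lightViewDepth) ? 1.0 : 0.0;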
This is the depth you store in the shadow map:
depth = (lightViewMat * vert).z / (500 * 0.2);
This is the depth you compare the read-back value to:
vec4 fragPosLightSpace = lightViewProjMat * vec4(fragPos, 1.0);
currentDepth = fragPosLightSpace.z;
Assuming fragPos is in world space, vert and fragPos refer to the same point, so the only difference between the two paths is the transform applied to it. You are compressing the stored depth by dividing (lightViewMat * vert).z by 500 * 0.2, but that value is not equal to fragPosLightSpace.z, so the two depths are not comparable.
Hint: write out the value of currentDepth in one channel and the value from the shadow map in another channel; you can then compare them visually, or in RenderDoc or a similar tool.
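For example, temporarily replacing the final color write in the question's shading pass with a debug output (a throwaway sketch):

// red channel = currentDepth, green channel = depth read from the shadow map
fragColor = vec3(currentDepth, lightViewDepth, 0.0);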
I have a full-screen quad with two textures.
I want to blend the two textures into an arbitrary shape according to the user's selection.
For example, at first the quad is 100% texture0 while texture1 is transparent.
If the user selects a region, for example a circle, by dragging the mouse on the quad, then the circle region should display a translucent blend of texture0 and texture1.
The region not enclosed by the circle should still be texture0.
Please see the example image; the textures are simplified as colors.
For now, I have achieved blending two textures on the quad, but the blending region can only be vertical slices because I use the step() function.
My frag shader:
uniform sampler2D Texture0;
uniform sampler2D Texture1;
uniform float alpha;
uniform float leftBlend;
uniform float rightBlend;
varying vec4 oColor;
varying vec2 oTexCoord;
void main()
{
    vec4 first_sample = texture2D(Texture0, oTexCoord);
    vec4 second_sample = texture2D(Texture1, oTexCoord);
    float stepLeft = step(leftBlend, oTexCoord.x);
    float stepRight = step(rightBlend, 1.0 - oTexCoord.x);
    if (stepLeft == 1.0 && stepRight == 1.0)
        gl_FragColor = oColor * first_sample;
    else
        gl_FragColor = oColor * (first_sample * alpha + second_sample * (1.0 - alpha));
    if (gl_FragColor.a < 0.4)
        discard;
}
To achieve an arbitrary shape, I assume I need to create an alpha mask texture which is the same size as texture0 and texture1?
Then I pass that texture to the fragment shader and check its values: where the value is 0, show texture0; where it is 1, blend texture0 and texture1.
Is my approach correct? Can you point me to any samples?
I want an effect such as in OpenGL - mask with multiple textures,
but I want to create the mask texture dynamically in my program, and I want to implement the blending in GLSL.
I have got the blending working with a black-and-white mask texture:
uniform sampler2D TextureMask;

vec4 mask_sample = texture2D(TextureMask, oTexCoord);
if (mask_sample.r == 0.0)
    gl_FragColor = first_sample;
else
    gl_FragColor = first_sample * alpha + second_sample * (1.0 - alpha);
Right now the mask texture is loaded statically from an image on disk; I just need to create the mask texture dynamically in OpenGL.
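A sketch of what I have in mind (all names and sizes are placeholders): keep a CPU-side mask, paint into it during mouse drags, and re-upload it with glTexSubImage2D:

#include <vector>
// GL functions come from the usual loader / QOpenGLFunctions

const int maskW = 512, maskH = 512;                       // placeholder mask resolution
std::vector<unsigned char> maskPixels(maskW * maskH, 0);  // 0 = texture0 only, 255 = blend
GLuint maskTex = 0;

void createMaskTexture() {
    glGenTextures(1, &maskTex);
    glBindTexture(GL_TEXTURE_2D, maskTex);
    // GL_RED needs GL 3.0+; on older contexts GL_LUMINANCE could be used instead.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, maskW, maskH, 0,
                 GL_RED, GL_UNSIGNED_BYTE, maskPixels.data());
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}

// called from the mouse-drag handler: paint a filled circle into the mask and re-upload it
void paintCircle(int cx, int cy, int radius) {
    for (int y = 0; y < maskH; ++y)
        for (int x = 0; x < maskW; ++x)
            if ((x - cx) * (x - cx) + (y - cy) * (y - cy) <= radius * radius)
                maskPixels[y * maskW + x] = 255;
    glBindTexture(GL_TEXTURE_2D, maskTex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, maskW, maskH,
                    GL_RED, GL_UNSIGNED_BYTE, maskPixels.data());
}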
Here's one approach and sample.
Create a boolean test for whether you want to blend.
In my sample, I use an equation for a circle centered on the screen.
Then blend (I blended by a weighted addition of the two colors).
(Note: I didn't have texture coords to work with in this sample, so I used the screen resolution to determine the circle position.)
uniform vec2 resolution;

void main( void ) {
    vec2 position = gl_FragCoord.xy / resolution;

    // test if we're "in" or "out" of the blended region;
    // let's use a circle of radius 0.5, but you can make this more complex and/or pass this value in from the user
    bool isBlended = (position.x - 0.5) * (position.x - 0.5) +
                     (position.y - 0.5) * (position.y - 0.5) > 0.25;

    vec4 color1 = vec4(1, 0, 0, 1); // this could come from texture 1
    vec4 color2 = vec4(0, 1, 0, 1); // this could come from texture 2

    vec4 finalColor;
    if (isBlended)
    {
        // blend
        finalColor = color1 * 0.5 + color2 * 0.5;
    }
    else
    {
        // don't blend
        finalColor = color1;
    }
    gl_FragColor = finalColor;
}
See the sample running here: http://glsl.heroku.com/e#18231.0
(I tried to post my sample image but I don't have enough rep, sorry.)
Update:
Here's another sample using mouse position to determine the position of the blended area.
To run, paste the code in this sandbox site: https://www.shadertoy.com/new
This one should work on objects of any shape, as long as you have the mouse data set up correctly.
void main(void)
{
    vec2 position = gl_FragCoord.xy;

    // test if we're "in" or "out" of the blended region;
    // let's use a circle of radius 10px, but you can make this more complex and/or pass this value in from the user
    float diffX = position.x - iMouse.x;
    float diffY = position.y - iMouse.y;
    bool isBlended = (diffX * diffX) + (diffY * diffY) < 100.0;

    vec4 color1 = vec4(1, 0, 0, 1); // this could come from texture 1
    vec4 color2 = vec4(0, 1, 0, 1); // this could come from texture 2

    vec4 finalColor;
    if (isBlended)
    {
        // blend
        finalColor = color1 * 0.5 + color2 * 0.5;
    }
    else
    {
        // don't blend
        finalColor = color1;
    }
    gl_FragColor = finalColor;
}
I'm currently implementing a deferred rendering pipeline and I'm stuck on shadow mapping.
I've already implemented it successfully in a forward pipeline.
The steps I do are:
Get the position in light view
Convert to light-view clip space
Get shadow texture coords with * 0.5 + 0.5
Check depth
Edit: Updated code with new result image:
float checkShadow(vec3 position) {
    // get position in light view
    mat4 invView = inverse(cameraView);
    vec4 pEyeDir = sunBias * sunProjection * sunView * invView * vec4(position, 1.0);

    // light view clip space
    pEyeDir = pEyeDir / pEyeDir.w;

    // get uv coordinates
    vec2 sTexCoords = pEyeDir.xy * 0.5 + 0.5;

    float bias = 0.0001;
    float depth = texture(sunDepthTex, sTexCoords).r - bias;

    float shadow = 1.0f;
    if (pEyeDir.z * 0.5 + 0.5 > depth)
    {
        shadow = 0.3f;
    }
    return shadow;
}
Here are some variables important for the code above:
vec3 position = texture(positionTex, uv).rgb;
Also, I get a dark background (the meshes stay the same) at some camera positions; this only happens when I multiply the shadow value into the final color.
As requested, here are the position and sun depth textures:
OK, I fixed it. The problem was that the light depth texture had a different size than the gBuffer textures.
To use different texture sizes I had to normalize the coordinates with
coords = (coords / imageSize) * windowSize;
I have a radial blur shader in GLSL which takes a texture, applies a radial blur to it and renders the result to the screen. This works very well so far.
The problem is that this applies the radial blur only to the first texture in the scene, but what I actually want to do is apply the blur to the whole scene.
What is the best way to achieve this functionality? Can I do this with only shaders, or do I have to render the scene to a texture first (in OpenGL) and then pass this texture to the shader for further processing?
// Vertex shader
varying vec2 uv;
void main(void)
{
    gl_Position = vec4( gl_Vertex.xy, 0.0, 1.0 );
    gl_Position = sign( gl_Position );
    uv = (vec2( gl_Position.x, -gl_Position.y ) + vec2(1.0) ) / vec2(2.0);
}
// Fragment shader
uniform sampler2D tex;
varying vec2 uv;
const float sampleDist = 1.0;
const float sampleStrength = 2.2;
void main(void)
{
    float samples[10];
    samples[0] = -0.08;
    samples[1] = -0.05;
    samples[2] = -0.03;
    samples[3] = -0.02;
    samples[4] = -0.01;
    samples[5] =  0.01;
    samples[6] =  0.02;
    samples[7] =  0.03;
    samples[8] =  0.05;
    samples[9] =  0.08;

    // direction from the current pixel towards the screen centre
    vec2 dir = 0.5 - uv;
    float dist = sqrt(dir.x * dir.x + dir.y * dir.y);
    dir = dir / dist;

    vec4 color = texture2D(tex, uv);
    vec4 sum = color;

    // accumulate samples along the radial direction and average them
    for (int i = 0; i < 10; i++)
        sum += texture2D(tex, uv + dir * samples[i] * sampleDist);
    sum *= 1.0 / 11.0;

    // blend more strongly the further the pixel is from the centre
    float t = dist * sampleStrength;
    t = clamp(t, 0.0, 1.0);
    gl_FragColor = mix(color, sum, t);
}
This is basically called "post-processing", because you're applying an effect (here: radial blur) to the whole scene after it has been rendered.
So yes, you're right: the usual way to do post-processing is to (a rough setup sketch in code follows this list):
create a screen-sized NPOT texture (GL_TEXTURE_RECTANGLE),
create a FBO, attach the texture to it
set this FBO to active, render the scene
disable the FBO, draw a full-screen quad with the FBO's texture.
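A minimal sketch of that setup in plain OpenGL calls (using an ordinary GL_TEXTURE_2D here; sceneW/sceneH are placeholder names for the window size):

// one-time setup: color texture + depth renderbuffer attached to an FBO
const int sceneW = 1280, sceneH = 720;   // placeholder window size
GLuint sceneTex = 0, depthRBO = 0, sceneFBO = 0;

glGenTextures(1, &sceneTex);
glBindTexture(GL_TEXTURE_2D, sceneTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, sceneW, sceneH, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenRenderbuffers(1, &depthRBO);
glBindRenderbuffer(GL_RENDERBUFFER, depthRBO);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, sceneW, sceneH);

glGenFramebuffers(1, &sceneFBO);
glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, sceneTex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRBO);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// every frame: render the scene into the FBO, then blur it onto the screen
glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
//   ... draw the scene as usual ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, sceneTex);
//   ... draw a full-screen quad with the radial blur shader sampling this texture ...

A GL_TEXTURE_RECTANGLE works as described in the list; on any GL 2.0+ hardware a plain non-power-of-two GL_TEXTURE_2D is equally fine and keeps the normalized uv coordinates your shader already uses.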
As for the "why", the reason is simple: the scene is rendered in parallel (the fragment shader is executed independently for many pixels). In order to do radial blur for pixel (x,y), you first need to know the pre-blur pixel values of the surrounding pixels. And those are not available in the first pass, because they are only being rendered in the meantime.
Therefore, you must apply the radial blur only after the whole scene is rendered, when the fragment shader for fragment (x, y) is able to read any pixel from the scene. This is the reason why you need two rendering stages for that.