I'm trying to implement shadow maps in Java/OpenGL with GLSL.
It seems nearly impossible to find a working Java/OpenGL shadow-mapping example that uses a perspective projection.
I suspect the matrix calculation isn't working correctly.
Here is my shadow result (camera view/proj = shadow view/proj):
And here I have mapped the linearized depth buffer onto a rectangle; it's a little bit rotated:
It looks like the depth buffer is flipped, because on every surface I map it onto it is flipped in x and/or y. But maybe it's just a UV bug.
So the main question is: can you give me a hint as to what may have happened?
Here are some code snippets:
Final Shader: Depth & Shadow calculation (uSamplerShadow is sampler2D)
float shadowValue = 0.0;
vec4 lightVertexPosition2 = vShadowCoord;
lightVertexPosition2 /= lightVertexPosition2.w;
for (float x = -0.001; x <= 0.001; x += 0.0005)
    for (float y = -0.001; y <= 0.001; y += 0.0005)
    {
        if (texture2D(uSamplerShadow, lightVertexPosition2.xy + vec2(x, y)).r >= lightVertexPosition2.z)
            shadowValue += 1.0;
    }
shadowValue /= 16.0;

float f = 100.0;
float n = 0.1;
float z = (2 * n) / (f + n - texture2D(uSamplerShadow, vTexCoords).x * (f - n));
outColor = vec4(vec3(z), 1.0);
Final Shader: Shadow coord calculation (no bias matrix implemented yet)
vShadowCoord = uProjectionMatrix * uShadowViewMatrix * uWorldMatrix * vec4(aPosition,1.0);
Depth Shader
fragmentdepth = gl_FragCoord.z;
You can check my texture properties too, but I have already tried all the combinations I found on Google :)
shadowTextureProperties.setMagFilter(EnumTextureFilter.NEAREST);
shadowTextureProperties.setMinFilter(EnumTextureFilter.NEAREST);
shadowTextureProperties.setWrapS(EnumTextureWrap.CLAMP_TO_EDGE);
shadowTextureProperties.setWrapT(EnumTextureWrap.CLAMP_TO_EDGE);
shadowTextureProperties.setInternalColorFormat(EnumTextureColorFormat.DEPTH_COMPONENT16);
shadowTextureProperties.setSrcColorFormat(EnumTextureColorFormat.DEPTH_COMPONENT);
shadowTextureProperties.setValueFormat(EnumValueFormat.FLOAT);
shadowTextureProperties.setPname(new int[]{GL14.GL_TEXTURE_COMPARE_MODE, GL14.GL_TEXTURE_COMPARE_FUNC});
shadowTextureProperties.setParam(new int[]{GL11.GL_NONE, GL11.GL_LEQUAL});
First thing:
shadowTextureProperties.setPname(new int[]{GL14.GL_TEXTURE_COMPARE_MODE, GL14.GL_TEXTURE_COMPARE_FUNC});
GL_TEXTURE_COMPARE_FUNC is not a valid parameter for GL_TEXTURE_COMPARE_MODE. According to the reference, only GL_NONE and GL_COMPARE_R_TO_TEXTURE are allowed.
GL_COMPARE_R_TO_TEXTURE
One has to use a shadow sampler (sampler2DShadow) and the corresponding texture overload:
float texture( sampler2DShadow sampler, vec3 P)
Here, the sampler is sampled at location P.xy and the read value is compared to P.z. The result of this operation is a lighting factor (0.0 when completely shadowed, 1.0 when not shadowed at all).
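A minimal sketch of that route, reusing the uSamplerShadow and vShadowCoord names from the question (the depth texture must have GL_TEXTURE_COMPARE_MODE set to GL_COMPARE_R_TO_TEXTURE, and the bias matrix is still needed so the coordinates land in the [0, 1] range):

uniform sampler2DShadow uSamplerShadow;  // note: shadow sampler, not sampler2D
in vec4 vShadowCoord;
out vec4 outColor;

void main()
{
    vec3 coord = vShadowCoord.xyz / vShadowCoord.w; // perspective divide
    // The comparison against the stored depth happens in hardware;
    // the result is 1.0 when lit and 0.0 when fully shadowed.
    float shadowValue = texture(uSamplerShadow, coord);
    outColor = vec4(vec3(shadowValue), 1.0);
}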
GL_NONE
When you want to do the comparison yourself, you have to set GL_TEXTURE_COMPARE_MODE to GL_NONE and read the depth with an ordinary sampler2D, as the question's shader already does.
I'm trying to implement normal mapping, using a simple cube that I created. I followed this tutorial https://learnopengl.com/Advanced-Lighting/Normal-Mapping but I can't really work out how normal mapping should be done when drawing 3D objects, since the tutorial uses a 2D object.
In particular, my cube seems almost correctly lit, but there's something I think is not working as it should. I'm using a geometry shader that outputs green vectors for normals and red vectors for tangents, to help me debug. Here I post three screenshots of my work.
Directly lighted
Side lighted
Here I actually tried calculating my normals and tangents in a different way (quite wrong).
In the first image I calculate my cube's normals and tangents one face at a time. This seems to work for that face, but if I rotate my cube I think the lighting on the adjacent face is wrong. As you can see in the second image, it's not totally absent.
In the third image, I tried summing all normals and tangents per vertex, as I think it should be done, but the result seems quite wrong, since there is too little lighting.
In the end, my question is how I should calculate normals and tangents.
Should I do per-face calculations, or sum the vectors per vertex across all adjacent faces, or something else?
EDIT --
I'm passing the normal and tangent to the vertex shader and setting up my TBN matrix there. But as you can see in the first image, drawing my cube face by face, the faces adjacent to the one I'm looking at directly (which is well lit) are not correctly lit, and I don't know why. I thought I wasn't calculating my per-face normal and tangent correctly, and that calculating a normal and tangent that take the whole object into account could be the right way.
If it's right to calculate the normal and tangent as shown in the second image (green normal, red tangent) to set up the TBN matrix, why does the right face seem badly lit?
EDIT 2 --
Vertex shader:
void main(){
texture_coordinates = textcoord;
fragment_position = vec3(model * vec4(position,1.0));
mat3 normalMatrix = transpose(inverse(mat3(model)));
vec3 T = normalize(normalMatrix * tangent);
vec3 N = normalize(normalMatrix * normal);
T = normalize(T - dot(T, N) * N);
vec3 B = cross(N, T);
mat3 TBN = transpose(mat3(T,B,N));
view_position = TBN * viewPos; // camera position
light_position = TBN * lightPos; // light position
fragment_position = TBN * fragment_position;
gl_Position = projection * view * model * vec4(position,1.0);
}
In the VS I set up my TBN matrix and transform the light, fragment and view vectors to tangent space; this way I won't have to do any other calculation in the fragment shader.
Fragment shader:
void main() {
vec3 Normal = texture(TextSamplerNormals,texture_coordinates).rgb; // extract normal
Normal = normalize(Normal * 2.0 - 1.0); // correct range
material_color = texture2D(TextSampler,texture_coordinates.st); // diffuse map
vec3 I_amb = AmbientLight.color * AmbientLight.intensity;
vec3 lightDir = normalize(light_position - fragment_position);
vec3 I_dif = vec3(0,0,0);
float DiffusiveFactor = max(dot(lightDir,Normal),0.0);
vec3 I_spe = vec3(0,0,0);
float SpecularFactor = 0.0;
if (DiffusiveFactor>0.0) {
I_dif = DiffusiveLight.color * DiffusiveLight.intensity * DiffusiveFactor;
vec3 vertex_to_eye = normalize(view_position - fragment_position);
vec3 light_reflect = reflect(-lightDir,Normal);
light_reflect = normalize(light_reflect);
SpecularFactor = pow(max(dot(vertex_to_eye,light_reflect),0.0),SpecularLight.power);
if (SpecularFactor>0.0) {
I_spe = DiffusiveLight.color * SpecularLight.intensity * SpecularFactor;
}
}
color = vec4(material_color.rgb * (I_amb + I_dif + I_spe),material_color.a);
}
Handling discontinuity vs continuity
You are thinking about this the wrong way.
Depending on the use case, your normal map may be continuous or discontinuous. For example, on your cube, imagine each face had a different surface type; then the normals would be different depending on which face you are currently on.
Which normal you use is determined by the texture itself, not by any blending in the fragment shader.
The actual algorithm is (a short sketch follows the steps below):
Load rgb values of normal
Convert to -1 to 1 range
Rotate by the model matrix
Use new value in shading calculations
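A minimal GLSL sketch of those four steps (the sampler and varying names are placeholders, not from the question; vTBN holds the interpolated world-space tangent, bitangent and normal as columns):

uniform sampler2D uNormalMap;   // placeholder name for the normal map
in vec2 vTexCoord;              // placeholder UV varying
in mat3 vTBN;                   // world-space tangent, bitangent, normal as columns

vec3 shadingNormal()
{
    vec3 n = texture(uNormalMap, vTexCoord).rgb; // 1. load the rgb values of the normal
    n = n * 2.0 - 1.0;                           // 2. convert to the -1..1 range
    return normalize(vTBN * n);                  // 3. rotate out of tangent space
}
// 4. use shadingNormal() instead of the geometric normal in the shading calculations

(The question's shaders go the other way and transform the light/view vectors into tangent space instead; both approaches are equivalent.)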
If you want continuous normals, then you need to make sure that the charts in texture space that you use agree at their boundaries (the limits of the texture coordinates must match).
Mathematically, that means: if U and V are regions of R^2 that map to the normal field N of your shape, with f the mapping into the normals and S the mapping onto the surface, then it should hold that:
if lim S(x_1, x_2) = lim S(y_1, y_2), where {x_1, x_2} \subset U and {y_1, y_2} \subset V, then lim f(x_1, x_2) = lim f(y_1, y_2).
In plain English: if the coordinates in your charts map to positions that are close on the shape, then the normals they map to should also be close in normal space.
TL;DR: do not blend in the fragment shader. This is something that should be done by the normal map itself when it is baked, not by you when rendering.
Handling the tangent space
You have two options. Option 1: you pass the tangent T and the normal N to the shader. In that case the binormal B is T × N, and the basis {T, N, B} gives you the true space where normals need to be expressed.
Assume that in tangent space x is side, y is forward and z is up. Your transformed normal then becomes x*B + y*T + z*N.
If you do not pass the tangent, you must first create an arbitrary vector that is orthogonal to the normal, then use this as the tangent.
(Note: N here is the model normal, while (x, y, z) is the normal read from the normal map.)
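A small sketch of the no-tangent case (a hypothetical helper; N is assumed to be the normalized world-space normal):

// Build a tangent basis from the normal alone; any vector not parallel to N works
// as a starting point, but a basis chosen this way is NOT aligned with the texture's
// UV directions, so it only fits normal maps baked with a matching convention.
mat3 basisFromNormal(vec3 N)
{
    vec3 helper = abs(N.y) < 0.99 ? vec3(0.0, 1.0, 0.0) : vec3(1.0, 0.0, 0.0);
    vec3 T = normalize(cross(helper, N)); // orthogonal to N by construction
    vec3 B = cross(N, T);                 // binormal completes the basis
    return mat3(T, B, N);
}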
Does anyone know why 'depth' (computed in the vertex shader) differs from 'gl_FragCoord.z' (produced by OpenGL)? The difference grows as z decreases. Could it be that 'depth' is more precise at higher z values?
.vsh
out float depth;
void main (void) {
vec4 pos = mvpMatrix * vertex;
depth = ((pos.z / pos.w) + 1.0) * 0.5;
gl_Position = pos;
}
.fsh
in float depth;
void main(void) {
gl_FragDepth = depth;// or gl_FragCoord.z;
}
There are a couple of issues with your approach; the main points are:
gl_FragCoord.z is the hyperbolically distorted window-space z value. The hyperbolic z/w value is computed per vertex and then interpolated linearly in screen space for each fragment. But when you use a varying out float depth = (pos.z / pos.w), the GL will do a perspective-correct interpolation, which is non-linear. You could fix this by declaring the varying with the noperspective interpolation qualifier.
(pos.z / pos.w) doesn't even make sense on its own. Think about it: if the point lies in the plane through the camera, you'll get pos.w = 0 and no valid result. gl_FragCoord.z does not have this issue because clipping is done before the divide, and the divide is then applied to a new vertex lying on the near plane, which you are never going to see (there is no vertex shader invocation for it).
The same issue exists for points behind the camera: they end up mirrored in front of the camera. If a primitive has vertices on both sides of the camera plane, the interpolated depth value will be complete garbage, no matter which interpolation method you choose.
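A minimal sketch of that fix (same mvpMatrix/vertex names as in the question, default glDepthRange of [0, 1] assumed); with noperspective the varying is interpolated linearly in window space and should match gl_FragCoord.z for unclipped geometry:

// .vsh
uniform mat4 mvpMatrix;
in vec4 vertex;
noperspective out float depth;   // linear window-space interpolation, like gl_FragCoord.z
void main (void) {
    vec4 pos = mvpMatrix * vertex;
    depth = ((pos.z / pos.w) + 1.0) * 0.5;   // NDC z remapped to [0, 1]
    gl_Position = pos;
}

// .fsh
noperspective in float depth;
void main(void) {
    gl_FragDepth = depth;   // should now match gl_FragCoord.z up to rounding
}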
I've gotten shadows working properly for my Directional Lights, but I'm a little stumped when it comes to Point Lights. My idea is to use a cube map to render the depth from all six sides surrounding the light. So far, that's all working and good. I have verified this step by rendering each face of my cube to a 2D image, and it appears to be correct.
Now I'm trying to get the shadows to show up in the world. To do so, I am using GLSL's samplerCubeShadow data type. With it, I do:
vec3 lightToFrag = light.position - fragPos;
float lenLightToFrag = length(lightToFrag);
vec3 normLightToFrag = normalize(lightToFrag);
float shadow = texture(depthTexture, vec4(normLightToFrag, lenLightToFrag));
I've tried a number of configurations, and this always renders my scene in black. Any ideas? My fragPos is just the model matrix times the vertex position. Should I be applying the light's model-view matrix to it? Or, similarly, should I be applying the world's model-view matrix to the light? Any feedback is really appreciated!
Assuming you are storing depth values in the cubemap:
AFAIK the cubemap is axis-aligned (an AABB) in world space, so you need to do the calculations in world space. In your case light.position and fragPos must be in world space, or provide alternative variables/members if you use these names in view space somewhere else, e.g. for per-fragment light calculations.
You also need to convert lightToFrag to a depth value before passing it to texture().
This answer shows how to convert lightToFrag to a depth value: Omnidirectional shadow mapping with depth cubemap
Here is my implementation (I removed #ifdef SHAD_CUBE because others use the same name):
uniform samplerCubeShadow uShadMap;
uniform vec2 uFarNear;   // precomputed from the light projection's near/far planes (see client code below)

// Convert a light-to-fragment vector into the window-space depth that was
// stored when the corresponding cube face was rendered.
float depthValue(const in vec3 v) {
    vec3 absv = abs(v);
    float z = max(absv.x, max(absv.y, absv.z)); // distance along the dominant axis
    return uFarNear.x + uFarNear.y / z;
}

float shadowCoef() {
    vec3 L;
    float d;
    L = vPosWS - light.position_ws;        // world-space vector from light to fragment
    d = depthValue(L);
    return texture(uShadMap, vec4(L, d));  // hardware compare of d against the stored depth
}
This may require a model matrix uniform if you only have a combined ModelViewProjection (MVP) matrix.
Here is how to calculate uFarNear on the client side:
float n, f, nfsub, nf[2];
n = sm->near;
f = sm->far;
nfsub = f - n;
nf[0] = (f + n) / nfsub * 0.5f + 0.5f;  // uFarNear.x
nf[1] = -(f * n) / nfsub;               // uFarNear.y
glUniform2f(gkUniformLoc(prog, "uFarNear"), nf[0], nf[1]);
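For reference, assuming the cube faces were rendered with a standard perspective projection using that near plane n and far plane f, these two values simply encode the window-space depth as a function of the eye-space distance z along the dominant axis:

depth(z) = (f + n) / (2 * (f - n)) + 0.5 - (f * n) / ((f - n) * z)
         = uFarNear.x + uFarNear.y / z

which is exactly what depthValue() evaluates in the shader above.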
This is just an optimization; you don't have to use it and can instead follow the link I mentioned before.
You may need a bias value; the related answer uses one, but I'm not sure how to apply it to a cubemap correctly, or whether d ± 0.0001 is the right way.
If you want to store world-space distances in the cubemap instead, then this tutorial seems a good one: https://learnopengl.com/Advanced-Lighting/Shadows/Point-Shadows
I'm attempting to implement shadow mapping in my deferred rendering pipeline, but I'm running into a few issues actually generating the shadow map and then shadowing the pixels: pixels that I believe should be shadowed simply aren't.
I have a single directional light, which is the 'sun' in my engine. I have deferred rendering set up for lighting, which works properly thus far. I render the scene again into a depth-only FBO for the shadow map, using the following code to generate the view matrix:
glm::vec3 position = r->getCamera()->getCameraPosition(); // position of level camera
glm::vec3 lightDir = this->sun->getDirection(); // sun direction vector
glm::mat4 depthProjectionMatrix = glm::ortho<float>(-10,10,-10,10,-10,20); // ortho projection
glm::mat4 depthViewMatrix = glm::lookAt(position + (lightDir * 20.f / 2.f), -lightDir, glm::vec3(0,1,0));
glm::mat4 lightSpaceMatrix = depthProjectionMatrix * depthViewMatrix;
Then, in my lighting shader, I use the following code to determine whether a pixel is in shadow or not:
// lightSpaceMatrix is the same as above, FragWorldPos is the world position of the texel
vec4 FragPosLightSpace = lightSpaceMatrix * vec4(FragWorldPos, 1.0f);
// multiply non-ambient light values by ShadowCalculation(FragPosLightSpace)
// ... do more stuff ...
float ShadowCalculation(vec4 fragPosLightSpace) {
// perform perspective divide
vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;
// vec3 projCoords = fragPosLightSpace.xyz;
// Transform to [0,1] range
projCoords = projCoords * 0.5 + 0.5;
// Get closest depth value from light's perspective (using [0,1] range fragPosLight as coords)
float closestDepth = texture(gSunShadowMap, projCoords.xy).r;
// Get depth of current fragment from light's perspective
float currentDepth = projCoords.z;
// Check whether current frag pos is in shadow
float bias = 0.005;
float shadow = (currentDepth - bias) > closestDepth ? 1.0 : 0.0;
// Ensure that Z value is no larger than 1
if(projCoords.z > 1.0) {
shadow = 0.0;
}
return shadow;
}
However, that doesn't really get me what I'm after. Here's a screenshot of the output after shadowing, as well as the shadow map roughly converted to an image in Photoshop:
Render output
Shadow Map
Since the directional light is the only light in my shader, it seems that the shadow map is being rendered pretty close to correctly, since the perspective/direction roughly match. However, what I don't understand is why none of the teapots actually end up casting a shadow on the others.
I'd appreciate any pointers on what I might be doing wrong. I think my issue lies either in the calculation of that light-space matrix (I'm not sure how to calculate it properly for a moving camera, so that whatever is in view is covered), or in the way I determine whether the texel the deferred renderer is shading is in shadow. (FWIW, I reconstruct the world position from the depth buffer, but I've verified that this calculation works correctly.)
Thanks for any help.
Debugging shadow problems can be tricky. Let's start with a few points:
If you look at your render closely, you will actually see a shadow on one of the pots in the top left corner.
Try rotating your sun; this usually helps to show whether there are any problems with the light transform matrix. From your output, the sun seems very horizontal and might not cast visible shadows in this setup (another angle might show more shadows).
It appears as though you are calculating the matrix correctly, but try shrinking the maximum depth in glm::ortho(-10, 10, -10, 10, -10, 20) to tightly fit your scene. If the depth range is too large, you will lose precision and the shadows will have artifacts.
To narrow down where the problem is coming from, try outputting the result of your shadow-map lookup from here (a small debug sketch follows):
closestDepth = texture(gSunShadowMap, projCoords.xy).r
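A minimal way to do that (FragColor is a placeholder for whatever output your lighting pass actually writes to):

// Temporarily replace the lighting result with the raw shadow-map sample.
float closestDepth = texture(gSunShadowMap, projCoords.xy).r;
FragColor = vec4(vec3(closestDepth), 1.0);   // lighter = farther from the light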
If the shadow map is being projected correctly, then you know you have a problem in your depth comparisons. Hope this helps!
I want to draw the depth buffer in the fragment shader, I do this:
Vertex shader:
varying vec4 position_;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
position_ = gl_ModelViewProjectionMatrix * gl_Vertex;
Fragment shader:
float depth = ((position_.z / position_.w) + 1.0) * 0.5;
gl_FragColor = vec4(depth, depth, depth, 1.0);
But all I print is white. What am I doing wrong?
In what space do you want to draw the depth? If you want to draw the window-space depth, you can do this:
gl_FragColor = vec4(gl_FragCoord.z);
However, this will not be particularly useful, since most of the numbers will be very close to 1.0. Only extremely close objects will be visible. This is the nature of the distribution of depth values for a depth buffer using a standard perspective projection.
Or, to put it another way, that's why you're getting white.
If you want these values in a linear space, you will need to do something like the following:
float ndcDepth =
(2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) /
(gl_DepthRange.far - gl_DepthRange.near);
float clipDepth = ndcDepth / gl_FragCoord.w;
gl_FragColor = vec4((clipDepth * 0.5) + 0.5);
Indeed, the "depth" value of a fragment can be read from its z value in clip space (that is, after all matrix transformations). That much is correct.
However, your problem is in the division by w.
Division by w is called the perspective divide. Yes, it is necessary for perspective projection to work correctly.
However, division by w in this case "bunches up" all your values (as you have seen) very close to 1.0. There is a good reason for this: in a perspective projection, w = (some multiplier) * z. That is, you are dividing the z value (whatever it was computed to be) by (some factor of) the original z. No wonder you always get values near 1.0; you're almost dividing z by itself.
As a very simple fix, try dividing z just by the far plane, and send that to the fragment shader as the depth.
Vertex shader:
varying float DEPTH;
uniform float FARPLANE; // send this in as a uniform to the shader

gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
DEPTH = gl_Position.z / FARPLANE; // do not divide by w

Fragment shader:
varying float DEPTH;
// far things appear white, near things black
gl_FragColor.rgb = vec3(DEPTH, DEPTH, DEPTH);
The result is a not-bad, very linear-looking fade.