OpenGL - Shadow cubemap sides have incorrect rotation?

This is more of a technical question than an actual programming question.
I'm trying to implement shadow mapping in my application, which was fairly straightforward for simple spotlights. For point lights, however, I'm using shadow cubemaps, and I'm having a lot of trouble with them.
After rendering my scene on the cubemap, this is my result:
(I've used glReadPixels to read the pixels of each side.)
Now, the object that should be casting the shadow is drawn as it should be; what confuses me is the orientation of the sides of the cubemap. It seems to me that the left side (X-) should be connected with the bottom side (Y-), so it should basically be rotated by 90° clockwise:
I can't find any examples of what a shadow cubemap is supposed to look like, so I'm unsure whether there's actually something wrong with mine or if it's supposed to look like that. I'm fairly certain my matrices are set up correctly, and the shaders for rendering to the shadow map are as simple as can be, so I doubt there's anything wrong with them:
// Projection Matrix:
glm::perspective<float>(90.f,1.f,2.f,m_distance) // fov = 90, aspect = 1, near plane = 2, far plane = m_distance (the light's range)
// View Matrices:
glm::lookAt(GetPosition(),GetPosition() +glm::vec3(1,0,0),glm::vec3(0,1,0));
glm::lookAt(GetPosition(),GetPosition() +glm::vec3(-1,0,0),glm::vec3(0,1,0));
glm::lookAt(GetPosition(),GetPosition() +glm::vec3(0,1,0),glm::vec3(0,0,-1));
glm::lookAt(GetPosition(),GetPosition() +glm::vec3(0,-1,0),glm::vec3(0,0,1));
glm::lookAt(GetPosition(),GetPosition() +glm::vec3(0,0,1),glm::vec3(0,1,0));
glm::lookAt(GetPosition(),GetPosition() +glm::vec3(0,0,-1),glm::vec3(0,1,0));
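For comparison, the per-face up vectors I see in most cubemap-rendering examples follow the GL cubemap face convention: (0,-1,0) for the ±X and ±Z faces, (0,0,1) for +Y, and (0,0,-1) for -Y. As a sketch (same GLM calls as above, not necessarily what my renderer needs):
glm::lookAt(GetPosition(), GetPosition() + glm::vec3( 1, 0, 0), glm::vec3(0,-1, 0)); // +X
glm::lookAt(GetPosition(), GetPosition() + glm::vec3(-1, 0, 0), glm::vec3(0,-1, 0)); // -X
glm::lookAt(GetPosition(), GetPosition() + glm::vec3( 0, 1, 0), glm::vec3(0, 0, 1)); // +Y
glm::lookAt(GetPosition(), GetPosition() + glm::vec3( 0,-1, 0), glm::vec3(0, 0,-1)); // -Y
glm::lookAt(GetPosition(), GetPosition() + glm::vec3( 0, 0, 1), glm::vec3(0,-1, 0)); // +Z
glm::lookAt(GetPosition(), GetPosition() + glm::vec3( 0, 0,-1), glm::vec3(0,-1, 0)); // -Z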
Vertex Shader:
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
uniform mat4 depthMVP;
void main()
{
gl_Position = depthMVP *vec4(vertexPosition_modelspace,1.0);
}
Fragment Shader:
#version 330 core
layout(location = 0) out float fragmentdepth;
void main()
{
fragmentdepth = gl_FragCoord.z;
}
(I actually found these shaders in another thread on here, IIRC.)
Using this cubemap in the actual scene gives me odd results, but I don't know whether my main fragment / vertex shaders are at fault or whether the cubemap is incorrect in the first place, which makes debugging very difficult.
I'd basically just like confirmation (or disconfirmation) that my shadow cubemap 'looks' right and, if it doesn't, an idea of what could be causing such behavior.
// Update:
Here's a video of how the shadowmap is updated: http://youtu.be/t9VRZy9uGvs
It looks right to me; could anyone confirm or disconfirm?

Related

Understanding shadow maps in OpenGL

I'm trying to implement omni-directional shadow mapping by following this tutorial from learnOpenGL. The idea is simple: in the shadow pass, we capture the scene from the light's perspective into a cubemap (the shadow map), and we can use a geometry shader to build the depth cubemap in just one render pass. Here's the shader code for generating the shadow map:
vertex shader
#version 330 core
layout (location = 0) in vec3 aPos;
uniform mat4 model;
void main() {
gl_Position = model * vec4(aPos, 1.0);
}
geometry shader
#version 330 core
layout (triangles) in;
layout (triangle_strip, max_vertices=18) out;
uniform mat4 shadowMatrices[6];
out vec4 FragPos; // FragPos from GS (output per emitvertex)
void main() {
    for (int face = 0; face < 6; ++face) {
        gl_Layer = face; // built-in variable that specifies to which face we render.
        for (int i = 0; i < 3; ++i) // for each triangle vertex
        {
            FragPos = gl_in[i].gl_Position;
            gl_Position = shadowMatrices[face] * FragPos;
            EmitVertex();
        }
        EndPrimitive();
    }
}
fragment shader
#version 330 core
in vec4 FragPos;
uniform vec3 lightPos;
uniform float far_plane;
void main() {
// get distance between fragment and light source
float lightDistance = length(FragPos.xyz - lightPos);
// map to [0;1] range by dividing by far_plane
lightDistance = lightDistance / far_plane;
// write this as modified depth
gl_FragDepth = lightDistance;
}
Compared to classic shadow mapping, the main difference here is that we are explicitly writing to the depth buffer, with linear depth values between 0.0 and 1.0. Using this code I can correctly cast shadows in my own scene, but I cannot fully understand the fragment shader, and I think this code is flawed. Here is why:
Imagine that we have 3 spheres sitting on a floor, and a point light above the spheres. Looking down at the floor from the point light, we can see the -Z slice of the shadow map (in RenderDoc, textures are displayed bottom-up, sorry about that).
If we write gl_FragDepth = lightDistance in the fragment shader, we are manually updating the depth buffer, so the hardware cannot perform the early-z test; as a result, every fragment has to go through our shader code to update the depth buffer, and no fragment is discarded early to save performance. Now, what if we draw the floor after the spheres?
The sphere fragments will write to the depth buffer first (per sample), followed by the floor fragments, but since the floor is farther from the point light, they will overwrite the spheres' depth values with larger ones, and the shadow map will be incorrect. In that case the drawing order matters: distant objects must be drawn first, but it's not always possible to sort by depth for complex geometry. Perhaps we need something like order-independent transparency here?
To make sure that only the closest depth values are written to the shadow map, I modified the fragment shader a little bit:
// solution 1
gl_FragDepth = min(gl_FragDepth, lightDistance);
// solution 2
if (lightDistance < gl_FragDepth) {
gl_FragDepth = lightDistance;
}
// solution 3
gl_FragDepth = 1.0;
gl_FragDepth = min(gl_FragDepth, lightDistance);
However, according to the OpenGL specification, none of them is going to work. Solution 2 cannot work because, if we update gl_FragDepth manually, we must update it along all execution paths. As for solution 1: when we clear the depth buffer using glClearNamedFramebufferfv(id, GL_DEPTH, 0, &clear_depth), the depth buffer is filled with clear_depth, which is usually 1.0, but the initial value of the gl_FragDepth variable is not clear_depth; it is actually undefined, so it could be anything between 0 and 1. On my driver the initial value is 0, so gl_FragDepth = min(0.0, lightDistance) is 0 and the shadow map is completely black. Solution 3 also won't work, because we are still overwriting the previous depth value.
I learned that for OpenGL 4.2 and above, we can enforce the early z test by redeclaring the gl_FragDepth variable using:
layout (depth_<condition>) out float gl_FragDepth;
Since my depth comparison function is the default glDepthFunc(GL_LESS), the condition needs to be depth_greater in order for the hardware to do early-z. Unfortunately, this also won't work, because we are writing linear depth values to the buffer, which are always less than the default non-linear depth value gl_FragCoord.z, so the condition would really have to be depth_less. Now I'm completely stuck; the depth buffer seems to be far more difficult than I thought.
Where might my reasoning be wrong?
You said:
The sphere fragments will write to the depth buffer first (per sample),
followed by the floor fragments, but since the floor is farther away from the
point light, it will overwrite the depth values of the sphere with larger
values, and the shadow map will be incorrect.
But if your fragment shader is not using early depth tests, then the hardware will perform depth testing after the fragment shader has executed.
From the OpenGL 4.6 specification, section 14.9.4:
When...the active program was linked with early fragment tests disabled,
these operations [including depth buffer test] are performed only after
fragment program execution
So if you write to gl_FragDepth in the fragment shader, the hardware cannot take advantage of the speed gain of early depth testing, as you said, but that doesn't mean that depth testing won't occur. So long as you are using GL_LESS or GL_LEQUAL for the depth test, objects that are further away won't obscure objects that are closer.
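In other words, with the usual shadow-pass state the late (post-shader) depth test still keeps the value closest to the light for each texel, regardless of draw order. A minimal sketch of that state, assuming a plain OpenGL setup (depthFBO is a placeholder name):
glBindFramebuffer(GL_FRAMEBUFFER, depthFBO);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);      // keep the smaller (closer-to-light) value
glDepthMask(GL_TRUE);      // allow depth writes
glClearDepth(1.0);         // so the first fragment on each texel always passes
glClear(GL_DEPTH_BUFFER_BIT);
// ... render the scene with the shadow-pass shaders; draw order no longer matters ...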

OpenGL Rendering Triangles with Positive Z-Values

It's my understanding that in NDC, the OpenGL "camera" is effectively located at (0,0,0), facing down the negative Z-axis. Because of this, anything with a positive Z-value should be behind the view, and not visible. I've seen this idea reiterated on multiple tutorials about coordinate systems in OpenGL.
I recently modified an orthographic renderer I've been working on to use proper GLM matrices instead of doing a bunch of separate addition and multiplication operations on each coordinate. I had previously been normalizing Z-values between 0 and -1, as I had believed this was the visible range.
However, when I was troubleshooting problems with the matrices, I noticed that triangles appeared to be rendering onscreen even when the final Z-values (after all transformations) were positive.
To make sure I wasn't forgetting about a transformation at some stage that re-normalized Z-values to the (0, -1) range, I tried forcing the Z-value of every vertex to a specific positive value (0.9) in the vertex shader:
#version 330 core
uniform mat4 svp; //sprite-view-projection matrix
layout (location = 0) in vec3 position;
layout (location = 1) in vec4 colorIn;
layout (location = 2) in vec2 texCoordsIn;
out vec4 color;
out vec2 texCoords;
void main()
{
vec4 temp = svp * vec4(position, 1.0);
temp.z = 0.9;
gl_Position = temp;
//gl_Position = svp * vec4(position, 1.0);
color = colorIn;
texCoords = texCoordsIn;
}
To my surprise, everything is rendered anyway.
Using a constant Z-value of -0.9 produces identical results. If I change the constant value in the vertex shader to be greater than or equal to 1.0, nothing renders. It's almost as if the camera is located at (0,0,1) facing down the negative Z-axis.
I'm aware that matrices can effectively change the location of the camera, but only through transformations on the vertices. If I set the z-value to be positive in the vertex shader, after all transformations, shouldn't it definitely be invisible?
I've gotten the renderer to work more or less as I intended with matrices, but this behavior is confusing and challenges my understanding of the OpenGL coordinate system(s).
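Is the actual rule perhaps that a clip-space vertex survives clipping when -w <= z <= w, i.e. NDC z ends up in [-1, 1]? A tiny sanity check of that condition (hypothetical values, not part of my shader):
// With w = 1.0, as in the forced-Z experiment above, the condition
//   -w <= z_clip <= w
// accepts both z = 0.9 and z = -0.9, which would explain why both render.
vec4 p = vec4(0.0, 0.0, 0.9, 1.0);                 // hypothetical clip-space position
bool insideClipVolume = -p.w <= p.z && p.z <= p.w; // true for z = 0.9 and for z = -0.9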

Vertex distortion of large plane from certain angles [closed]

I'm trying to write a shader that creates a grid on my ground plane, which is rather large (vertex coordinates are around 1000.0 or more). It was working fine until I realized that, from a certain view angle, some of the vertices seem "shifted":
When breaking it down, it becomes clear that the shader itself is not the problem. The same thing happens when stripping the shader of almost everything and just showing the vertex coordinates as color:
#version 330 core
in vec3 VertexPos;
out vec4 outColor;
void main()
{
outColor = vec4(VertexPos.xz, 0, 1);
}
The shift becomes worse when I move the camera closer and gets better when I move it further away (or disappears if I move it slightly left or right).
Now, the position and angle aren't arbitrary. The plane is simply made up of two triangles forming a quad. However, for reasons, it is not drawn with 4 vertices and 6 indices but with 6 vertices and 6 indices, so it is actually drawn like this (the gap isn't really there, of course):
As you might have guessed, the shift happens at the edge where the two triangles meet. It also seems to happen only when this edge is perfectly horizontal in the final image.
To avoid the problem I could scale down the plane quite a bit (which I don't want) or draw it with only four vertices (I haven't tried that yet); something like the layout sketched below.
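For reference, the four-vertex version I mean would share a single diagonal instead of duplicating it (placeholder extents; winding depends on my conventions):
float vertices[] = {
    -1000.0f, 0.0f, -1000.0f,  // 0: back left
     1000.0f, 0.0f, -1000.0f,  // 1: back right
     1000.0f, 0.0f,  1000.0f,  // 2: front right
    -1000.0f, 0.0f,  1000.0f,  // 3: front left
};
unsigned int indices[] = {
    0, 1, 2,   // first triangle
    0, 2, 3,   // second triangle
};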
Nonetheless, I'd really like to find the root of the problem. I suspect it has something to do with floating-point precision when clipping the vertices that are outside the screen, or something like that, but I can't quite put my finger on it.
Any ideas?
EDIT: The vertex shader. Nothing special going on here really:
#version 330 core
layout(location = 0) in vec3 position;
layout(std140) uniform GlobalMatrices
{
mat4 viewProjection;
vec3 camPos;
vec3 lightCol;
vec3 lightPos;
vec3 ambient;
};
uniform mat4 transform;
out vec3 VertexPos;
void main()
{
vec4 transformed = transform * vec4(position, 1.0);
VertexPos = transformed.xyz;
gl_Position = viewProjection * transformed;
}
EDIT 2:
According to RenderDoc, there's nothing wrong with my vertex attributes or coordinates either:

OpenGL vertices jitter when moving - 2D scene

I am working on a 2D project and I noticed the following issue:
As you can see in the GIF above, when the object makes small movements, its vertices jitter.
To render, every frame I clear a VBO, calculate the new positions of the vertices and then insert them into the VBO. Every frame, I create the exact same structure, but from a different origin.
Is there a way to get smooth motion even when the displacement between each frame is so minor?
I am using SDL2 so double buffering is enabled by default.
This is a minor issue, but it becomes very annoying once I apply a texture to the model.
Here is the vertex shader I am using:
#version 330 core
layout (location = 0) in vec2 in_position;
layout (location = 1) in vec2 in_uv;
layout (location = 2) in vec3 in_color;
uniform vec2 camera_position, camera_size;
void main() {
gl_Position = vec4(2 * (in_position - camera_position) / camera_size, 0.0f, 1.0f);
}
What you see is caused by the rasterization algorithm. Consider the following two rasterizations of the same geometry (red lines) offset by only half a pixel:
As can be seen, shifting by just half a pixel can change the perceived spacing between the vertical lines from three pixels to two pixels. Moreover, the horizontal lines didn't shift, therefore their appearance didn't change.
This inconsistent behavior is what manifests as "wobble" in your animation.
One way to solve this is to enable anti-aliasing with glEnable(GL_LINE_SMOOTH). Make sure to have correct blending enabled. This will, however, result in blurred lines when they fall right between the pixels.
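For example, a minimal sketch of that state, assuming standard alpha blending:
glEnable(GL_LINE_SMOOTH);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // standard alpha blending for the smoothed edges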
If instead you really need the crisp jagged-line look (e.g. pixel art), then you need to make sure that your geometry only ever moves by an integer number of pixels:
vec2 scale = 2.0 / camera_size;
vec2 offset = -scale * camera_position;
vec2 pixel_size = 2.0 / viewport_size;
offset = round(offset / pixel_size) * pixel_size; // snap to pixels
gl_Position = vec4(scale * in_position + offset, 0.0f, 1.0f);
Add viewport_size as a uniform.
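Putting it together, the adjusted vertex shader could look like this (a sketch; the unused attributes are omitted and viewport_size is the new uniform):
#version 330 core
layout (location = 0) in vec2 in_position;
uniform vec2 camera_position, camera_size;
uniform vec2 viewport_size; // framebuffer size in pixels
void main() {
    vec2 scale = 2.0 / camera_size;
    vec2 offset = -scale * camera_position;
    vec2 pixel_size = 2.0 / viewport_size;
    offset = round(offset / pixel_size) * pixel_size; // snap the camera offset to whole pixels
    gl_Position = vec4(scale * in_position + offset, 0.0, 1.0);
}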

OpenGL shadow map issue

I implemented a fairly simple shadow map. I have a simple OBJ-imported plane as the ground and a bunch of trees.
I have a weird shadow on the plane which I think is the plane's self-shadow. I am not sure what code to post; if it would help, please tell me and I'll add it.
First image: the camera view of the scene. The weird textured low-poly sphere is just a reference for the light's position.
Second image: the depth texture stored in the framebuffer, from which I calculated the shadow coordinates from the light's perspective. Since I can't post more than 2 links, I'll leave this one out.
Third image: the depth texture with a better view of the plane projecting the shadow, from a different light position above the whole scene.
LE: the second picture http://i41.tinypic.com/23h3wqf.jpg (depth texture of the first picture)
I tried some fixes: adding glCullFace(GL_BACK) before drawing the ground in the first pass removes it from the depth texture, but it still appears in the final render (like in the first picture, the back part of the ground). I tried adding glCullFace in the second pass as well, and it still shows the shadow on the ground; I tried all combinations of front and back face culling. Could it be because of the values in the orthographic projection?
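For reference, one of the combinations I tried is the one usually suggested against self-shadowing, i.e. culling front faces only during the depth pass (a sketch, not my exact pass code):
// Shadow pass: render only back faces into the depth texture
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);
// ... render the scene from the light's point of view ...

// Normal pass: restore the usual culling
glCullFace(GL_BACK);
// ... render the scene from the camera ...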
Shadow fragment shader:
#version 330 core
layout(location = 0) out vec3 color;
in vec2 texcoord;
in vec4 ShadowCoord;
uniform sampler2D textura1;
uniform sampler2D textura2;
uniform sampler2D textura_depth;
uniform int has_alpha;
void main(){
    vec3 tex1 = texture(textura1, texcoord).xyz;
    vec3 tex2 = texture(textura2, texcoord).xyz;
    if(has_alpha > 0.5)
        if((tex2.r < 0.1) && (tex2.g < 0.1) && (tex2.b < 0.1))
            discard;
    // Z value of the depth texture from pass 1
    float hartaDepth = texture(textura_depth, (ShadowCoord.xy / ShadowCoord.w)).z;
    float shadowValue = 1.0;
    if(hartaDepth < ShadowCoord.z - 0.005)
        shadowValue = 0.5;
    color = shadowValue * tex1;
}