Pointers on modern OpenGL shadow cubemapping?

Background
I am working on a 3D game using C++ and modern OpenGL (3.3). I am now working on the lighting and shadow rendering, and I've successfully implemented directional shadow mapping. After reading over the requirements for the game, I decided I'd need point light shadow mapping. After doing some research, I discovered that to do omnidirectional shadow mapping I would do something similar to directional shadow mapping, but with a cubemap instead.
I have no previous knowledge of cubemaps, but my understanding of them is that a cubemap is six textures, seamlessly attached.
I did some looking around, but unfortunately I struggled to find a definitive "tutorial" on the subject for modern OpenGL. I look for tutorials that explain a topic from start to finish, because I seriously struggle to learn from snippets of source code or bare concepts, but I tried.
Current understandings
Here is my general understanding of the idea, minus the technicalities. Please correct me.
For each point light, a framebuffer is set up, as in directional shadow mapping
A single cubemap texture is then generated and bound with glBindTexture(GL_TEXTURE_CUBE_MAP, shadowmap).
The cubemap is set up with the following attributes:
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
(this is also similar to directional shadow mapping)
Now glTexImage2D() is called six times, once for each face. I do that like this:
for (int face = 0; face < 6; face++) // Fill each face of the shadow cubemap
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_DEPTH_COMPONENT32F, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
The texture is attached to the framebuffer with a call to
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, shadowmap, 0);
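One detail worth noting at this point: a depth-only framebuffer has no colour attachment, so you typically also disable the colour draw/read buffers and verify completeness. A sketch, reusing the identifiers above (the error message is only illustrative):
glDrawBuffer(GL_NONE); // depth only: no colour buffer is written
glReadBuffer(GL_NONE); // ... or read
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    std::cerr << "Shadow framebuffer is incomplete" << std::endl; // needs <iostream>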
When the scene is to be rendered, it is rendered in two passes, like directional shadow mapping.
First, the shadow framebuffer is bound and the viewport is adjusted to the size of the shadow map (1024 by 1024 in this case).
Culling is set to the front faces with glCullFace(GL_FRONT)
The active shader program is switched to the shadow vertex and fragment shaders, whose sources I provide further down
The light view matrices for all six faces are calculated. I do it by creating a std::vector of glm::mat4 and push_back()-ing the matrices (sketched after the loop below). Each object is then rendered into each face, like this:
for (int i = 0; i < renderedObjects.size(); i++) // Iterate through all rendered objects
{
    renderedObjects[i]->bindBuffers(); // Bind buffers for rendering with it
    glm::mat4 depthModelMatrix = renderedObjects[i]->getModelMatrix(); // Set up model matrix
    for (int face = 0; face < 6; face++) // Draw for each side of the light
    {
        // Attach the current cubemap face as the depth target
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, shadowmap, 0);
        glClear(GL_DEPTH_BUFFER_BIT); // Clear depth buffer
        // Send MVP for shadow map
        glm::mat4 depthMVP = depthProjectionMatrix * depthViewMatrices[face] * depthModelMatrix;
        glUniformMatrix4fv(glGetUniformLocation(shadowMappingProgram, "depthMVP"), 1, GL_FALSE, glm::value_ptr(depthMVP));
        glUniformMatrix4fv(glGetUniformLocation(shadowMappingProgram, "lightViewMatrix"), 1, GL_FALSE, glm::value_ptr(depthViewMatrices[face]));
        glUniformMatrix4fv(glGetUniformLocation(shadowMappingProgram, "lightProjectionMatrix"), 1, GL_FALSE, glm::value_ptr(depthProjectionMatrix));
        glDrawElements(renderedObjects[i]->getDrawType(), renderedObjects[i]->getElementSize(), GL_UNSIGNED_INT, 0);
    }
}
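For reference, the view matrices pushed into depthViewMatrices are typically built with glm::lookAt, one per face. A sketch, assuming the light sits at lightPos and near/far planes of your choosing (the up vectors follow the usual cubemap face conventions):
std::vector<glm::mat4> depthViewMatrices;
glm::mat4 depthProjectionMatrix = glm::perspective(glm::radians(90.0f), 1.0f, nearPlane, farPlane); // 90 degrees covers exactly one face
depthViewMatrices.push_back(glm::lookAt(lightPos, lightPos + glm::vec3( 1,  0,  0), glm::vec3(0, -1,  0))); // +X
depthViewMatrices.push_back(glm::lookAt(lightPos, lightPos + glm::vec3(-1,  0,  0), glm::vec3(0, -1,  0))); // -X
depthViewMatrices.push_back(glm::lookAt(lightPos, lightPos + glm::vec3( 0,  1,  0), glm::vec3(0,  0,  1))); // +Y
depthViewMatrices.push_back(glm::lookAt(lightPos, lightPos + glm::vec3( 0, -1,  0), glm::vec3(0,  0, -1))); // -Y
depthViewMatrices.push_back(glm::lookAt(lightPos, lightPos + glm::vec3( 0,  0,  1), glm::vec3(0, -1,  0))); // +Z
depthViewMatrices.push_back(glm::lookAt(lightPos, lightPos + glm::vec3( 0,  0, -1), glm::vec3(0, -1,  0))); // -Z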
The default framebuffer is bound, and the scene is drawn normally.
Issue
Now, to the shaders. This is where my understanding runs dry. I am completely unsure of what I should do, and my research seems to conflict with itself because it targets different versions. I ended up blindly copying and pasting code from random sources, hoping it would achieve something other than a black screen. I know this is terrible, but there doesn't seem to be any clear definition of what to do. What spaces do I work in? Do I even need a separate shadow shader, like I used in directional shadow mapping? What the hell do I use as the type for a shadow cubemap? samplerCube? samplerCubeShadow? How do I sample said cubemap properly? I hope that someone can clear it up for me and provide a nice explanation.
My current understanding of the shader part is:
- When the scene is being rendered into the cubemap, the vertex shader simply takes the depthMVP uniform I calculated in my C++ code and transforms the input vertices by it.
- The fragment shader of the cubemap pass simply writes gl_FragCoord.z to its single output value. (This part is unchanged from when I implemented directional shadow mapping. I assumed it would be the same for cubemapping, because the shaders don't even interact with the cubemap; OpenGL simply renders their output to the cubemap, right? Because it's a framebuffer?)
The vertex shader for the normal rendering is unchanged.
In the fragment shader for normal rendering, the vertex position is transformed into the light's space with the light's projection and view matrix.
That's somehow used in the cubemap texture lookup. ???
Once the depth has been retrieved using magical means, it is compared to the distance from the light to the vertex, much like directional shadow mapping. If it's less, that point must be shadowed, and vice versa.
It's not much of an understanding. I go blank as to how the vertices are transformed and used to look up the cubemap, so I'm going to paste the source for my shaders, in the hope that people can clarify this. Please note that a lot of this code is blind copying and pasting; I haven't altered anything, so as not to jeopardise anyone's understanding of it.
Shadow vertex shader:
#version 150
in vec3 position;
uniform mat4 depthMVP;
void main()
{
gl_Position = depthMVP * vec4(position, 1);
}
Shadow fragment shader:
#version 150
out float fragmentDepth;
void main()
{
fragmentDepth = gl_FragCoord.z;
}
Standard vertex shader:
#version 150
in vec3 position;
in vec3 normal;
in vec2 texcoord;
uniform mat3 modelInverseTranspose;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
out vec3 fragnormal;
out vec3 fragnormaldirection;
out vec2 fragtexcoord;
out vec4 fragposition;
out vec4 fragshadowcoord;
void main()
{
fragposition = modelMatrix * vec4(position, 1.0);
fragtexcoord = texcoord;
fragnormaldirection = normalize(modelInverseTranspose * normal);
fragnormal = normalize(normal);
fragshadowcoord = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
}
Standard fragment shader:
#version 150
out vec4 outColour;
in vec3 fragnormaldirection;
in vec2 fragtexcoord;
in vec3 fragnormal;
in vec4 fragposition;
in vec4 fragshadowcoord;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrixInversed;
uniform mat4 lightViewMatrix;
uniform mat4 lightProjectionMatrix;
uniform sampler2D tex;
uniform samplerCubeShadow shadowmap;
float VectorToDepthValue(vec3 Vec)
{
    vec3 AbsVec = abs(Vec);
    float LocalZcomp = max(AbsVec.x, max(AbsVec.y, AbsVec.z));
    const float f = 2048.0;
    const float n = 1.0;
    float NormZComp = (f + n) / (f - n) - (2 * f * n) / (f - n) / LocalZcomp;
    return (NormZComp + 1.0) * 0.5;
}
float ComputeShadowFactor(samplerCubeShadow ShadowCubeMap, vec3 VertToLightWS)
{
    float ShadowVec = texture(ShadowCubeMap, vec4(VertToLightWS, 1.0));
    if (ShadowVec + 0.0001 > VectorToDepthValue(VertToLightWS)) // To avoid self-shadowing, I guess
        return 1.0;
    return 0.7;
}
void main()
{
    vec3 light_position = vec3(0.0, 0.0, 0.0);
    vec3 VertToLightWS = light_position - fragposition.xyz;
    outColour = texture(tex, fragtexcoord) * ComputeShadowFactor(shadowmap, VertToLightWS);
}
I can't remember where the ComputeShadowFactor and VectorToDepthValue function code came from, because I was researching it on my laptop, which I can't get to right now, but this is the result of those shaders:
It is a small square of unshadowed space surrounded by shadowed space.
I am obviously doing a lot wrong here, probably centered on my shaders, due to a lack of knowledge on the subject; I find it difficult to learn from anything but tutorials, and I am very sorry for that. I am at a loss, and it would be wonderful if someone could shed light on this with a clear explanation of what I am doing wrong, why it's wrong, and how I can fix it, maybe even with some code. I think the issue may be that I am working in the wrong spaces.

I hope to provide an answer to some of your questions, but first some definitions are required:
What is a cubemap?
It is a map from a direction vector to a pair of [face, 2D coordinates on that face], obtained by projecting the direction vector onto a hypothetical cube.
What is an OpenGL cubemap texture?
It is a set of six "images".
What is a GLSL cubemap sampler?
It is a sampler primitive from which cubemap sampling can be done. This means that it is sampled using a direction vector instead of the usual texture coordinates. The hardware then projects the direction vector onto a hypothetical cube and uses the resulting [face, 2D texture coordinate] pair to sample the right "image" at the right 2D position.
What is a GLSL shadow sampler?
It is a sampler primitive that is bound to a texture containing NDC-space depth values and, when sampled using the shadow-specific sampling functions, returns a "comparison" between an NDC-space depth (in the same space as the shadow map, obviously) and the NDC-space depth stored inside the bound texture. The depth to compare against is specified as an additional element in the texture coordinates when calling the sampling function. Note that shadow samplers are provided for ease of use and speed, but it is always possible to do the comparison "manually" in the shader.
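To make that concrete, here is a minimal sketch of both paths with a 2D shadow sampler; the uniform names and the shadowCoord varying (the vertex position transformed by the light's view-projection matrix) are assumed:
#version 150
uniform sampler2DShadow shadowSampler; // depth texture with GL_TEXTURE_COMPARE_MODE = GL_COMPARE_R_TO_TEXTURE
in vec4 shadowCoord;                   // fragment position in the light's clip space
out vec4 outColour;
void main()
{
    // Perspective-divide and remap from [-1, 1] to [0, 1]; the third
    // component is the depth the hardware compares against the texture.
    vec3 coord = (shadowCoord.xyz / shadowCoord.w) * 0.5 + 0.5;
    float lit = texture(shadowSampler, coord); // 1.0 = not shadowed, 0.0 = shadowed (possibly filtered)
    // The "manual" equivalent with a plain sampler2D would be:
    //   float lit = coord.z <= texture(depthSampler, coord.xy).x ? 1.0 : 0.0;
    outColour = vec4(vec3(lit), 1.0);
}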
Now, for your questions:
OpenGL simply renders [...] to the cubemap, right?
No, OpenGL renders to a set of targets in the currently bound framebuffer.
In the case of cubemaps, the usual way to render into them is:
to create the cubemap and attach each of its six "images" to the same framebuffer (at different attachment points, obviously)
to enable only one of the targets at a time (so you render into each cubemap face individually)
to render what you want into the cubemap face (possibly using face-specific "view" and "projection" matrices)
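A sketch of that per-face loop, reusing the question's identifiers:
for (int face = 0; face < 6; face++)
{
    // Make this face the sole depth target for the current iteration
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, shadowmap, 0);
    glClear(GL_DEPTH_BUFFER_BIT);
    // ... upload the face-specific view/projection matrices, then draw the scene
}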
Point-light shadow maps
In addition to everything said about cubemaps, there are a number of problems with using them to implement point-light shadow mapping, so the hardware depth comparison is rarely used.
Instead, common practice is the following:
instead of writing NDC-space depth, write the radial distance from the point light
when querying the shadow map (see the sample code at the bottom):
do not use hardware depth comparisons (use samplerCube instead of samplerCubeShadow)
transform the point to be tested into "cube space" (which does not include projection at all)
use the "cube-space" vector as the lookup direction to sample the cubemap
compare the radial distance sampled from the cubemap with the radial distance of the tested point
Sample code
// sample radial distance from the cubemap
float radial_dist = texture(my_cubemap, cube_space_vector).x;
// compare against test point radial distance
bool shadowed = length(cube_space_vector) > radial_dist;
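For the writing side of that practice, the shadow-pass fragment shader could look like this (a sketch: it assumes each cubemap face is a single-channel float colour target such as GL_R32F, with world_position passed in from the vertex shader):
#version 150
in vec3 world_position;         // world-space position from the vertex shader
uniform vec3 light_position_ws; // point light position in world space
out float radial_distance;      // written into the currently attached cubemap face
void main()
{
    radial_distance = length(world_position - light_position_ws);
}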

Related

Understanding shadow maps in OpenGL

I'm trying to implement omnidirectional shadow mapping by following this tutorial from LearnOpenGL. Its idea is very simple: in the shadow pass, we capture the scene from the light's perspective into a cubemap (shadow map), and we can use the geometry shader to build the depth cubemap in just one render pass. Here's the shader code for generating our shadow map:
vertex shader
#version 330 core
layout (location = 0) in vec3 aPos;
uniform mat4 model;
void main() {
gl_Position = model * vec4(aPos, 1.0);
}
geometry shader
#version 330 core
layout (triangles) in;
layout (triangle_strip, max_vertices=18) out;
uniform mat4 shadowMatrices[6];
out vec4 FragPos; // FragPos from GS (output per emitvertex)
void main() {
    for (int face = 0; face < 6; ++face) {
        gl_Layer = face; // built-in variable that specifies to which face we render
        for (int i = 0; i < 3; ++i) { // for each triangle vertex
            FragPos = gl_in[i].gl_Position;
            gl_Position = shadowMatrices[face] * FragPos;
            EmitVertex();
        }
        EndPrimitive();
    }
}
fragment shader
#version 330 core
in vec4 FragPos;
uniform vec3 lightPos;
uniform float far_plane;
void main() {
    // get distance between fragment and light source
    float lightDistance = length(FragPos.xyz - lightPos);
    // map to [0, 1] range by dividing by far_plane
    lightDistance = lightDistance / far_plane;
    // write this as modified depth
    gl_FragDepth = lightDistance;
}
Compared to classic shadow mapping, the main difference here is that we are explicitly writing to the depth buffer, with linear depth values between 0.0 and 1.0. Using this code I can correctly cast shadows in my own scene, but I cannot fully understand the fragment shader, and I think this code is flawed. Here is why:
Imagine that we have 3 spheres sitting on a floor, and a point light above the spheres. Looking down at the floor from the point light, we can see the -z slice of the shadow map (in RenderDoc, textures are displayed bottom-up, sorry for that).
If we write gl_FragDepth = lightDistance in the fragment shader, we are manually updating the depth buffer, so the hardware cannot perform the early z test. As a result, every fragment will go through our shader code to update the depth buffer; no fragment is discarded early to save performance. Now what if we draw the floor after the spheres?
The sphere fragments will write to the depth buffer first (per sample), followed by the floor fragments, but since the floor is farther away from the point light, it will overwrite the depth values of the sphere with larger values, and the shadow map will be incorrect. In this case, the order of drawing is important: distant objects must be drawn first, but it's not always possible to sort depth values for complex geometry. Perhaps we need something like order-independent transparency here?
To make sure that only the closest depth values are written to the shadow map, I modified the fragment shader a little bit:
// solution 1
gl_FragDepth = min(gl_FragDepth, lightDistance);
// solution 2
if (lightDistance < gl_FragDepth) {
    gl_FragDepth = lightDistance;
}
// solution 3
gl_FragDepth = 1.0;
gl_FragDepth = min(gl_FragDepth, lightDistance);
However, according to the OpenGL specification, none of them is going to work. Solution 2 cannot work because, if we update gl_FragDepth manually, we must update it in all execution paths. As for solution 1, when we clear the depth buffer using glClearNamedFramebufferfv(id, GL_DEPTH, 0, &clear_depth), the depth buffer is filled with the value clear_depth, which is usually 1.0, but the initial value of the gl_FragDepth variable is not the same as clear_depth; it is actually undefined, so it could be anything between 0 and 1. On my driver the default value is 0, so gl_FragDepth = min(0.0, lightDistance) is 0, and the shadow map will be completely black. Solution 3 also won't work, because we are still overwriting the previous depth value.
I learned that for OpenGL 4.2 and above, we can enforce the early z test by redeclaring the gl_FragDepth variable using:
layout (depth_<condition>) out float gl_FragDepth;
since my depth comparison function is the default glDepthFunc(GL_LESS), the condition needs to be depth_greater in order for the hardware to do early z. Unfortunately, this also won't work, as we are writing linear depth values to the buffer, which are always less than the default non-linear depth value gl_FragCoord.z, so the condition is really depth_less. Now I'm completely stuck; the depth buffer seems to be way more difficult than I thought.
Where might my reasoning be wrong?
You said:
The sphere fragments will write to the depth buffer first (per sample),
followed by the floor fragments, but since the floor is farther away from the
point light, it will overwrite the depth values of the sphere with larger
values, and the shadow map will be incorrect.
But if your fragment shader is not using early depth tests, then the hardware will perform depth testing after the fragment shader has executed.
From the OpenGL 4.6 specification, section 14.9.4:
When...the active program was linked with early fragment tests disabled,
these operations [including depth buffer test] are performed only after
fragment program execution
So if you write to gl_FragDepth in the fragment shader, the hardware cannot take advantage of the speed gain of early depth testing, as you said, but that doesn't mean that depth testing won't occur. So long as you are using GL_LESS or GL_LEQUAL for the depth test, objects that are further away won't obscure objects that are closer.

OpenGL shader to shade each face similar to MeshLab's visualizer

I have very basic OpenGL knowledge, but I'm trying to replicate the shading effect that MeshLab's visualizer has.
If you load up a mesh in MeshLab, you'll realize that if a face is facing the camera, it is completely lit, and as you rotate the model, the lighting changes as the face that faces the camera changes. I loaded a simple unit cube with 12 faces in MeshLab and captured these screenshots to make my point clear:
Model loaded up (notice how the face is completely gray):
Model slightly rotated (notice how the faces are a bit darker):
More rotation (notice how all faces are now darker):
Off the top of my head, I think the way it works is that it somehow assigns colors per face in the shader. If the angle between the face normal and the camera is zero, then the face is fully lit (according to the color of the face); otherwise it is lit in proportion to the dot product between the normal vector and the camera vector.
I already have the code to draw meshes with shaders/VBO's. I can even assign per-vertex colors. However, I don't know how I can achieve a similar effect. As far as I know, fragment shaders work on vertices. A quick search revealed questions like this. But I got confused when the answers talked about duplicate vertices.
If it makes any difference, in my application I load *.ply files which contain vertex position, triangle indices and per-vertex colors.
Results after the answer by @DietrichEpp
I created the duplicate vertices array and used the following shaders to achieve the desired lighting effect. As can be seen in the posted screenshot, the similarity is uncanny :)
The vertex shader:
#version 330 core
uniform mat4 projection_matrix;
uniform mat4 model_matrix;
uniform mat4 view_matrix;
in vec3 in_position; // The vertex position
in vec3 in_normal; // The computed vertex normal
in vec4 in_color; // The vertex color
out vec4 color; // The vertex color (pass-through)
void main(void)
{
gl_Position = projection_matrix * view_matrix * model_matrix * vec4(in_position, 1);
// Compute the vertex's normal in camera space
vec3 normal_cameraspace = normalize(( view_matrix * model_matrix * vec4(in_normal,0)).xyz);
// Vector from the vertex (in camera space) to the camera (which is at the origin)
vec3 cameraVector = normalize(vec3(0, 0, 0) - (view_matrix * model_matrix * vec4(in_position, 1)).xyz);
// Compute the angle between the two vectors
float cosTheta = clamp( dot( normal_cameraspace, cameraVector ), 0,1 );
// The coefficient will create a nice looking shining effect.
// Also, we shouldn't modify the alpha channel value.
color = vec4(0.3 * in_color.rgb + cosTheta * in_color.rgb, in_color.a);
}
The fragment shader:
#version 330 core
in vec4 color;
out vec4 out_frag_color;
void main(void)
{
out_frag_color = color;
}
The uncanny results with the unit cube:
It looks like the effect is a simple lighting effect with per-face normals. There are a few different ways you can achieve per-face normals:
You can create a VBO with a normal attribute, and then duplicate vertex position data for faces which don't have the same normal. For example, a cube would have 24 vertexes instead of 8, because the "duplicates" would have different normals.
You can use a geometry shader which calculates a per-face normal.
You can use dFdx() and dFdy() in the fragment shader to approximate the normal.
I recommend the first approach, because it is simple. You can simply calculate the normals ahead of time in your program, and then use them to calculate the face colors in your vertex shader.
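For completeness, a minimal sketch of the geometry shader route (the second option above); the vsPosition input and faceNormal output are assumed names, and flat keeps the normal constant across the triangle:
#version 330 core
layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out;
in vec3 vsPosition[];     // camera-space positions from the vertex shader
flat out vec3 faceNormal; // one normal for the whole triangle
void main()
{
    // The per-face normal is the cross product of two triangle edges
    faceNormal = normalize(cross(vsPosition[1] - vsPosition[0],
                                 vsPosition[2] - vsPosition[0]));
    for (int i = 0; i < 3; ++i)
    {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}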
This is simple flat shading: instead of using per-vertex normals, you can evaluate a per-face normal with this GLSL snippet:
vec3 x = dFdx(FragPos);
vec3 y = dFdy(FragPos);
vec3 normal = cross(x, y);
vec3 norm = normalize(normal);
then apply some diffuse lighting using norm:
// diffuse light 1
vec3 lightDir1 = normalize(lightPos1 - FragPos);
float diff1 = max(dot(norm, lightDir1), 0.0);
vec3 diffuse = diff1 * diffColor1;

Rendering orthographic shadowmap to screen?

I have a working shadow map implementation for directional lights, where I construct the projection matrix using orthographic projection. My question is, how do I visualize the shadow map? I have the following shader I use for spot lights (which use a perspective projection), but when I try to apply it to a shadow map that was made with an orthographic projection, all I get is a completely black screen (even though the shadow mapping works when rendering the scene itself):
#version 430
layout(std140) uniform;
uniform UnifDepth
{
mat4 mWVPMatrix;
vec2 mScreenSize;
float mZNear;
float mZFar;
} UnifDepthPass;
layout (binding = 5) uniform sampler2D unifDepthTexture;
out vec4 fragColor;
void main()
{
vec2 texcoord = gl_FragCoord.xy / UnifDepthPass.mScreenSize;
float depthValue = texture(unifDepthTexture, texcoord).x;
depthValue = (2.0 * UnifDepthPass.mZNear) / (UnifDepthPass.mZFar + UnifDepthPass.mZNear - depthValue * (UnifDepthPass.mZFar - UnifDepthPass.mZNear));
fragColor = vec4(depthValue, depthValue, depthValue, 1.0);
}
You were trying to sample your depth texture with GL_TEXTURE_COMPARE_MODE set to GL_COMPARE_R_TO_TEXTURE. This is great for actually performing shadow mapping with a depth texture, but it makes trying to sample it using sampler2D undefined. Since you want the actual depth values stored in the depth texture, and not the result of a pass/fail depth test, you need to set GL_TEXTURE_COMPARE_MODE to GL_NONE first.
It is very inconvenient to set this state on a per-texture basis when you want to switch between visualizing the depth buffer and drawing shadows. I would suggest using a sampler object that has GL_TEXTURE_COMPARE_MODE set to GL_COMPARE_R_TO_TEXTURE (compatible with sampler2DShadow) for the shader that does shadow mapping and another sampler object that uses GL_NONE (compatible with sampler2D) for visualizing the depth buffer. That way all you have to do is swap out the sampler object bound to texture image unit 5 depending on how the shader actually uses the depth texture.
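A sketch of that setup (sampler objects are core since GL 3.3; the unit number matches the shader's binding = 5):
GLuint samplers[2];
glGenSamplers(2, samplers);

// Sampler 0: hardware depth comparison, for the sampler2DShadow shadow pass
glSamplerParameteri(samplers[0], GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
glSamplerParameteri(samplers[0], GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);

// Sampler 1: raw depth values, for the sampler2D visualization shader
glSamplerParameteri(samplers[1], GL_TEXTURE_COMPARE_MODE, GL_NONE);

// Bind whichever sampler matches the shader in use to texture image unit 5
glBindSampler(5, samplers[0]); // when shadow mapping
glBindSampler(5, samplers[1]); // when visualizing the depth buffer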

Odd OpenGL shadow mapping behaviour

I am working on a 3D game in C++ and OpenGL 3.2 with SFML. I have been struggling to implement point light shadow mapping. What I have done so far seems to conform to what I have learnt and examples I have seen, but still, no shadows.
What I have done is write a simplistic list of all the code I use, in the exact order I use it, but not as full source code, only the code that is relevant (because my project is split up into several classes):
Omnidirectional shadow mapping
C++
- Initialization
-- Use shadow pass shader program
-- Generate + bind the shadow frame buffer
glGenFramebuffers(1, &shadowFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, shadowFrameBuffer);
-- Generate a texture
glGenTextures(1, &shadowMap);
-- Bind texture as cubemap
glBindTexture(GL_TEXTURE_CUBE_MAP, shadowMap);
-- Set texture parameters
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
-- Generate an empty 1024 x 1024 image for every face of the cube
for (int face = 0; face < 6; face++)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_DEPTH_COMPONENT32F, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
-- Attach the cubemap to the framebuffer
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, shadowMap, 0);
-- Only draw depth to framebuffer
glDrawBuffer(GL_NONE);
- Every frame
-- Clear screen
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
-- Render shadow map
--- Bind shadow frame buffer
glBindFramebuffer(GL_FRAMEBUFFER, shadowFrameBuffer);
--- Set the viewport to the size of the shadow map
glViewport(0, 0, 1024, 1024);
-- Cull front faces
glCullFace(GL_FRONT);
-- Use shadow mapping program
--- Define projection matrix for rendering each face
glm::mat4 depthProjectionMatrix = glm::perspective(90.0f, 1.0f, 1.0f, 10.0f);
--- Define view matrices for all six faces
std::vector<glm::mat4> depthViewMatrices;
depthViewMatrices.push_back(glm::lookAt(lightInvDir, glm::vec3(1,0,0), glm::vec3(0,-1,0) )); // +X
depthViewMatrices.push_back(glm::lookAt(lightInvDir, glm::vec3(-1,0,0), glm::vec3(0,1,0) )); // -X
depthViewMatrices.push_back(glm::lookAt(lightInvDir, glm::vec3(0,1,0), glm::vec3(0,0,1) )); // +Y
depthViewMatrices.push_back(glm::lookAt(lightInvDir, glm::vec3(0,-1,0), glm::vec3(0,0,-1) )); // -Y
depthViewMatrices.push_back(glm::lookAt(lightInvDir, glm::vec3(0,0,1), glm::vec3(0,-1,0) )); // +Z
depthViewMatrices.push_back(glm::lookAt(lightInvDir, glm::vec3(0,0,-1), glm::vec3(0,1,0) )); // -Z
--- For every object in the scene
---- Bind the VBO of the object
---- Define the model matrix for the object based on its position and orientation
---- For all six sides of the cube
----- Set the correct side to render to
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, shadowMap, 0);
----- Clear depth buffer
glClear(GL_DEPTH_BUFFER_BIT);
----- Send model, view and projection matrices to shadow mapping shader
glUniformMatrix4fv(glGetUniformLocation(shadowMapper, "lightModelMatrix"), 1, GL_FALSE, glm::value_ptr(depthModelMatrix));
glUniformMatrix4fv(glGetUniformLocation(shadowMapper, "lightViewMatrix"), 1, GL_FALSE, glm::value_ptr(depthViewMatrices[i]));
glUniformMatrix4fv(glGetUniformLocation(shadowMapper, "lightProjectionMatrix"), 1, GL_FALSE, glm::value_ptr(depthProjectionMatrix));
----- Draw the object
glDrawElements(....);
- END SHADOW MAP DRAW
-- Cull back faces
glCullFace(GL_BACK);
-- Use standard shader program
-- Bind default framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);
-- Activate cubemap texture
glActiveTexture(GL_TEXTURE1);
-- Bind cubemap texture
glBindTexture(GL_TEXTURE_CUBE_MAP, shadowMap);
-- Tell shader to sample the cubemap from texture unit 1
glUniform1i(glGetUniformLocation(currentProgram->id, "shadowmap"), 1);
-- Send standard MVPs and draw objects
glDrawElements(...);
- END C++
=================================
GLSL
shadowpass vertex shader source
#version 150
in vec3 position;
out vec3 worldPosition;
uniform mat4 lightModelMatrix;
uniform mat4 lightViewMatrix;
uniform mat4 lightProjectionMatrix;
void main()
{
gl_Position = lightProjectionMatrix * lightViewMatrix * lightModelMatrix * vec4(position, 1.0);
worldPosition = (lightModelMatrix * vec4(position, 1.0)).xyz; // Send world position of vertex to fragment shader
}
shadowpass fragment shader source
#version 150
in vec3 worldPosition; // Vertex position in world space
out float distance; // Distance from vertex position to light position
vec3 lightWorldPosition = vec3(0.0, 0.0, 0.0); // Light position in world space
void main()
{
distance = length(worldPosition - lightWorldPosition); // Distance from point to light
// Distance will be written to the cubemap
}
standard vertex shader source
#version 150
in vec3 position;
in vec3 normal;
in vec2 texcoord;
uniform mat3 modelInverseTranspose; // needed by fragnormaldirection below; presumably dropped in transcription
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
out vec3 fragnormal;
out vec3 fragnormaldirection;
out vec2 fragtexcoord;
out vec4 fragposition;
out vec4 fragshadowcoord;
void main()
{
fragposition = vec4(position, 1.0); // Position of vertex in object space
fragtexcoord = texcoord;
fragnormaldirection = normalize(modelInverseTranspose * normal);
fragnormal = normalize(normal);
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
}
standard fragment shader source
#version 150
out vec4 outColour;
in vec3 fragnormaldirection;
in vec2 fragtexcoord;
in vec3 fragnormal;
in vec4 fragposition;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrixInversed;
uniform sampler2D tex;
uniform samplerCube shadowmap;
void main()
{
    vec3 lightpos = vec3(0.0, 0.0, 0.0);
    vec3 pointToLight = (fragposition * modelMatrix).xyz - lightpos; // Get vector between this point and the light
    float dist = texture(shadowmap, pointToLight).x; // Get distance written in texture
    float shadowfactor = 1.0;
    if (length(pointToLight) > dist) // Is it occluded?
        shadowfactor = 0.5;
    outColour = texture(tex, fragtexcoord) * shadowfactor;
}
Here is a picture of what my code does now:
This is a strange effect, but it seems to be close to what I meant. It seems that any surface exposed to the light at (0, 0, 0) has an unshadowed circle at the center of it, while everything else is shadowed.
One very useful way of debugging shadow maps is indeed to have a way to display the contents of the shadow maps as quads on the screen: six quads in the case of cube shadow maps. That could be implemented as a debug easter egg where you can display the full texture on the whole screen and 'go to the next face' with another key combo, so you can flip through the six faces.
Then, one of the most important things in cubic shadow maps is the depth range. A point light doesn't have an infinite range, so generally you want to scale your depth storage to match the light's range.
You can use a floating-point, 16-bit luminance (or red channel) texture to store a world depth (spherical, meaning the true length(ray-to-intersection)), using a little calculation in the pixel shader.
Or you can use linear depth (the same kind that is stored in a classic Z-buffer), which is the depth of the normalized device coordinates, that is, the depth after the projection matrix. In that case, to reconstruct the world position in the lighting shader (next pass), the issue is to be sure to divide by w after you multiply by the inverse view*projection of the camera cube face.
The key to debugging shadow maps is all in shader twiddling. Start by using colors to visualize the depth stored in your shadow maps as perceived by the pixels of your world. It was the only way that helped me fix point shadow maps in my company's engine. You can make a color code using a combination of mix and clamp, like blue from 0 to 0.3, red from 0.3 to 0.6, green from 0.6 to 1. If you have world-distance storage it is easier, but it is still interesting to visualize it through color codes; just use the same function but divide the distance by your expected world range.
Using that visualization scheme you'll be able to see the shadowed zones right away, because they all bear the same color (since the 'ray' was intercepted by a closer surface). Once you get to that point, the rest will all go smoothly.
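As a sketch, the colour code described above could be written like this (depth is assumed to already be remapped to [0, 1]):
vec3 depthToColour(float depth)
{
    // blue -> red over [0, 0.3], solid red up to 0.6, red -> green over [0.6, 1]
    vec3 colour = mix(vec3(0.0, 0.0, 1.0), vec3(1.0, 0.0, 0.0), clamp(depth / 0.3, 0.0, 1.0));
    return mix(colour, vec3(0.0, 1.0, 0.0), clamp((depth - 0.6) / 0.4, 0.0, 1.0));
}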
good luck :)

OpenGL issue: cannot render geometry on screen

My program was meant to draw a simple textured cube on screen, however, I cannot get it to render anything other than the clear color. This is my draw function:
void testRender() {
    glClearColor(.25f, 0.35f, 0.15f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glUniformMatrix4fv(resources.uniforms.m4ModelViewProjection, 1, GL_FALSE, (const GLfloat*)resources.modelviewProjection.modelViewProjection);
    glEnableVertexAttribArray(resources.attributes.vTexCoord);
    glEnableVertexAttribArray(resources.attributes.vVertex);
    // deal with vTexCoord first
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, resources.hiBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, resources.htcBuffer);
    glVertexAttribPointer(resources.attributes.vTexCoord, 2, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 2, (void*)0);
    // now the other one
    glBindBuffer(GL_ARRAY_BUFFER, resources.hvBuffer);
    glVertexAttribPointer(resources.attributes.vVertex, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 3, (void*)0);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, resources.htextures[0]);
    glUniform1i(resources.uniforms.colorMap, 0);
    glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, (void*)0);
    // clean up a bit
}
In addition, here is the vertex shader:
#version 330
in vec3 vVertex;
in vec2 vTexCoord;
uniform mat4 m4ModelViewProjection;
smooth out vec2 vVarryingTexCoord;
void main(void) {
    vVarryingTexCoord = vTexCoord;
    gl_Position = m4ModelViewProjection * vec4(vVertex, 1.0);
}
and the fragment shader (I have given up on textures for now):
#version 330
uniform sampler2D colorMap;
in vec2 vVarryingTexCoord;
out vec4 vVaryingFragColor;
void main(void) {
    vVaryingFragColor = texture(colorMap, vVarryingTexCoord);
    vVaryingFragColor = vec4(1.0, 1.0, 1.0, 1.0);
}
The vertex array buffer for the position coordinates makes a simple cube (with all coordinates a signed 0.25), while the model-view projection is just the inverse camera matrix (moved back by a factor of two) applied to a perspective matrix. However, even without the matrix transformation, I am unable to see anything on screen. Originally, I had two different buffers that needed two different element index lists, but now both buffers (containing the vertex and texture coordinate data) are the same length and in order. The code itself is derived from the Durian Software tutorial and the latest OpenGL SuperBible. The rest of the code is here.
By this point, I have tried nearly everything I can think of. Is this code even remotely close? If so, why can't I get anything to render onscreen?
You're looking pretty good so far.
The only thing that I see right now is that you've got GL_DEPTH_TEST enabled, but you don't clear the depth buffer. Even if the buffer is initialized to a good value, you would be drawing empty scenes on every frame after the first one, because the depth buffer is not being cleared.
If that does not help, can you make sure that you have no glGetError() errors? You may have to clean up your unused texturing attributes/uniforms to get the error queue clean, but that would be my next step.
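Both suggestions as a quick sketch (the clear colour mirrors testRender; the logging is only illustrative):
// 1. Clear the depth buffer together with the colour buffer
glClearColor(.25f, 0.35f, 0.15f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// 2. Drain the GL error queue after the suspect calls
for (GLenum err; (err = glGetError()) != GL_NO_ERROR; )
    fprintf(stderr, "GL error: 0x%04x\n", err); // needs <cstdio>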