OpenGL issue: cannot render geometry on screen - C++

My program was meant to draw a simple textured cube on screen, however, I cannot get it to render anything other than the clear color. This is my draw function:
void testRender() {
    glClearColor(.25f, 0.35f, 0.15f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glUniformMatrix4fv(resources.uniforms.m4ModelViewProjection, 1, GL_FALSE, (const GLfloat*)resources.modelviewProjection.modelViewProjection);
    glEnableVertexAttribArray(resources.attributes.vTexCoord);
    glEnableVertexAttribArray(resources.attributes.vVertex);
    // deal with vTexCoord first
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, resources.hiBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, resources.htcBuffer);
    glVertexAttribPointer(resources.attributes.vTexCoord, 2, GL_FLOAT, GL_FALSE, sizeof(GLfloat)*2, (void*)0);
    // now the other one
    glBindBuffer(GL_ARRAY_BUFFER, resources.hvBuffer);
    glVertexAttribPointer(resources.attributes.vVertex, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat)*3, (void*)0);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, resources.htextures[0]);
    glUniform1i(resources.uniforms.colorMap, 0);
    glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, (void*)0);
    // clean up a bit
}
In addition, here is the vertex shader:
#version 330
in vec3 vVertex;
in vec2 vTexCoord;
uniform mat4 m4ModelViewProjection;
smooth out vec2 vVarryingTexCoord;
void main(void) {
    vVarryingTexCoord = vTexCoord;
    gl_Position = m4ModelViewProjection * vec4(vVertex, 1.0);
}
and the fragment shader (I have given up on textures for now):
#version 330
uniform sampler2D colorMap;
in vec2 vVarryingTexCoord;
out vec4 vVaryingFragColor;
void main(void) {
    vVaryingFragColor = texture(colorMap, vVarryingTexCoord);
    vVaryingFragColor = vec4(1.0, 1.0, 1.0, 1.0);
}
The vertex buffer for the position coordinates makes a simple cube (all coordinates are ±0.25), while the model-view-projection matrix is just the inverse camera matrix (moved back by a factor of two) applied to a perspective matrix. However, even without the matrix transformation, I am unable to see anything on screen. Originally, I had two different buffers that needed two different element index lists, but now both buffers (containing the vertex and texture coordinate data) are the same length and in order. The code itself is derived from the Durian Software tutorial and the latest OpenGL SuperBible. The rest of the code is here.
By this point, I have tried nearly everything I can think of. Is this code even remotely close? If so, why can't I get anything to render on screen?

You're looking pretty good so far.
The only thing that I see right now is that you've got GL_DEPTH_TEST enabled, but you never clear the depth buffer. Even if the buffer were initialized to a good value, you would be drawing empty scenes on every frame after the first one, because the depth buffer is never cleared.
If that does not help, can you make sure that you have no glGetError() errors? You may have to clean up your unused texturing attributes/uniforms to get the error queue clean, but that would be my next step.
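A minimal sketch of both suggestions, reusing the names from the question (assumes <cstdio> for fprintf):
glClearColor(.25f, 0.35f, 0.15f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear depth along with color
// ... draw calls ...
// drain the GL error queue; GL_NO_ERROR means it is empty
for (GLenum err; (err = glGetError()) != GL_NO_ERROR;)
    fprintf(stderr, "GL error: 0x%04x\n", err);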


Reading texels after imageStore()

I'm modifying the texels of a texture with imageStore() and, after that, reading those texels in another shader as a sampler2D with texture(), but I get the values that were stored in the texture before the imageStore(). With imageLoad() it works fine, but I need to use filtering and the performance of texture() is better, so is there a way to get the modified data with texture()?
Edit:
First fragment shader(for writing):
#version 450 core
layout (binding = 0, rgba32f) uniform image2D img;
in vec2 vs_uv_out;
void main()
{
    imageStore(img, ivec2(vs_uv_out), vec4(0.0f, 0.0f, 1.0f, 1.0f));
}
Second fragment shader(for reading):
#version 450 core
layout (binding = 0) uniform sampler2D tex;
in vec2 vs_uv_out;
out vec4 out_color;
void main()
{
    out_color = texture(tex, vs_uv_out);
}
That's how I run the shaders:
glUseProgram(shader_programs[0]);
glBindImageTexture(0, texture, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);
glDrawArrays(GL_TRIANGLES, 0, 6);
glUseProgram(shader_programs[1]);
glBindTextureUnit(0, texture);
glDrawArrays(GL_TRIANGLES, 0, 6);
I made this simple application to test this, because the real one is very complex. I first clear the texture with red, but the texels won't appear blue (except when using imageLoad() in the second fragment shader).
Oh, that's easy then. Image Load/Store writes use an incoherent memory model, not the synchronous model most of the rest of OpenGL uses. As such, just because you write something with Image Load/Store doesn't mean it's visible to anyone else. You have to explicitly make it visible for reading.
You need a glMemoryBarrier call between the rendering operation that writes the data and the operation that reads it. And since the reading operation is a texture fetch, the correct barrier to use is GL_TEXTURE_FETCH_BARRIER_BIT.
And FYI: your imageLoad was able to read the written data only due to pure luck. Nothing guaranteed that it would be able to read the written data. To ensure such reads, you'd need a memory barrier as well. Though obviously a different one: GL_SHADER_IMAGE_ACCESS_BARRIER_BIT.
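Applied to the code above, the fix is a single barrier between the two draws (a sketch, reusing the question's names):
glUseProgram(shader_programs[0]);
glBindImageTexture(0, texture, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);
glDrawArrays(GL_TRIANGLES, 0, 6);
// make the image writes visible to subsequent texture fetches
glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);
glUseProgram(shader_programs[1]);
glBindTextureUnit(0, texture);
glDrawArrays(GL_TRIANGLES, 0, 6);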
Also, texture takes normalized texture coordinates. imageStore takes integer pixel coordinates. Unless that texture is a rectangle texture (and it's not, since you used sampler2D), it is impossible to pass the exact same coordinate to both imageStore and texture.
Therefore, either your pixels are being written to the wrong location, or your texture is being sampled from the wrong location. Either way, there's a clear miscommunication. Assuming that vs_uv_out really is non-normalized, then you should either use texelFetch or you should normalize it. Fortunately, you're using OpenGL 4.5, so that ought to be fairly simple:
ivec2 size = textureSize(tex, 0);       // textureSize() requires an explicit LOD
vec2 texCoord = vs_uv_out / vec2(size);
out_color = texture(tex, texCoord);

Trivial OpenGL Shader Storage Buffer Object (SSBO) not working

I am trying to figure out how SSBO works with a very basic example. The vertex shader:
#version 430
layout(location = 0) in vec2 Vertex;
void main() {
    gl_Position = vec4(Vertex, 0.0, 1.0);
}
And the fragment shader:
#version 430
layout(std430, binding = 2) buffer ColorSSBO {
    vec3 color;
};
void main() {
    gl_FragColor = vec4(color, 1.0);
}
I know they work because if I replace vec4(color, 1.0) with vec4(1.0, 1.0, 1.0, 1.0) I see a white triangle in the center of the screen.
I initialize and bind the SSBO with the following code:
GLuint ssbo;
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
float color[] = {1.f, 1.f, 1.f};
glBufferData(GL_SHADER_STORAGE_BUFFER, 3*sizeof(float), color, GL_DYNAMIC_COPY);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
What is wrong here?
My guess is that you are missing the SSBO binding at render time. In your example, you upload the content and bind the buffer base immediately afterwards, during initialization, which is not enough on its own. In other words, the following line in your example:
...
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, ssbo);
...
must be in effect when you render, for example:
...
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, ssbo);
/*
Your render calls and other bindings here.
*/
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
...
Without this, your shader (theoretically) will not be able to see the content.
In addition, as Andon M. Coleman has suggested, you have to account for padding when declaring arrays of elements (e.g., use vec4 instead of vec3: under std430, an array of vec3 has the stride of a vec4). If you don't, it will appear to work but produce strange results because of this.
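Putting both points together, a corrected sketch of the setup and the per-frame draw (names taken from the question; the extra float assumes the shader member becomes a vec4):
GLuint ssbo;
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
float color[] = {1.f, 1.f, 1.f, 1.f}; // padded to a vec4-sized element
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(color), color, GL_DYNAMIC_COPY);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);

// every frame, before the draw call:
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, ssbo);
glDrawArrays(GL_TRIANGLES, 0, 3);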
The following two links have helped me out understanding the meaning of an SSBO and how to handle them in your code:
https://www.khronos.org/opengl/wiki/Shader_Storage_Buffer_Object
http://www.geeks3d.com/20140704/tutorial-introduction-to-opengl-4-3-shader-storage-buffers-objects-ssbo-demo/
I hope this helps anyone facing similar issues!
P.S.: I know this post is old, but I wanted to contribute.
When drawing a triangle, three points are necessary, and three separate sets of red/green/blue values are required, one for each point. You are only putting one set into the shader storage buffer. For the other two points, the value of color drops to the default, which is black (0.0, 0.0, 0.0). If you don't have blending enabled, it is likely that the triangle is being painted completely black because two of its vertices are black.
Try putting two more sets of red/green/blue values into the storage buffer to see if it will load them as color values for the other two points.
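A sketch of that suggestion (hypothetical values, one RGB set per vertex; the shader-side declaration would need to become an array):
float colors[] = {
    1.f, 0.f, 0.f, // vertex 0
    0.f, 1.f, 0.f, // vertex 1
    0.f, 0.f, 1.f, // vertex 2
};
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(colors), colors, GL_DYNAMIC_COPY);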

Does the draw order affect objects' position in depth? (images included)

I have a few objects in the scene, and even if I specify that object A has y = 10 (the highest object), from the TOP camera I can see the bottom objects through object A. Here is an image from my scene.
And only today I found an interesting property: the draw order of models seems to matter (I may be wrong). Here is another image where I change the draw order of "ship1". Attention: "ship1" is way below my scene. If I call ship1.draw() first, the ship disappears (correct), but if I call ship1.draw() last, it appears on top (incorrect).
Video: Opengl Depth Problem video
Q1) Does the draw order always matter?
Q2) How do I fix this? Should I change the draw order every time I change the camera position?
Edit: I also compared my Perspective Projection class with the glm library, just to be sure that the problem isn't my projection matrix. Everything is correct.
Edit1: I have my project on git: Arkanoid git repository (Windows; the project is ready to run on any computer with VS installed)
Edit2: I don't use normals or textures. Just vertices and indices.
Edit3: Is there a problem if every object in the scene uses (shares) vertices from the same file?
Edit4: I also changed my Perspective Projection values. I had the near plane at 0.0f; now I have near = 20.0f, far = 500.0f, angle = 60°. But nothing changes: the view does, but the depth doesn't. =/
Edit5: Here are my Vertex and Fragment shaders.
Edit6: Contact me any time, I am here all day, so ask me anything. At the moment I am rewriting the whole project from scratch. I have two cubes which render well, one in front of the other. I have already added my classes for the camera, projections, and a handler for shaders. Moving on to the class which creates and draws objects.
// Vertex shader (the #version directive was missing; it must match the fragment shader's)
#version 330 core
in vec4 in_Position;
out vec4 color;
uniform mat4 Model;
uniform mat4 View;
uniform mat4 Projection;
void main(void)
{
    color = in_Position;
    gl_Position = Projection * View * Model * in_Position;
}
// Fragment shader
#version 330 core
in vec4 color;
out vec4 out_Color;
void main(void)
{
    out_Color = color;
}
Some code:
void setupOpenGL() {
    std::cerr << "CONTEXT: OpenGL v" << glGetString(GL_VERSION) << std::endl;
    glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    glDepthMask(GL_TRUE);
    glDepthRange(0.0, 1.0);
    glClearDepth(1.0);
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);
    glFrontFace(GL_CCW);
}
void display()
{
    ++FrameCount;
    glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    renderScene();
    glutSwapBuffers();
}
void renderScene()
{
    wallNorth.draw(shader);
    obstacle1.draw(shader);
    wallEast.draw(shader);
    wallWest.draw(shader);
    ship1.draw(shader);
    plane.draw(shader);
}
I have cloned the repository you linked to see if the issue was located somewhere else. In your most recent version the Object3D::draw function looks like this:
glBindVertexArray(this->vaoID);
glUseProgram(shader.getProgramID());
glUniformMatrix4fv(this->currentshader.getUniformID_Model(), 1, GL_TRUE, this->currentMatrix.getMatrix()); // PPmat is the identity matrix
glDrawElements(GL_TRIANGLES, 40, GL_UNSIGNED_INT, (GLvoid*)0);
glBindVertexArray(0);
glUseProgram(0);
glClear(GL_DEPTH_BUFFER_BIT); // <<< clears the current depth buffer
The last line clears the depth buffer after each object that is drawn, meaning that the next object drawn is not occluded properly. You should only clear the depth buffer once every frame.
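A sketch of the fix: drop the clear from Object3D::draw and rely on display(), which already clears both buffers once per frame:
glBindVertexArray(this->vaoID);
glUseProgram(shader.getProgramID());
glUniformMatrix4fv(this->currentshader.getUniformID_Model(), 1, GL_TRUE, this->currentMatrix.getMatrix());
glDrawElements(GL_TRIANGLES, 40, GL_UNSIGNED_INT, (GLvoid*)0);
glBindVertexArray(0);
glUseProgram(0);
// no glClear(GL_DEPTH_BUFFER_BIT) here; display() clears once per frame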

OpenGL mapping texture to sphere

I have an OpenGL program in which I want to texture a sphere with a bitmap of Earth. I prepared the mesh in Blender and exported it to an OBJ file. The program loads the appropriate mesh data (vertices, UVs and normals) and the bitmap properly; I have checked this by texturing a cube with a bone bitmap.
My program textures the sphere, but incorrectly (or at least not in the way I expect). Each triangle of the sphere contains a deformed copy of the bitmap. I've checked the bitmap and the UVs, and they seem to be OK. I've tried many bitmap sizes (powers of 2, multiples of 2, etc.).
Here's the texture:
Screenshot of my program (as if it ignored my UV coords):
I mapped the UVs in Blender this way:
Code setting up the texture after loading it (apart from the code adding the texture to the VBO, which I think is OK):
GLuint texID;
glGenTextures(1, &texID);
glBindTexture(GL_TEXTURE_2D, texID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, (GLvoid*)&data[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); // GL_CLAMP is not a valid wrap mode in the core profile
Is any extra code needed to map this texture properly?
[Edit]
Initializing the textures (the earlier code is in the LoadTextureBMP_custom() function):
bool Program::InitTextures(string texturePath)
{
    textureID = LoadTextureBMP_custom(texturePath);
    GLuint TBO_ID;
    glGenBuffers(1, &TBO_ID);
    glBindBuffer(GL_ARRAY_BUFFER, TBO_ID);
    glBufferData(GL_ARRAY_BUFFER, uv.size()*sizeof(vec2), &uv[0], GL_STATIC_DRAW);
    return true;
}
My main loop:
bool Program::MainLoop()
{
    bool done = false;
    mat4 projectionMatrix;
    mat4 viewMatrix;
    mat4 modelMatrix;
    mat4 MVP;
    Camera camera;
    shader.SetShader(true);
    while (!done)
    {
        if (glfwGetKey(GLFW_KEY_ESC))
            done = true;
        if (!glfwGetWindowParam(GLFW_OPENED))
            done = true;
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // Matrix transformations start here
        camera.UpdateCamera();
        modelMatrix = mat4(1.0f);
        viewMatrix = camera.GetViewMatrix();
        projectionMatrix = camera.GetProjectionMatrix();
        MVP = projectionMatrix * viewMatrix * modelMatrix;
        // End of transformations
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, textureID);
        shader.SetShaderParameters(MVP);
        SetOpenGLScene(width, height);
        glEnableVertexAttribArray(0); // expose the vertex shader input => vertexPosition_modelspace
        glBindBuffer(GL_ARRAY_BUFFER, VBO_ID);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
        glEnableVertexAttribArray(1);
        glBindBuffer(GL_ARRAY_BUFFER, TBO_ID);
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
        glDrawArrays(GL_TRIANGLES, 0, vert.size());
        glDisableVertexAttribArray(0);
        glDisableVertexAttribArray(1);
        glfwSwapBuffers();
    }
    shader.SetShader(false);
    return true;
}
VS:
#version 330
layout(location = 0) in vec3 vertexPosition;
layout(location = 1) in vec2 vertexUV;
out vec2 UV;
uniform mat4 MVP;
void main()
{
    vec4 v = vec4(vertexPosition, 1.0f);
    gl_Position = MVP * v;
    UV = vertexUV;
}
FS:
#version 330
in vec2 UV;
out vec4 color;
uniform sampler2D texSampler; // texture handle
void main()
{
    color = texture(texSampler, UV);
}
I haven't done any professional GL programming, but I've been working with 3D software quite a lot.
- your UVs are most likely bad
- your texture is a bad fit to project onto a sphere
- considering the UVs are bad, you might want to check your normals as well
- consider an icosphere instead of a regular UV sphere to make more efficient use of polygons
You are currently using a flat texture with flat mapping, which may give you very ugly results, since you will have very low resolution on the "outer" perimeter and most likely a nasty seam artifact where the two projections meet if you, say, rotate the planet.
Note that you don't have to have any particular UV map; it just needs to be consistent with the geometry, which it doesn't look like it is right now. The spherical mapping will take care of the rest. You could probably get away with a cylindrical map as well, since most Earth textures are in a suitable projection.
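For reference, a minimal sketch of such a spherical (equirectangular) lookup computed in the fragment shader from the unit direction, in case fixing the exported UVs proves difficult (sphericalUV is a hypothetical helper; assumes an equirectangular Earth texture):
const float PI = 3.14159265;
vec2 sphericalUV(vec3 n) // n: normalized direction from the sphere center
{
    float u = atan(n.z, n.x) / (2.0 * PI) + 0.5;      // longitude -> [0, 1]
    float v = asin(clamp(n.y, -1.0, 1.0)) / PI + 0.5; // latitude  -> [0, 1]
    return vec2(u, v);
}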
Finally, I've got the answer. The error was here:
bool Program::InitTextures(string texturePath)
{
    textureID = LoadTextureBMP_custom(texturePath);
    // GLuint TBO_ID; _ERROR_
    glGenBuffers(1, &TBO_ID);
    glBindBuffer(GL_ARRAY_BUFFER, TBO_ID);
    glBufferData(GL_ARRAY_BUFFER, uv.size()*sizeof(vec2), &uv[0], GL_STATIC_DRAW);
    return true;
}
Here is the relevant part of the Program class declaration:
class Program
{
private:
    Shader shader;
    GLuint textureID;
    GLuint VAO_ID;
    GLuint VBO_ID;
    GLuint TBO_ID; // member hidden by the local variable declared in InitTextures()
    ...
};
I had erroneously declared a local TBO_ID that hid the TBO_ID in class scope. The UVs were generated with crummy precision and the seams are horrible, but they weren't the problem.
I have to admit that the information I supplied was too limited for anyone to help effectively; I should have posted all the code of the Program class. Thanks to everybody who tried.

Pointers on modern OpenGL shadow cubemapping?

Background
I am working on a 3D game using C++ and modern OpenGL (3.3). I am now working on the lighting and shadow rendering, and I've successfully implemented directional shadow mapping. After reading over the requirements for the game I have decided that I'd need point light shadow mapping. After doing some research, I discovered that to do omnidirectional shadow mapping I would do something similar to directional shadow mapping, but with a cubemap instead.
I have no previous knowledge of cubemaps but my understanding of them is that a cubemap is six textures, seamlessly attached.
I did some looking around but unfortunately I struggled to find a definitive tutorial on the subject for modern OpenGL. I look for tutorials first that explain it from start to finish, because I seriously struggle to learn from snippets of source code or bare concepts, but I tried.
Current understandings
Here is my general understanding of the idea, minus the technicalities. Please correct me.
- For each point light, a framebuffer is set up, like in directional shadow mapping
- A single cubemap texture is then generated and bound with glBindTexture(GL_TEXTURE_CUBE_MAP, shadowmap)
- The cubemap is set up with the following attributes:
glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
(this is also similar to directional shadowmapping)
Now glTexImage2D() is iterated through six times, once for each face. I do that like this:
for (int face = 0; face < 6; face++) // fill each face of the shadow cubemap
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_DEPTH_COMPONENT32F, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
The texture is attached to the framebuffer with a call to
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, shadowmap, 0);
When the scene is to be rendered, it is rendered in two passes, like directional shadow mapping.
- First of all, the shadow framebuffer is bound and the viewport is adjusted to the size of the shadowmap (1024 by 1024 in this case).
- Culling is set to the front faces with glCullFace(GL_FRONT)
- The active shader program is switched to the vertex and fragment shadow shaders whose sources I provide further down
- The light view matrices for all six views are calculated. I do it by creating a vector of glm::mat4 and push_back()-ing the matrices. Then I render each object into each face, like this:
// Create the six view matrices for all six sides
for (int i = 0; i < renderedObjects.size(); i++) // iterate through all rendered objects
{
    renderedObjects[i]->bindBuffers(); // bind buffers for rendering with it
    glm::mat4 depthModelMatrix = renderedObjects[i]->getModelMatrix(); // set up model matrix
    for (int face = 0; face < 6; face++) // draw for each side of the light (renamed from i to avoid shadowing the outer loop variable)
    {
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, shadowmap, 0);
        glClear(GL_DEPTH_BUFFER_BIT); // clear depth buffer
        // Send MVP for shadow map
        glm::mat4 depthMVP = depthProjectionMatrix * depthViewMatrices[face] * depthModelMatrix;
        glUniformMatrix4fv(glGetUniformLocation(shadowMappingProgram, "depthMVP"), 1, GL_FALSE, glm::value_ptr(depthMVP));
        glUniformMatrix4fv(glGetUniformLocation(shadowMappingProgram, "lightViewMatrix"), 1, GL_FALSE, glm::value_ptr(depthViewMatrices[face]));
        glUniformMatrix4fv(glGetUniformLocation(shadowMappingProgram, "lightProjectionMatrix"), 1, GL_FALSE, glm::value_ptr(depthProjectionMatrix));
        glDrawElements(renderedObjects[i]->getDrawType(), renderedObjects[i]->getElementSize(), GL_UNSIGNED_INT, 0);
    }
}
The default framebuffer is bound, and the scene is drawn normally.
Issue
Now, to the shaders. This is where my understanding runs dry. I am completely unsure of what I should do, and my research seems to conflict with itself, because it's for different versions. I ended up blindly copying and pasting code from random sources, hoping it'd achieve something other than a black screen. I know this is terrible, but there don't seem to be any clear definitions of what to do. What spaces do I work in? Do I even need a separate shadow shader, like I used in directional shadow mapping? What the hell do I use as the type for a shadow cubemap? samplerCube? samplerCubeShadow? How do I sample said cubemap properly? I hope that someone can clear it up for me and provide a nice explanation.
My current understanding of the shader part is:
- When the scene is being rendered into the cubemap, the vertex shader simply takes the depthMVP uniform I calculated in my C++ code and transforms the input vertices by it.
- The fragment shader of the cubemap pass simply assigns the single out value to gl_FragCoord.z. (This part is unchanged from when I implemented directional shadow mapping. I assumed it would be the same for cubemapping because the shaders don't even interact with the cubemap - OpenGL simply renders the output from them to the cubemap, right? Because it's a framebuffer?)
- The vertex shader for the normal rendering is unchanged.
- In the fragment shader for normal rendering, the vertex position is transformed into the light's space with the light's projection and view matrices.
- That's somehow used in the cubemap texture lookup. ???
- Once the depth has been retrieved using magical means, it is compared to the distance from the light to the vertex, much like directional shadow mapping. If it's less, that point must be shadowed, and vice versa.
It's not much of an understanding. I go blank as to how the vertices are transformed and used to look up the cubemap, so I'm going to paste the source for my shaders, in the hope that people can clarify this. Please note that a lot of this code is blind copying and pasting; I haven't altered anything, so as not to jeopardise any understanding.
Shadow vertex shader:
#version 150
in vec3 position;
uniform mat4 depthMVP;
void main()
{
    gl_Position = depthMVP * vec4(position, 1);
}
Shadow fragment shader:
#version 150
out float fragmentDepth;
void main()
{
    fragmentDepth = gl_FragCoord.z;
}
Standard vertex shader:
#version 150
in vec3 position;
in vec3 normal;
in vec2 texcoord;
uniform mat3 modelInverseTranspose;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
out vec3 fragnormal;
out vec3 fragnormaldirection;
out vec2 fragtexcoord;
out vec4 fragposition;
out vec4 fragshadowcoord;
void main()
{
    fragposition = modelMatrix * vec4(position, 1.0);
    fragtexcoord = texcoord;
    fragnormaldirection = normalize(modelInverseTranspose * normal);
    fragnormal = normalize(normal);
    fragshadowcoord = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
}
Standard fragment shader:
#version 150
out vec4 outColour;
in vec3 fragnormaldirection;
in vec2 fragtexcoord;
in vec3 fragnormal;
in vec4 fragposition;
in vec4 fragshadowcoord;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrixInversed;
uniform mat4 lightViewMatrix;
uniform mat4 lightProjectionMatrix;
uniform sampler2D tex;
uniform samplerCubeShadow shadowmap;
float VectorToDepthValue(vec3 Vec)
{
    vec3 AbsVec = abs(Vec);
    float LocalZcomp = max(AbsVec.x, max(AbsVec.y, AbsVec.z));
    const float f = 2048.0;
    const float n = 1.0;
    float NormZComp = (f + n) / (f - n) - (2 * f * n) / (f - n) / LocalZcomp;
    return (NormZComp + 1.0) * 0.5;
}
float ComputeShadowFactor(samplerCubeShadow ShadowCubeMap, vec3 VertToLightWS)
{
    float ShadowVec = texture(ShadowCubeMap, vec4(VertToLightWS, 1.0));
    if (ShadowVec + 0.0001 > VectorToDepthValue(VertToLightWS)) // to avoid self-shadowing, I guess
        return 1.0;
    return 0.7;
}
void main()
{
    vec3 light_position = vec3(0.0, 0.0, 0.0);
    vec3 VertToLightWS = light_position - fragposition.xyz;
    outColour = texture(tex, fragtexcoord) * ComputeShadowFactor(shadowmap, VertToLightWS);
}
I can't remember where the ComputeShadowFactor and VectorToDepthValue function code came from, because I was researching it on my laptop, which I can't get to right now, but this is the result of those shaders:
It is a small square of unshadowed space surrounded by shadowed space.
I am obviously doing a lot wrong here, probably centered on my shaders, due to a lack of knowledge on the subject, because I find it difficult to learn from anything but tutorials, and I am very sorry for that. I am at a loss, and it would be wonderful if someone could shed light on this with a clear explanation of what I am doing wrong, why it's wrong, how I can fix it, and maybe even some code. I think the issue may be that I am working in the wrong spaces.
I hope to provide an answer to some of your questions, but first some definitions are required:
What is a cubemap?
It is a map from a direction vector to a pair of [face, 2D coordinates on that face], obtained by projecting the direction vector onto a hypothetical cube.
What is an OpenGL cubemap texture?
It is a set of six "images".
What is a GLSL cubemap sampler?
It is a sampler primitive from which cubemap sampling can be done. This means that it is sampled using a direction vector instead of the usual texture coordinates. The hardware then projects the direction vector onto a hypothetical cube and uses the resulting [face, 2D texture coordinate] pair to sample the right "image" at the right 2D position.
What is a GLSL shadow sampler?
It is a sampler primitive that is bound to a texture containing NDC-space depth values and that, when sampled using the shadow-specific sampling functions, returns a "comparison" between an NDC-space depth (in the same space as the shadow map, obviously) and the NDC-space depth stored inside the bound texture. The depth to compare against is specified as an additional element in the texture coordinates when calling the sampling function. Note that shadow samplers are provided for ease of use and speed, but it is always possible to do the comparison "manually" in the shader.
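As an illustration, a shadow-sampler lookup might look like this (a sketch; shadow_map, direction and reference_depth are placeholder names):
uniform samplerCubeShadow shadow_map;
// ...
// the 4th component is the reference depth to compare against;
// the result is the comparison outcome (typically 0.0 or 1.0)
float lit = texture(shadow_map, vec4(direction, reference_depth));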
Now, for your questions:
OpenGL simply renders [...] to the cubemap, right?
No, OpenGL renders to a set of targets in the currently bound framebuffer.
In the case of cubemaps, the usual way to render to them is (see the sketch after this list):
- to create them and attach each of their six "images" to the same framebuffer (at different attachment points, obviously)
- to enable only one of the targets at a time (so you render to each cubemap face individually)
- to render what you want in the cubemap face (possibly using face-specific "view" and "projection" matrices)
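In code, that might look roughly like this (a sketch of the simpler re-attach variant from the question, which achieves the same thing; cubemap, depthViewMatrices and drawScene are placeholders):
for (int face = 0; face < 6; ++face)
{
    // point the FBO's depth attachment at the current face
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, cubemap, 0);
    glClear(GL_DEPTH_BUFFER_BIT);
    // set the face-specific view matrix, then draw the scene
    drawScene(depthViewMatrices[face]);
}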
Point-light shadow maps
In addition to everything said about cubemaps, there are a number of problems in using them to implement point-light shadow mapping, so the hardware depth comparison is rarely used.
Instead, common practice is the following:
- instead of writing NDC-space depth, write the radial distance from the point light
- when querying the shadow map (see the sample code at the bottom):
  - do not use hardware depth comparisons (use samplerCube instead of samplerCubeShadow)
  - transform the point to be tested into "cube space" (which does not include projection at all)
  - use the "cube-space" vector as the lookup direction to sample the cubemap
  - compare the radial distance sampled from the cubemap with the radial distance of the tested point
Sample code
// sample radial distance from the cubemap
float radial_dist = texture(my_cubemap, cube_space_vector).x;
// compare against test point radial distance
bool shadowed = length(cube_space_vector) > radial_dist;
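For completeness, the writing side can be as simple as this fragment shader (a sketch; it assumes the radial distance is written to a single-channel color attachment such as GL_R32F rather than the depth attachment, and names like frag_world_position are placeholders):
#version 150
in vec3 frag_world_position; // world-space position from the vertex shader
uniform vec3 light_position; // world-space light position
out float radial_distance;
void main()
{
    // store radial distance from the light instead of NDC-space depth
    radial_distance = length(frag_world_position - light_position);
}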