OpenGL: Retrieving floating point computation results from the GPU

In a nutshell
In a single shader pass I'm applying a couple of different model transformation matrices in the Vertex Shader and writing the results into different position vectors.
Then I do some simple arithmetic with the different results in the Vertex Shader.
//Vertex Shader
in layout(location=0) vec3 position;
in layout(location=1) vec3 vertexColor;
...
out vec3 result1;
out vec3 result2;
out vec3 result3;
out vec3 color;

void main()
{
    gl_Position = transformationMatrix * vec4(position, 1.0);

    vec4 pos1 = transformationMatrix1 * vec4(position, 1.0);
    vec4 pos2 = transformationMatrix2 * vec4(position, 1.0);
    ...

    result1 = pos1.xyz * pos2.xyz / 0.012313879834;
    result2 = (pos2.xyz + pos1.xyz) * 1.5;
    result3 = ....;

    color = vertexColor;
}
I want to pass the results of that math through the fragment shader (so the values are interpolated nicely, like colors are) ...
// Fragment shader
in vec3 color;
in vec3 result1;
in vec3 result2;
in vec3 result3;

layout(location = 0) out vec4 theColor;
layout(location = 1) out vec3 output1;
layout(location = 2) out vec3 output2;
layout(location = 3) out vec3 output3;

void main()
{
    theColor = vec4(color, 1.0);
    output1 = result1;
    output2 = result2;
}
... to finally read them back, so that I can continue working with the data on the CPU. I need the data read back to be precise (32-bit float) and ideally not normalized to [0, 1].
Regarding this I have a couple of questions:
Initially I thought it would be possible to do this using the color attachments (GL_COLOR_ATTACHMENTi), but I haven't been able to figure out how. Is it possible? If so, how would I go about it?
What would a solution using the image load/store functionality available since OpenGL 4.2 look like? Are there potential pitfalls I need to be aware of?
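For reference, the image load/store route might look roughly like the sketch below (illustrative only; resultImage1, textureId and data are assumed names, and the texture would have to be allocated as GL_RGBA32F and bound to GL_TEXTURE_2D before the glGetTexImage call). The main pitfalls are that the texture must be bound as an image with glBindImageTexture, a glMemoryBarrier is required before reading the results back, and overlapping fragments writing to the same pixel are not ordered, so you have to handle that case yourself.

// Fragment shader (GLSL 4.20+):
layout(binding = 0, rgba32f) writeonly uniform image2D resultImage1;
in vec3 result1;
void main()
{
    // Write the interpolated value at this fragment's pixel coordinate.
    imageStore(resultImage1, ivec2(gl_FragCoord.xy), vec4(result1, 1.0));
}

// Client side:
glBindImageTexture(0, textureId, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);
// ... draw ...
glMemoryBarrier(GL_TEXTURE_UPDATE_BARRIER_BIT); // make the image writes visible to the readback
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, data.data());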
EDIT: I got it to work with the Color Attachments after all. See below for a solution that worked for me.

I found a solution using the color attachments after all. The client-side code looks like the following; the shaders didn't need any changes, IIRC:
void initializeGL()
{
    ....
    // Generate textures.
    glGenTextures(3, textureIds.data());
    for (GLuint id : textureIds)
    {
        // Bind texture and specify size and nature ...
        glBindTexture(GL_TEXTURE_2D, id);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, m_width, m_height, 0, GL_RGB, GL_FLOAT, nullptr);
    }
    glBindTexture(GL_TEXTURE_2D, 0);
    ....
    // Attach texture images to the framebuffer.
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, otherTextureId, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, textureIds[0], 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, textureIds[1], 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT3, GL_TEXTURE_2D, textureIds[2], 0);
    // Specify the color buffers that will be written to.
    const GLenum buffers[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2, GL_COLOR_ATTACHMENT3};
    glDrawBuffers(4, buffers);
    ....
}
void renderGL()
{
    ....
    // Copy the images into the corresponding buffers in main memory.
    glReadBuffer(GL_COLOR_ATTACHMENT1);
    glReadPixels(0, 0, m_width, m_height, GL_RGB, GL_FLOAT, texture1.data());
    glReadBuffer(GL_COLOR_ATTACHMENT2);
    glReadPixels(0, 0, m_width, m_height, GL_RGB, GL_FLOAT, texture2.data());
    glReadBuffer(GL_COLOR_ATTACHMENT3);
    glReadPixels(0, 0, m_width, m_height, GL_RGB, GL_FLOAT, texture3.data());
    ....
}
Any feedback would still be appreciated.
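One possible refinement, in case the synchronous glReadPixels calls turn out to stall the pipeline: reading into a pixel buffer object lets the call return immediately and defers the actual copy until the buffer is mapped. A minimal sketch, assuming a member GLuint m_pbo (one such buffer per attachment):

// Initialization (once):
glGenBuffers(1, &m_pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, m_pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, m_width * m_height * 3 * sizeof(GLfloat), nullptr, GL_STREAM_READ);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

// Readback (per frame):
glBindBuffer(GL_PIXEL_PACK_BUFFER, m_pbo);
glReadBuffer(GL_COLOR_ATTACHMENT1);
glReadPixels(0, 0, m_width, m_height, GL_RGB, GL_FLOAT, nullptr); // writes into the bound PBO
// ... later, when the data is actually needed on the CPU:
const GLfloat* mapped = static_cast<const GLfloat*>(glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY));
// ... copy/use the mapped data ...
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);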

Related

How to bind a GL_TEXTURE_2D_ARRAY to a framebuffer on GL_COLOR_ATTACHMENT1?

Edit: see Rabbid76's answer; the question was not really about GL_TEXTURE_2D_ARRAY, only about activating the framebuffer color attachments!
I'm having trouble updating a shader that used to output into a single texture so that it outputs into multiple textures.
Here's the simplified code. I'll include everything I find relevant; feel free to ask for other parts of the code if they are important.
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA32F, x, y, 2);
glGenFramebuffers(1, &FBO);
glBindFramebuffer(GL_FRAMEBUFFER, FBO);
glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, tex, 0, 0);
glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, tex, 0, 1);
The fragment shader that writes:
#version 330
layout(location = 0) out vec4 col1;
layout(location = 1) out vec4 col2;

void main()
{
    col1 = vec4(1.0);
    col2 = vec4(2.0);
}
The other shader that uses the result:
#version 330 core
uniform sampler2DArray tex;
in vec2 Coord;

void main()
{
    vec4 val = texture(tex, vec3(Coord, 0));
    val += texture(tex, vec3(Coord, 1));
}
The problem is that col1 is written correctly to location 0 (regardless of texture layer; glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, texDefo, 0, 1); works as well), but I can't get anything written to location 1 (GL_COLOR_ATTACHMENT1).
Am I missing something here?
Extensive tests:
From what I could narrow it down to, it really looks like layout(location = 1) doesn't work the way I would have expected. With the code provided I get
val = col1 when layout(location = 0) out vec4 col1;
val = col2 when layout(location = 0) out vec4 col2;
both regardless of which texture layer I bound the GL_COLOR_ATTACHMENTi to, i.e. both layers seem to work in the GL_TEXTURE_2D_ARRAY.
You need to specify the buffers to be drawn into with glDrawBuffers:
GLenum drawBuffers[]{ GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, drawBuffers);
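Without that call, only draw buffer 0 is set to GL_COLOR_ATTACHMENT0 and the remaining draw buffers default to GL_NONE, so anything written to layout(location = 1) is silently discarded. A sketch of where the call fits, using the names from the question:

glBindFramebuffer(GL_FRAMEBUFFER, FBO);
glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, tex, 0, 0);
glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, tex, 0, 1);
GLenum drawBuffers[]{ GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, drawBuffers);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the incomplete framebuffer here
}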

Deferred Rendering not Displaying the GBuffer Textures

I'm trying to implement deferred rendering in an engine I'm developing as a personal learning project, and I can't understand what I'm doing wrong when it comes to rendering all the textures in the GBuffer to check whether the implementation is okay.
The thing is that I currently have a framebuffer with 3 color attachments for the different textures of the GBuffer (color, normal and position), which I initialize as follows:
glCreateFramebuffers(1, &id);
glBindFramebuffer(GL_FRAMEBUFFER, id);

std::vector<uint> textures;
textures.resize(3);
glCreateTextures(GL_TEXTURE_2D, 3, textures.data());

for(size_t i = 0; i < 3; ++i)
{
    glBindTexture(GL_TEXTURE_2D, textures[i]);

    if(i == 0) // For Color Buffer
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    else
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, nullptr);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, textures[i], 0);
}

GLenum color_buffers[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
glDrawBuffers((GLsizei)textures.size(), color_buffers);

uint depth_texture;
glCreateTextures(GL_TEXTURE_2D, 1, &depth_texture);
glBindTexture(GL_TEXTURE_2D, depth_texture);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH24_STENCIL8, width, height);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, depth_texture, 0);

bool fbo_status = glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE;
ASSERT(fbo_status, "Framebuffer Incompleted!");
glBindFramebuffer(GL_FRAMEBUFFER, 0);
This is not reporting any errors and it seems to work, since the framebuffer of the forward renderer renders properly. Then, when rendering, I run the following code after binding the framebuffer and clearing the color and depth buffers:
camera_buffer->Bind();
camera_buffer->SetData("ViewProjection", glm::value_ptr(viewproj_mat));
camera_buffer->SetData("CamPosition", glm::value_ptr(glm::vec4(view_position, 0.0f)));
camera_buffer->Unbind();

for(Entity& entity : scene_entities)
{
    shader->Bind();

    Texture* texture = entity.GetTexture();
    BindTexture(0, texture);

    shader->SetUniformMat4("u_Model", entity.transform);
    shader->SetUniformInt("u_Albedo", 0);
    shader->SetUniformVec4("u_Material.AlbedoColor", entity.AlbedoColor);
    shader->SetUniformFloat("u_Material.Smoothness", entity.Smoothness);

    glBindVertexArray(entity.VertexArray);
    glDrawElements(GL_TRIANGLES, entity.VertexArray.index_buffer.count, GL_UNSIGNED_INT, nullptr);

    // Shader, VArray and Textures Unbindings
}
With this code I manage to display the 3 created textures using the ImGui::Image function, switching the texture index between 0, 1 and 2 like this:
ImGui::Image((ImTextureID)(fbo->textures[0]), viewport_size, ImVec2(0, 1), ImVec2(1, 0));
Now, the color texture (at index 0) works perfectly, as the next image shows:
But when rendering the normals and position textures (indices 1 and 2), I get no result:
Does anybody see what I'm doing wrong? I've been at this for hours and hours and I cannot see it. I ran this through RenderDoc and couldn't see anything wrong; the textures displayed in RenderDoc are the same as in the engine.
The vertex shader I use when rendering the entities is the following:
layout(location = 0) in vec3 a_Position;
layout(location = 1) in vec2 a_TexCoord;
layout(location = 2) in vec3 a_Normal;

out IBlock
{
    vec2 TexCoord;
    vec3 FragPos;
    vec3 Normal;
} v_VertexData;

layout(std140, binding = 0) uniform ub_CameraData
{
    mat4 ViewProjection;
    vec3 CamPosition;
};

uniform mat4 u_ViewProjection = mat4(1.0);
uniform mat4 u_Model = mat4(1.0);

void main()
{
    vec4 world_pos = u_Model * vec4(a_Position, 1.0);

    v_VertexData.TexCoord = a_TexCoord;
    v_VertexData.FragPos = world_pos.xyz;
    v_VertexData.Normal = transpose(inverse(mat3(u_Model))) * a_Normal;

    gl_Position = ViewProjection * u_Model * vec4(a_Position, 1.0);
}
And the fragment shader is the following; they are both pretty simple:
layout(location = 0) out vec4 gBuff_Color;
layout(location = 1) out vec3 gBuff_Normal;
layout(location = 2) out vec3 gBuff_Position;

in IBlock
{
    vec2 TexCoord;
    vec3 FragPos;
    vec3 Normal;
} v_VertexData;

struct Material
{
    float Smoothness;
    vec4 AlbedoColor;
};

uniform Material u_Material = Material(1.0, vec4(1.0));
uniform sampler2D u_Albedo, u_Normal;

void main()
{
    gBuff_Color = texture(u_Albedo, v_VertexData.TexCoord) * u_Material.AlbedoColor;
    gBuff_Normal = normalize(v_VertexData.Normal);
    gBuff_Position = v_VertexData.FragPos;
}
It is not clear from the question what exactly is happening here, as a lot of GL state - both at the time of rendering into the g-buffer and at the time the g-buffer texture is rendered for visualization - is simply unknown. However, from the images given in the question, one cannot conclude that the actual color output for attachments 1 and 2 is not working.
One issue which comes to mind is alpha blending. The per-fragment operations that run after the fragment shader always work with RGBA values - although the value of the A channel only matters if you enable blending and use a blend function which somehow depends on the source alpha.
If you declare a custom fragment shader output as float, vec2 or vec3, the remaining components stay undefined (an undefined value, not undefined behavior). This does not pose a problem unless some other operation you do depends on those values.
What we also have here is a GL_RGBA16F output format (which is the right choice, because none of the 3-component RGB formats are required to be color-renderable by the spec).
What might happen here is either:
Alpha blending is already turned on during rendering into the g-buffer. The fragment shader's alpha output happens to be zero, so the output appears 100% transparent and the contents of the texture are not changed.
Alpha blending is not used during rendering into the g-buffer, so the correct contents end up in the texture; the alpha channel just happens to end up all zeros. The texture might then be visualized with alpha blending enabled, ending up as a 100% transparent view.
If it is the first option, turn off blending when rendering into the g-buffer. It would not work with deferred shading anyway. You might still run into the second option then.
If it is the second option, there is no issue at all - the lighting passes which follow will read the data they need (and ultimately, you will want to put useful information into the alpha channel so as not to waste it and to be able to reduce the number of attachments). It is just your visualization (which I assume is for debugging purposes only) that is wrong. You can try to fix the visualization.
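A minimal sketch of those two mitigations (illustrative, based on the code in the question): disable blending for the g-buffer pass, and/or give the alpha channel a defined value by declaring the outputs as vec4.

// CPU side: make sure blending is off while filling the g-buffer.
glDisable(GL_BLEND);
// (or per attachment, GL 3.0+: glDisablei(GL_BLEND, 1); glDisablei(GL_BLEND, 2);)

// GLSL side: declare the output as vec4 and write an explicit alpha
// instead of leaving it undefined:
layout(location = 1) out vec4 gBuff_Normal;
// ... and in main():
// gBuff_Normal = vec4(normalize(v_VertexData.Normal), 1.0);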
As a side note: storing the world-space position in the G-buffer is a huge waste of bandwidth. All you need to reconstruct the world-space position is the depth value and the inverses of your view and projection matrices. Also, storing the world-space position in GL_RGBA16F will very easily run into precision issues if you move your camera away from the world-space origin.
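A rough GLSL sketch of that reconstruction (assumed names: gDepth is the depth texture, uv the screen-space texture coordinate, and u_InvViewProj the inverse of projection * view):

float depth = texture(gDepth, uv).r;                     // depth in [0, 1]
vec4 ndc = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0); // back to normalized device coordinates
vec4 world = u_InvViewProj * ndc;
vec3 worldPos = world.xyz / world.w;                     // perspective divide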

Shadow Map: whole mesh is in shadow, there is no light where it should be according to depth map

This is my first time trying to implement a shadow map using OpenGL and the GLSL shading language.
I think the first pass, where I render to a texture, is correct, but when I compare the depth values everything seems to end up in shadow.
https://www.dropbox.com/s/myxenx9y41yz2fc/Screenshot%202014-12-09%2012.18.53.png?dl=0
My perspective projection matrix looks like this:
FOV = 90
Aspect = according to the program's window size (I also tried different values here)
Near = 2
Far = 10000
Function to initialize the framebuffer:
void OpenGLWin::initDepthMap()
{
    //Framebuffer
    m_glFunctions->glGenFramebuffers(1, &m_frameBuffer);
    m_glFunctions->glBindFramebuffer(GL_FRAMEBUFFER, m_frameBuffer);
    //////////////////////////////////////////////////////////////////////////
    //Texture to render scene to
    m_glFunctions->glGenTextures(1, &m_renderToTexture);
    //Bind created texture to make it current
    m_glFunctions->glBindTexture(GL_TEXTURE_2D, m_renderToTexture);
    //Creates an empty texture of specified size.
    //m_glFunctions->glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 768, 0, GL_RGB, GL_UNSIGNED_BYTE, 0);
    m_glFunctions->glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
    m_glFunctions->glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    m_glFunctions->glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    m_glFunctions->glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    m_glFunctions->glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    m_glFunctions->glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
    m_glFunctions->glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);

    m_glFunctions->glDrawBuffer(GL_NONE);
    m_glFunctions->glReadBuffer(GL_NONE);

    // Always check that our framebuffer is ok
    if (m_glFunctions->glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE){
        qDebug() << "FrameBuffer not OK";
        return;
    }
    m_glFunctions->glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
Draw function for each mesh. The model matrix is passed as an argument from a Transform class draw function:
void Mesh::draw(const Matrix4x4& projection, const Matrix4x4& view, const Matrix4x4& model)
{
    //Shadow map pass 1
    if (m_shadowMapFirstpass){
        //Pass 1 Shaders
        m_glFunctions->glUseProgram(m_depthRTTShaderProgram);

        //Light view matrix
        m_depthMVP = projection*view*model;

        //Get the location of the uniform name mvp
        GLuint depthMVPLocation = m_glFunctions->glGetUniformLocation(m_depthRTTShaderProgram, "depthMVP");
        m_glFunctions->glUniformMatrix4fv(depthMVPLocation, 1, GL_TRUE, &m_depthMVP[0][0]);

        m_shadowMapFirstpass = false;
    }
    //Shadow map pass 2
    else if(m_shadowMapFirstpass == false){
        //Pass 2 Shader
        m_glFunctions->glUseProgram(m_shaderProgram);

        //Gets the model matrix which is then multiplied with view and projection to form the mvp matrix
        Matrix4x4 mvp = projection * view * model;

        //Get the location of the uniform name mvp
        GLuint mvpLocation = m_glFunctions->glGetUniformLocation(m_shaderProgram, "mvp");
        //Send the mvp matrix to the vertex shader
        m_glFunctions->glUniformMatrix4fv(mvpLocation, 1, GL_TRUE, &mvp[0][0]);

        Matrix4x4 depthBiasMVP = m_depthMVP;// biasMatrix*m_depthMVP;
        GLuint depthBiasMVPLocation = m_glFunctions->glGetUniformLocation(m_shaderProgram, "depthBiasMVP");
        m_glFunctions->glUniformMatrix4fv(depthBiasMVPLocation, 1, GL_TRUE, &depthBiasMVP[0][0]);

        m_shadowMapFirstpass = true;
    }

    //Bind this mesh VAO
    m_glFunctions->glBindVertexArray(m_vao);
    //Draw the triangles using the index buffer(EBO)
    glDrawElements(GL_TRIANGLES, m_indices.size(), GL_UNSIGNED_INT, 0);
    //Unbind the VAO
    m_glFunctions->glBindVertexArray(0);
    /////////////////////////////////////////////////////////////////////////////////////////////////////
    //Calls the childrens' update
    if (!m_children.empty())
    {
        for (int i = 0; i < m_children.size(); i++)
        {
            if (m_children[i] != NULL)
            {
                m_children[i]->draw(frustumCheck, projection, view, bvScaleFactor, model);
            }
        }
    }
}
My render loop
void OpenGLWin::paintGL()
{
    // m_glFunctions->glBindFramebuffer(GL_FRAMEBUFFER, m_frameBuffer);
    m_glFunctions->glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_frameBuffer);
    glViewport(0, 0, 1024, 1024);

    // Clear the buffer with the current clearing color
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    //Light View Matrix
    Matrix4x4 lightView;
    lightView.lookAt(Vector3(0, 0, 0), Vector3(0, 0, -1), Vector3(0, 1, 0));

    //Draw scene to Texture
    m_root->draw(m_projection, lightView);
    ///////////////////////////////////////////////////////////////////
    //Draw to real scene
    m_glFunctions->glBindFramebuffer(GL_FRAMEBUFFER, 0);
    // m_glFunctions->glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);

    // Clear the screen
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    //Bind Pass 2 shader
    m_glFunctions->glUseProgram(m_shadowMapShaderProgram->getShaderProgramID());

    GLuint shadowMapLocation = m_glFunctions->glGetUniformLocation(m_shadowMapShaderProgram->getShaderProgramID(), "shadowMap");
    //Shadow Texture
    m_glFunctions->glActiveTexture(GL_TEXTURE0);
    m_glFunctions->glBindTexture(GL_TEXTURE_2D, m_renderToTexture);
    m_glFunctions->glUniform1i(shadowMapLocation, 0);

    //Updates matrices and view matrix for player camera
    m_root->update(m_view);
    //Render scene to main frame buffer
    m_root->draw(m_projection, m_view);
}
Pass 1 Vertex Shader
#version 330 core
//Passthrough vertex shader
uniform mat4 depthMVP;

//Vertex received from the program
layout(location = 0) in vec3 vertexPosition_modelspace;

void main(void)
{
    //Output position of vertex in clip space
    gl_Position = depthMVP * vec4(vertexPosition_modelspace, 1);
}
Pass 1 Fragment Shader
#version 330 core
//Render to texture
// Output data
layout(location = 0) out float depthValue;

void main(void)
{
    depthValue = gl_FragCoord.z;
}
Pass 2 Vertex Shader
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
out vec4 ShadowCoord;

// Values that stay constant for the whole mesh.
uniform mat4 mvp;
uniform mat4 depthBiasMVP;

void main(){
    // Output position of the vertex, in clip space : MVP * position
    gl_Position = mvp * vec4(vertexPosition_modelspace,1);
    ShadowCoord = depthBiasMVP * vec4(vertexPosition_modelspace,1);
}
Pass 2 Fragment Shader
#version 330 core
in vec4 ShadowCoord;

// Output data
layout(location = 0) out vec3 color;

// Values that stay constant for the whole mesh.
uniform sampler2D shadowMap;

void main(){
    float visibility = 1.0;

    vec3 ProjCoords = ShadowCoord.xyz / ShadowCoord.w;
    vec2 UVCoords;
    UVCoords.x = 0.5 * ProjCoords.x + 0.5;
    UVCoords.y = 0.5 * ProjCoords.y + 0.5;
    float z = 0.5 * ProjCoords.z + 0.5;

    float Depth = texture(shadowMap, UVCoords).z;//or x
    if (Depth < (z + 0.00001)){
        visibility = 0.1;
    }
    color = visibility*vec3(1,0,0);
}
Disable texture comparison for one thing. That's only valid when used with sampler2DShadow and you clearly are not using that in your code because your texture coordinates are 2D.
This means replacing the following code:
m_glFunctions->glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
m_glFunctions->glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
With this instead:
m_glFunctions->glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
Likewise, using GL_LINEAR filtering on a non-sampler2DShadow texture is a bad idea. That is going to average the 4 nearest depth values and give you a single depth back. But that's not the proper way to anti-alias shadows; you actually want to average the result of 4 depth tests instead of doing a single test on the average of 4 depths.
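If you do want the hardware to do that averaging of depth-test results for you later, the rough shape of that approach (a sketch, not the poster's code) is to keep GL_LINEAR and the compare mode enabled (GL_COMPARE_REF_TO_TEXTURE in core profile), but sample the map through sampler2DShadow with a 3-component coordinate, so the comparison happens inside the texture lookup and the filtering averages the four comparison results:

uniform sampler2DShadow shadowMap;
// z is the receiver depth in [0, 1]; texture() returns the filtered comparison result.
float lit = texture(shadowMap, vec3(UVCoords, z - 0.0005)); // small bias to avoid acne
float visibility = mix(0.1, 1.0, lit);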

Incorrect texture coordinate calculated for shadow map

I am having problems getting the correct texture coordinate to sample my shadow map. Looking at my code, the problem appears to come from incorrect matrices. This is the fragment shader for the rendering pass where I do shadows:
in vec2 st;

uniform sampler2D colorTexture;
uniform sampler2D normalTexture;
uniform sampler2D depthTexture;
uniform sampler2D shadowmapTexture;

uniform mat4 invProj;
uniform mat4 lightProj;
uniform vec3 lightPosition;

out vec3 color;

void main () {
    vec3 clipSpaceCoords;
    clipSpaceCoords.xy = st.xy * 2.0 - 1.0;
    clipSpaceCoords.z = texture(depthTexture, st).x * 2.0 - 1.0;
    vec4 position = invProj * vec4(clipSpaceCoords,1.0);
    position.xyz /= position.w;
    //At this point, position.xyz seems to be what it should be, the world space coordinates of the pixel. I know this because it works for lighting calculations.

    vec4 lightSpace = lightProj * vec4(position.xyz,1.0);
    //This line above is where I think things go wrong.
    lightSpace.xyz /= lightSpace.w;
    lightSpace.xyz = lightSpace.xyz * 0.5 + 0.5;

    float lightDepth = texture(shadowmapTexture, lightSpace.xy).x;
    //Right here lightDepth seems to be incorrect. The only explanation I can think of for this is if there is a problem in the above calculations leading to lightSpace.xy.

    float shadowFactor = 1.0;
    if(lightSpace.z > lightDepth+0.0005) {
        shadowFactor = 0.2;
    }
    color = vec3(lightDepth);
}
I have removed all the code irrelevant to shadowing from this shader (Lighting, etc). This is the code I use to render the final pass:
glCullFace(GL_BACK);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
postShader->UseShader();
postShader->SetUniform1I("colorTexture", 0);
postShader->SetUniform1I("normalTexture", 1);
postShader->SetUniform1I("depthTexture", 2);
postShader->SetUniform1I("shadowmapTexture", 3);
//glm::vec3 cp = camera->GetPosition();
postShader->SetUniform4FV("invProj", glm::inverse(camera->GetCombinedProjectionView()));
postShader->SetUniform4FV("lightProj", lights[0].camera->GetCombinedProjectionView());
//Again, if I had to guess, these two lines above would be part of the problem.
postShader->SetUniform3F("lightPosition", lights[0].x, lights[0].y, lights[0].z);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, frameBuffer->GetColor());
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, frameBuffer->GetNormals());
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, frameBuffer->GetDepth());
glActiveTexture(GL_TEXTURE3);
glBindTexture(GL_TEXTURE_2D, lights[0].shadowmap->GetDepth());
this->BindPPQuad();
glDrawArrays(GL_TRIANGLES, 0, 6);
In case it is relevant to my problem, here is how I generate the depth framebuffer attachments for the depth and shadow maps:
void FrameBuffer::Init(int textureWidth, int textureHeight) {
    glGenFramebuffers(1, &fbo);

    glGenTextures(1, &depth);
    glBindTexture(GL_TEXTURE_2D, depth);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, textureWidth, textureHeight, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depth, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
Where is the problem in my math or my code, and what can I do to fix it?
After some experimentation, I have found that my problem does not lie in my matrices, but in my clamping. It seems that I get strange values when I use GL_CLAMP or GL_CLAMP_TO_EDGE, but I get almost correct values when I use GL_CLAMP_TO_BORDER. There are more problems, but they do not seem to be matrix related as I thought.
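For reference, a sketch of the wrap setup that tends to behave well for shadow maps (applied while the shadow map texture is bound; a white border makes everything that falls outside the light's frustum count as lit, and GL_CLAMP is deprecated in modern OpenGL anyway):

const float border[] = { 1.0f, 1.0f, 1.0f, 1.0f };
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, border);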

Textures appear pure black, unless I pass an invalid value to texture uniform location

My program has been working perfectly so far, but it turns out that I've been lucky. I began doing some cleanup of the shader, because it was full of experimental stuff, and I had the following line at the end of the fragment shader:
gl_FragColor = final_color * (texture2D(tex, gl_TexCoord[0].st)*1.0 + texture2D(tex2, gl_TexCoord[0].st)*1.0);
I attempted to clean it up and I had the following declared at the top:
uniform sampler2D tex, tex2;
Changing these lines to:
gl_FragColor = final_color * texture2D(tex, gl_TexCoord[0].st);
and
uniform sampler2D tex;
actually broke the program (black screen), even though I am doing
GLuint tex_loc = glGetUniformLocation(shader_prog_id_, "tex");
glUniform1i(tex_loc, texture_id_);
in my main code. I'm sure it's a texture issue and not a GLSL compiler error, because I can add 1.0 to the output and end up with a white silhouette of my mesh.
The strangeness begins when I change the lines in my shader to:
gl_FragColor = final_color * texture2D(tex2, gl_TexCoord[0].st);
and
uniform sampler2D tex2;
but still retrieve the location for tex. The program works as it always has, even though inspecting the value of tex_loc in the debugger indicates an error. I'm not happy doing this, and now that I'm trying to load multiple textures, it will cause bigger headaches down the line.
I'm using VBOs, in interleaved format, to render the geometry. I'm passing in the vertex position, normal and texcoord this way.
There are other questions with the "black texture" issue, but they're using immediate mode calls and setting the wrong texture state. I tried changing the texture unit before supplying the arrays, with no success.
Here is as much relevant code as possible from the main program:
void MeshWidget::draw() {
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -4.0f + zoom_factor_);
    glRotatef(rotX, 1.0f, 0.0f, 0.0f);
    glRotatef(rotY, 0.0f, 1.0f, 0.0f);
    glRotatef(rotZ, 0.0f, 0.0f, 1.0f);

    // Auto centre mesh based on vertex bounds.
    glTranslatef(-x_mid_, -y_mid_, -z_mid_);

    glDrawElements(GL_TRIANGLES, mesh_.num_indices, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0)); //The starting point of the IBO
}

void MeshWidget::openMesh(const string& filename) {
    if (mesh_filename_ != filename) {
        clearMeshData(mesh_);
        glDeleteBuffersARB(1, &VertexVBOID);
        glDeleteBuffersARB(1, &IndexVBOID);

        ReadMsh(mesh_, filename);

        // Create buffer objects here.
        glGenBuffersARB(1, &VertexVBOID);
        glBindBufferARB(GL_ARRAY_BUFFER, VertexVBOID);
        glBufferDataARB(GL_ARRAY_BUFFER, sizeof(VertexAttributes)*mesh_.num_vertices, &mesh_.vertices[0], GL_STATIC_DRAW);

        glGenBuffersARB(1, &IndexVBOID);
        glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER, IndexVBOID);
        glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER, sizeof(uint16_t)*mesh_.num_indices, &mesh_.indices[0], GL_STATIC_DRAW);

        glBindBuffer(GL_ARRAY_BUFFER, VertexVBOID);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, sizeof(VertexAttributes), BUFFER_OFFSET(0)); //The starting point of the VBO, for the vertices
        glEnableClientState(GL_NORMAL_ARRAY);
        glNormalPointer(GL_FLOAT, sizeof(VertexAttributes), BUFFER_OFFSET(12)); //The starting point of normals, 12 bytes away
        glClientActiveTexture(GL_TEXTURE0);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glTexCoordPointer(2, GL_FLOAT, sizeof(VertexAttributes), BUFFER_OFFSET(24)); //The starting point of texcoords, 24 bytes away

        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexVBOID);

        if (startup_done_) updateGL();
    }
}
void MeshWidget::openTexture(const string& filename) {
    size_t dot = filename.find_last_of('.');
    string ext(filename, dot, filename.size()); // 3rd parameter should be length of new string, but is internally clipped to end.

    glActiveTexture(GL_TEXTURE0);
    glClientActiveTexture(GL_TEXTURE0);
    glDeleteTextures(1, &texture_id_);
    glGenTextures(1, &texture_id_);
    glBindTexture(GL_TEXTURE_2D, texture_id_);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

    if (ext == ".dds") {
        if (GLEE_EXT_texture_compression_s3tc) {
            texture_id_ = SOIL_load_OGL_texture(filename.c_str(), SOIL_LOAD_AUTO, texture_id_, SOIL_FLAG_DDS_LOAD_DIRECT);
            // SOIL takes care of calling glTexParams, glTexImage2D, etc.
            yflip_texture_ = true;
        } else {
            //std::cout << "S3TC not supported on this graphics hardware." << std::endl;
            // TODO: Error message in status bar?
        }
    } else {
        QImage tex(filename.c_str());
        tex = QGLWidget::convertToGLFormat(tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tex.width(), tex.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, tex.bits());
        yflip_texture_ = false;
    }

    updateUniforms();
    if (startup_done_) updateGL();
}
void MeshWidget::updateUniforms() {
    GLuint texture_flip_uniform = glGetUniformLocation(shader_prog_id_, "yflip");
    glUniform1f(texture_flip_uniform, float(yflip_texture_ * 1.0f));

    GLuint tex_loc = glGetUniformLocation(shader_prog_id_, "tex");
    glUniform1i(tex_loc, texture_id_);
}
And my shaders (there's still some junk in here because I was experimenting, but nothing that affects the output):
varying vec3 normal, lightDir, eyeVec;
uniform float yflip;

void main()
{
    normal = gl_NormalMatrix * gl_Normal;

    vec3 vVertex = vec3(gl_ModelViewMatrix * gl_Vertex);

    lightDir = vec3(gl_LightSource[0].position.xyz - vVertex);
    eyeVec = -vVertex;

    gl_TexCoord[0].x = gl_MultiTexCoord0.x;
    gl_TexCoord[1].x = gl_MultiTexCoord1.x;

    if (yflip == 1.0) {
        gl_TexCoord[0].y = 1 - gl_MultiTexCoord0.y;
        gl_TexCoord[1].y = 1 - gl_MultiTexCoord1.y;
    } else {
        gl_TexCoord[0].y = gl_MultiTexCoord0.y;
        gl_TexCoord[1].y = gl_MultiTexCoord1.y;
    }

    gl_Position = ftransform();
}
Fragment shader:
varying vec3 normal, lightDir, eyeVec;
uniform sampler2D tex2;

void main (void)
{
    vec4 texel = texture2D(tex2, gl_TexCoord[0].st);

    vec4 final_color =
        (gl_FrontLightModelProduct.sceneColor * gl_FrontMaterial.ambient) +
        (gl_LightSource[0].ambient * gl_FrontMaterial.ambient);

    vec3 N = normalize(normal);
    vec3 L = normalize(lightDir);

    float lambertTerm = dot(N,L);

    if(lambertTerm > 0.0)
    {
        final_color += gl_LightSource[0].diffuse *
                       gl_FrontMaterial.diffuse *
                       lambertTerm;

        vec3 E = normalize(eyeVec);
        vec3 R = reflect(-L, N);
        float specular = pow( max(dot(R, E), 0.0),
                              gl_FrontMaterial.shininess );
        final_color += gl_LightSource[0].specular *
                       gl_FrontMaterial.specular *
                       specular;
    }

    gl_FragColor = final_color * texel;
}
glUniform1i(tex_loc, texture_id_);
The second parameter should specify the ID of a texture unit (hint: glActiveTexture sets the currently active texture unit), not the ID of a particular texture object.
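In other words, a minimal sketch using the names from the question (note that glUniform* affects the currently bound program, so glUseProgram(shader_prog_id_) must have been called first):

// Bind the texture object to texture unit 0 ...
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture_id_);
// ... and point the sampler uniform at unit 0, not at texture_id_.
GLint tex_loc = glGetUniformLocation(shader_prog_id_, "tex");
glUniform1i(tex_loc, 0);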