OpenGL C++, Textures Suddenly Black After Working For Days

OpenGL 3.3
My textures suddenly became black after working for many days.
Pretty much all the posts that had a similar issue were about incorrect or absent use of glTexParameteri, or about incorrect texture loading, but I seem to be doing everything correctly in that regard: the vector containing the data is 1024 bytes (16 pixels x 16 pixels x 4 bytes), so that's good. After the issue arose, I made a test texture just to make sure everything about that was right.
I also saw that many posts' issues were incomplete textures, but here I'm using glTexImage2D and passing the data, so the texture has to be complete. I'm also not creating mipmaps; I disabled them for testing, although they were on and working before this bug.
I'm also calling glGetError quite frequently and there are no errors.
Here is the texture creation code:
unsigned int testTexture;
unsigned long w, h;
std::vector<byte> data;
std::vector<byte> img;
loadFile(data, "./assets/textures/blocks/brick.png");
decodePNG(img, w, h, &data[0], data.size());
glGenTextures(1, &testTexture);
glBindTexture(GL_TEXTURE_2D, testTexture);
glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA8,w,h,0,GL_RGBA, GL_UNSIGNED_BYTE,&img[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
data.clear();
img.clear();
And here is where I set up my uniforms:
glUseProgram(worldShaderProgram);
glUniform1f(glGetUniformLocation(worldShaderProgram, "time"), gameTime);
glUniformMatrix4fv(glGetUniformLocation(worldShaderProgram, "MVP"), 1, GL_FALSE, &TheMatrix[0][0]);
glUniform1i(glGetUniformLocation(worldShaderProgram, "texAtlas"), testTexture);
glUniform1f(glGetUniformLocation(worldShaderProgram, "texMult"), 16.0f / 256.0f);
glUniform4f(glGetUniformLocation(worldShaderProgram, "fogColor"), fogColor.r, fogColor.g, fogColor.b, fogColor.a);
Also, here is the fragment shader:
#version 330
in vec4 tex_color;
in vec2 tex_coord;
layout(location = 0) out vec4 color;
uniform sampler2D texAtlas;
uniform mat4 MVP;
uniform vec4 fogColor;
const float fogStart = 0.999f;
const float fogEnd = 0.9991f;
const float fogMult = 1.0f / (fogEnd - fogStart);
void main() {
if (gl_FragCoord.z >= fogEnd)
discard;
//color = vec4(tex_coord.x,tex_coord.y,0.0f,1.0f) * tex_color; // This Line Does What Its Supposed To
color = texture(texAtlas,tex_coord) * tex_color; // This One Does Not
if (gl_FragCoord.z >= fogStart)
color = mix(color,fogColor,(gl_FragCoord.z - fogStart) * fogMult);
}
If I use this line: color = vec4(tex_coord.x,tex_coord.y,0.0f,1.0f) * tex_color;
instead of this line: color = texture(texAtlas,tex_coord) * tex_color;
to show the coordinate from which it would be getting its color from the texture, the result is what you would expect (currently only testing it with the top faces):
Image Link Cause I Cant Do Images But Please Click
That proves that the vertex shader is working correctly.
(The sampler2D is obtained from a uniform in the fragment shader.)
Main Loop Rendering Code
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBindTexture(GL_TEXTURE_2D, textures.textureId);
glUseProgram(worldShaderProgram);
wm.render();
// wm.render() calls lots of meshes to render themselves
// just wanted to point out that each one of them has its own
// vertex array buffer, vertex buffer, and index buffer
// to render, I bind the vertex array buffer with glBindVertexArray(vertexArrayBuffer);
// then I call glDrawElements();
Also, here is the OpenGL initialization code:
if (!glfwInit()) // Initialize the library
return -1;
window = glfwCreateWindow(wndSize.width, wndSize.height, "Minecraft", NULL, NULL);
if (!window)
{
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window); // Make the window's context current
glfwSetWindowSizeCallback(window,resiseEvent);
glfwSwapInterval(1);
if (glewInit() != GLEW_OK)
return -1;
glClearColor(fogColor.r, fogColor.g, fogColor.b, fogColor.a);
glClearDepth(1.0f);
glEnable(GL_DEPTH_TEST); // Enable depth testing for z-culling
glEnable(GL_CULL_FACE); // Orientation Culling
glDepthFunc(GL_LEQUAL); // Set the type of depth-test (<=)
glShadeModel(GL_SMOOTH); // Enable smooth shading
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); // Nice perspective corrections
glLineWidth(2.0f);

You are setting the texture object on the texture sampler uniform. This is wrong:
glUniform1i(glGetUniformLocation(worldShaderProgram, "texAtlas"), testTexture);
The binding point between the texture object and the texture sampler uniform is the texture unit. When glBindTexture is invoked, the texture object is bound to the specified target of the current texture unit. The texture unit can be chosen with glActiveTexture. The default texture unit is GL_TEXTURE0.
Since your texture is bound to texture unit 0 (GL_TEXTURE0), you have to set the value 0 on the texture sampler uniform:
glUniform1i(glGetUniformLocation(worldShaderProgram, "texAtlas"), 0);
Note that your code worked before by chance: the value of testTexture happened to coincide with the texture unit your texture was bound to. Now that testTexture has a different value, the sampler reads from a unit with no texture bound, causing your code to fail.
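A minimal sketch of the corrected setup, using the question's own names (the key point: the sampler uniform receives the texture unit index, not the texture object name):
glActiveTexture(GL_TEXTURE0);              // select texture unit 0 (the default)
glBindTexture(GL_TEXTURE_2D, testTexture); // bind the texture object to unit 0
glUseProgram(worldShaderProgram);
glUniform1i(glGetUniformLocation(worldShaderProgram, "texAtlas"), 0); // unit index 0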

Related

Problem with Shadow Calculation in ShadowMap Rendering

I'm having a little trouble implementing shadow mapping in the engine I'm writing. I'm following LearnOpenGL's tutorial to do so, and it more or less "works", but there's something I'm doing wrong, as if something in the shadow map were inverted; check the next gifs: gif1, gif2
In those gifs there is a simple scene with a directional light (which uses an orthographic frustum for the shadow calculations, to ease my life) that has to cast shadows. At the right there is a little window showing the "shadow map scene": the scene rendered from the light's point of view with depth values only.
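(For reference, a minimal sketch of how such an orthographic light-space matrix is typically built with GLM; the frustum bounds and the lightPosition name are placeholders:)
glm::mat4 lightProjection = glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f, 0.1f, 100.0f); // bounds are scene-dependent
glm::mat4 lightView = glm::lookAt(lightPosition, glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
glm::mat4 lightSpace = lightProjection * lightView; // what the u_LightSpace uniform is expected to hold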
Now, about the code: it pretty much follows the guidelines of the mentioned tutorial. I have a ModuleRenderer, and I first create the framebuffers with the textures they need:
glGenTextures(1, &depthMapTexture);
glBindTexture(GL_TEXTURE_2D, depthMapTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, App->window->GetWindowWidth(), App->window->GetWindowHeight(), 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
glGenFramebuffers(1, &depthbufferFBO);
glBindFramebuffer(GL_FRAMEBUFFER, depthbufferFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthMapTexture, 0);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, 0);
Then, in the ModuleRenderer's post-update, I do the two render passes and draw the FBOs:
// --- Shadows Buffer (Render 1st Pass) ---
glBindFramebuffer(GL_FRAMEBUFFER, depthbufferFBO);
SendShaderUniforms(shadowsShader->ID, true);
DrawRenderMeshes(true);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// --- Standard Buffer (Render 2nd Pass) ---
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
SendShaderUniforms(defaultShader->ID, false);
DrawRenderMeshes(false);
// --- Draw Lights ---
std::vector<ComponentLight*>::iterator LightIterator = m_LightsVec.begin();
for (; LightIterator != m_LightsVec.end(); ++LightIterator)
(*LightIterator)->Draw();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// -- Draw framebuffer textures ---
DrawFramebuffer(depth_quadVAO, depthMapTexture, true);
DrawFramebuffer(quadVAO, rendertexture, false);
The DrawRenderMeshes() function basically gets the list of meshes to draw and the shader it has to use, and sends all the needed uniforms. It's too big a function to post here, but for a normal mesh it picks a shader called Standard and sends everything that shader needs. For the shadow map, it sends the texture attached to the depth FBO:
glUniform1i(glGetUniformLocation(shader, "u_ShadowMap"), 4);
glActiveTexture(GL_TEXTURE0 + 4);
glBindTexture(GL_TEXTURE_2D, depthMapTexture);
In the standard shader's vertex stage, I just pass a uniform for light space (the light frustum's projection x view matrix) to calculate the fragment position in light space (the following is done in the vertex shader's main):
v_FragPos = vec3(u_Model * vec4(a_Position, 1.0));
v_FragPos_InLightSpace = u_LightSpace * vec4(v_FragPos, 1.0);
v_FragPos_InLightSpace.z = (1.0 - v_FragPos_InLightSpace.z);
gl_Position = u_Proj * u_View * vec4(v_FragPos, 1.0);
And in the fragment shader, I use that value to calculate the fragment's shadowing (the diffuse+specular light values are multiplied by the result of this shadowing function):
float ShadowCalculation()
{
vec3 projCoords = v_FragPos_InLightSpace.xyz / v_FragPos_InLightSpace.w;
projCoords = projCoords * 0.5 + 0.5;
float closeDepth = texture(u_ShadowMap, projCoords.xy).z;
float currDept = projCoords.z;
float shadow = currDept > closeDepth ? 1.0 : 0.0;
return (1.0 - shadow);
}
Again, I'm not sure what can be wrong, but I can guess that something is kind of inverted? Not sure... If anyone can think of something and let me know, I would appreciate it a lot, thank you :)
Note: for the first render pass, in which the whole scene is rendered with depth values only, I use a very simple shader that just puts objects in their position with the common function (in the vertex shader):
gl_Position = u_Proj * u_View * u_Model * vec4(a_Position, 1.0);
And the fragment shader doesn't do anything; it's an empty main(), since that is equivalent to what we want for the shadow pass:
gl_FragDepth = gl_FragCoord.z;

Can't load multiple textures in OpenGL

I'm trying to load multiple textures in OpenGL.
To validate this, I want to load 2 textures and mix them with the following fragment shader:
#version 330 core
out vec4 color;
in vec2 v_TexCoord;
uniform sampler2D u_Texture0;
uniform sampler2D u_Texture1;
void main()
{
color = mix(texture(u_Texture0, v_TexCoord), texture(u_Texture1, v_TexCoord), 0.5);
}
I have abstracted a couple of OpenGL's features into classes like Shader, Texture, UniformXX, etc.
Here's an attempt to load the 2 textures into the sampler units of the fragment shader:
Shader shader;
shader.Attach(GL_VERTEX_SHADER, "res/shaders/vs1.shader");
shader.Attach(GL_FRAGMENT_SHADER, "res/shaders/fs1.shader");
shader.Link();
shader.Bind();
Texture texture0("res/textures/container.jpg", GL_RGB, GL_RGB);
texture0.Bind(0);
Uniform1i textureUnit0Uniform("u_Texture0");
textureUnit0Uniform.SetValues({ 0 });
shader.SetUniform(textureUnit0Uniform);
Texture texture1("res/textures/awesomeface.png", GL_RGBA, GL_RGBA);
texture1.Bind(1);
Uniform1i textureUnit1Uniform("u_Texture1");
textureUnit1Uniform.SetValues({ 1 });
shader.SetUniform(textureUnit1Uniform);
Here's what the Texture implementation looks like:
#include "Texture.h"
#include "Renderer.h"
#include "stb_image/stb_image.h"
Texture::Texture(const std::string& path, unsigned int destinationFormat, unsigned int sourceFormat)
: m_Path(path)
{
stbi_set_flip_vertically_on_load(1);
m_Buffer = stbi_load(path.c_str(), &m_Width, &m_Height, &m_BPP, 0);
GLCALL(glGenTextures(1, &m_RendererID));
GLCALL(glBindTexture(GL_TEXTURE_2D, m_RendererID));
GLCALL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST));
GLCALL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR));
GLCALL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT));
GLCALL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT));
GLCALL(glTexImage2D(GL_TEXTURE_2D, 0, destinationFormat, m_Width, m_Height, 0, sourceFormat, GL_UNSIGNED_BYTE, m_Buffer));
glGenerateMipmap(GL_TEXTURE_2D);
GLCALL(glBindTexture(GL_TEXTURE_2D, 0));
if (m_Buffer)
stbi_image_free(m_Buffer);
}
Texture::~Texture()
{
GLCALL(glDeleteTextures(1, &m_RendererID));
}
void Texture::Bind(unsigned int unit) const
{
GLCALL(glActiveTexture(GL_TEXTURE0 + unit));
GLCALL(glBindTexture(GL_TEXTURE_2D, m_RendererID));
}
void Texture::Unbind() const
{
GLCALL(glBindTexture(GL_TEXTURE_2D, 0));
}
Now, instead of actually getting an even mix of color from both textures, I only get the second texture appearing and blending with the background:
I've pinpointed the problem to the constructor of the Texture implementation: if I comment out the initialization of the second texture so that its constructor is never called, then I can get the first texture to show up.
Can anyone suggest what I'm doing wrong?
Took me a while to spot, but at the point where you call the constructor of the second texture, your active texture unit is still 0, so the constructor happily repoints your texture unit and you are left with two texture units bound to the same texture.
The solution should be simple enough: do not interleave texture creation and texture unit assignment; create the textures first and only then bind them explicitly.
Better yet, look into using direct state access to avoid all this binding.
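For example, a minimal sketch of the same setup with direct state access (requires OpenGL 4.5; width, height, and buffer are placeholders):
GLuint tex;
glCreateTextures(GL_TEXTURE_2D, 1, &tex);                    // create without binding
glTextureParameteri(tex, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTextureParameteri(tex, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTextureStorage2D(tex, 1, GL_RGB8, width, height);          // immutable storage, 1 mip level
glTextureSubImage2D(tex, 0, 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer);
glBindTextureUnit(1, tex);                                   // bind straight to unit 1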
To highlight the problem for future viewers of this question, this is the problematic sequence of calls:
// constructor of texture 1
glGenTextures(1, &container)
glBindTexture(GL_TEXTURE_2D, container) // Texture Unit 0 is now bound to container
// explicit texture0.Bind call
glActiveTexture(GL_TEXTURE0) // noop
glBindTexture(GL_TEXTURE_2D, container) // Texture Unit 0 is now bound to container
// constructor of texture 2
glGenTextures(1, &awesomeface)
glBindTexture(GL_TEXTURE_2D, awesomeface) // Texture Unit 0 is now bound to awesomeface instead of container.
// explicit texture1.Bind call
glActiveTexture(GL_TEXTURE1)
glBindTexture(GL_TEXTURE_2D, awesomeface) // Texture Unit 0 and 1 are now bound to awesomeface.
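And a minimal sketch of the fixed ordering, using the question's own classes:
// construct both textures first (each constructor binds and unbinds on unit 0)
Texture texture0("res/textures/container.jpg", GL_RGB, GL_RGB);
Texture texture1("res/textures/awesomeface.png", GL_RGBA, GL_RGBA);
// only then assign each texture to its own unit
texture0.Bind(0); // unit 0 -> container
texture1.Bind(1); // unit 1 -> awesomeface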

GL_INVALID_OPERATION when attempting to sample cubemap texture

I'm working on shadow casting using this lovely tutorial. The process is: we render the scene to a framebuffer with a cubemap attached to hold the depth values, then we pass this cubemap to a fragment shader, which samples it to get the depth values.
I deviated slightly from the tutorial: instead of using a geometry shader to render the entire cubemap at once, I render the scene six times to the same effect, largely because my current shader system doesn't support geometry shaders, and for now I'm not too concerned about the performance hit.
The depth cubemap is being drawn to just fine; here's a screenshot from gDEBugger:
Everything seems to be in order here.
However, I'm having issues in my fragment shader when I attempt to sample this cubemap. After the call to glDrawArrays, a call to glGetError returns GL_INVALID_OPERATION, and as best I can tell it's coming from here (the offending line has been commented):
struct PointLight
{
vec3 Position;
float ConstantRolloff;
float LinearRolloff;
float QuadraticRolloff;
vec4 Color;
samplerCube DepthMap;
float FarPlane;
};
uniform PointLight PointLights[NUM_POINT_LIGHTS];
[...]
float CalculateShadow(int lindex)
{
// Calculate vector between fragment and light
vec3 fragToLight = FragPos - PointLights[lindex].Position;
// Sample from the depth map (Comment this out and everything works fine!)
float closestDepth = texture(PointLights[lindex].DepthMap, vec3(1.0, 1.0, 1.0)).r;
// Transform to original value
closestDepth *= PointLights[lindex].FarPlane;
// Get current depth
float currDepth = length(fragToLight);
// Test for shadow
float bias = 0.05;
float shadow = currDepth - bias > closestDepth ? 1.0 : 0.0;
return shadow;
}
Commenting out the aforementioned line makes everything work fine, so I'm assuming it's the call to the texture sampler that's causing issues. I saw that this can be attributed to using two textures of different types in the same texture unit, but according to gDEBugger this isn't the case:
Texture 16 is the depth cube map.
In case it's relevant, here's how I'm setting up the FBO: (called only once)
// Generate frame buffer
glGenFramebuffers(1, &depthMapFrameBuffer);
// Generate depth maps
glGenTextures(1, &depthMap);
// Set up textures
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_CUBE_MAP, depthMap);
for (int i = 0; i < 6; ++i)
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT,
ShadowmapSize, ShadowmapSize, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
// Set texture parameters
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
// Attach cubemap to FBO
glBindFramebuffer(GL_FRAMEBUFFER, depthMapFrameBuffer);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthMap, 0);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
ERROR_LOG("PointLight created an incomplete frame buffer!\n");
glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Here's how I'm drawing with it: (called every frame)
// Set up viewport
glViewport(0, 0, ShadowmapSize, ShadowmapSize);
// Bind frame buffer
glBindFramebuffer(GL_FRAMEBUFFER, depthMapFrameBuffer);
// Clear depth buffer
glClear(GL_DEPTH_BUFFER_BIT);
// Render scene
for(int i = 0; i < 6; ++i)
{
sh->SetUniform("ShadowMatrix", lightSpaceTransforms[i]);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, depthMap, 0);
Space()->Get<Renderer>()->RenderScene(sh);
}
// Unbind frame buffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);
And here's how I'm binding it before drawing:
std::stringstream ssD;
ssD << "PointLights[" << i << "].DepthMap";
glActiveTexture(GL_TEXTURE4 + i);
glBindTexture(GL_TEXTURE_CUBE_MAP, pointlights[i]->DepthMap()); // just returns the ID of the light's depth map
shader->SetUniform(ssD.str().c_str(), i + 4); // just a wrapper around glSetUniform1i
Thank you for reading, and please let me know if I can supply more information!
This is an old post, but it may be useful to other people coming from search.
Your problem is here:
glActiveTexture(GL_TEXTURE4 + i);
glBindTexture(GL_TEXTURE_CUBE_MAP, pointlights[i]->DepthMap());
This replacement should fix the problem:
glActiveTexture(GL_TEXTURE4 + i);
glUniform1i(glGetUniformLocation(programId, "cubeMapUniformName"), 4 + i);
glBindTexture(GL_TEXTURE_CUBE_MAP, pointlights[i]->DepthMap());
It sets the texture unit number for the shader sampler; note that the uniform takes the unit index (4 + i), not the GL_TEXTURE4 + i enum value.
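Applied to the question's own binding loop, a hedged sketch (the programId handle is assumed; ssD is the question's uniform-name string):
int unit = 4 + i;                                  // texture unit index
glActiveTexture(GL_TEXTURE0 + unit);               // GL_TEXTURE0 + 4 + i == GL_TEXTURE4 + i
glBindTexture(GL_TEXTURE_CUBE_MAP, pointlights[i]->DepthMap());
glUniform1i(glGetUniformLocation(programId, ssD.str().c_str()), unit); // pass the index, not the enum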

OpenGL Shadow Glitch

The Problem
I have been trying to implement shadows in OpenGL for some time. I have finally gotten it to a semi-working state: the shadow appears, but it covers the scene in strange places [i.e. it is not relative to the light].
To further explain the above gif: as I move the light source further away from the scene (to the left), the shadow stretches further. Why? If anything, it should show more of the scene.
Update - I messed around with the light's position and am now getting this result (confusing):
Depth Map
Here it is:
The Code
Because this is a difficult issue to pinpoint, I will post a large chunk of the code I am using in this application.
The Framebuffer and Depth Texture - The first thing I needed was a framebuffer to record the depth values of all the drawn objects, and then I needed to dump these values into a depth texture (the shadow map):
// Create Framebuffer
FramebufferName = 0;
glGenFramebuffers(1, &FramebufferName);
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
// Create and Load Depth Texture
glGenTextures(1, &depthTexture);
glBindTexture(GL_TEXTURE_2D, depthTexture);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexImage2D(GL_TEXTURE_2D, 0,GL_DEPTH_COMPONENT24, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);
//Attach Texture To Framebuffer
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTexture, 0);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
//Check for errors
if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
Falcon::Debug::error("ShadowBuffer [Framebuffer] could not be initialized.");
Rendering The Scene - First I do the shadow pass, which just runs through some basic shaders and outputs to the framebuffer, and then I do a second, regular pass that actually draws the scene and does the GLSL shadow-map sampling:
//Clear
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//Select Main Shader
normalShader->useShader();
//Bind + Update + Draw
/* Render Shadows */
shadowShader->useShader();
glBindFramebuffer(GL_FRAMEBUFFER, Shadows::framebuffer());
//Viewport
glViewport(0,0,640,480);
//GLM Matrix Definitions
glm::mat4 shadow_matrix_view;
glm::mat4 shadow_matrix_projection;
//View And Projection Calculations
shadow_matrix_view = glm::lookAt(glm::vec3(light.x,light.y,light.z), glm::vec3(0,0,0), glm::vec3(0,1,0));
shadow_matrix_projection = glm::perspective(45.0f, 1.0f, 0.1f, 1000.0f);
//Calculate MVP(s)
glm::mat4 shadow_depth_mvp = shadow_matrix_projection * shadow_matrix_view * glm::mat4(1.0);
glm::mat4 shadow_depth_bias = glm::mat4(0.5,0,0,0,0,0.5,0,0,0,0,0.5,0,0.5,0.5,0.5,1) * shadow_depth_mvp;
//Send Data To The GPU
glUniformMatrix4fv(glGetUniformLocation(shadowShader->getShader(),"depth_matrix"), 1, GL_FALSE, &shadow_depth_mvp[0][0]);
glUniformMatrix4fv(glGetUniformLocation(normalShader->getShader(),"depth_matrix_bias"), 1, GL_FALSE, &shadow_depth_bias[0][0]);
renderScene();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
/* Clear */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
/* Shader */
normalShader->useShader();
/* Shadow-map */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, Shadows::shadowmap());
glUniform1f(glGetUniformLocation(normalShader->getShader(),"shadowMap"),0);
/* Render Scene */
glViewport(0,0,640,480);
renderScene();
Fragment Shader - This is where I calculate the final color to be output and do the depth-texture/shadow-map sampling. It could be where I am going wrong:
//Shadows
uniform sampler2DShadow shadowMap;
in vec4 shadowCoord;
void main()
{
//Lighting Calculations...
//Shadow Sampling:
float visibility = 1.0;
if (texture(shadowMap, shadowCoord.xyz) < shadowCoord.z){
visibility = 0.1;
}
//Final Output
outColor = finalColor * visibility;
}
Edits
<1> AMD Hardware Issue - It was also suggested that this could be a GPU issue, but I find this hard to believe given that it's a Radeon HD 6670. Would it be worth putting in an Nvidia card to test this theory?
<2> Suggested Changes - I made some of the changes suggested in the comments and answers:
Firstly, I changed the light's perspective projection to an orthographic one, which gave me the accuracy I needed in the shadow map, so that now I can see the depth clearly (i.e. it's not all white). In addition, it removes the need for the perspective division, so I am using 3-dimensional coordinates for this test. Below is a screenshot:
Secondly, I changed my texture sampling to this: visibility = texture(shadowMap, shadowCoord.xyz); which now always returns 0, and thus I cannot see the scene, as it is considered ENTIRELY shadowed.
Thirdly and finally, I swapped GL_LEQUAL for GL_LESS as suggested, and no changes occurred.
There is something fundamentally wrong with your shader:
uniform sampler2DShadow shadowMap; // NOTE: Shadow samplers perform comparison !!
...
if (texture(shadowMap, shadowCoord.xyz) < shadowCoord.z)
You have texture compare vs. reference enabled. That means the 3rd texture coordinate is going to be compared by the texture(...) function, and the returned value is going to be the result of the test function (GL_LEQUAL in this case).
In other words, texture(...) will return either 0.0 (fail) or 1.0 (pass) by comparing the looked-up depth at shadowCoord.xy to the value of shadowCoord.z. You are doing this test twice.
Consider using this altered code instead:
float visibility = texture(shadowMap, shadowCoord.xyz);
That is not going to produce quite the results you want, because your comparison function is GL_LEQUAL, but it is a start. Consider changing the comparison function to GL_LESS for an exact functional match.
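For completeness, a minimal sketch of that change, using the question's depthTexture (set where the other texture parameters are specified):
glBindTexture(GL_TEXTURE_2D, depthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LESS); // was GL_LEQUAL
glBindTexture(GL_TEXTURE_2D, 0);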

OpenGL, Shader Model 3.3 Texturing: Black Textures?

I've been banging my head against this for hours now; I'm sure it's something simple, but I just can't get a result. I've had to edit this code down a bit because I've built a little library to encapsulate the OpenGL calls, but the following is an accurate description of the state of affairs.
I'm using the following vertex shader:
#version 330
in vec4 position;
in vec2 uv;
out vec2 varying_uv;
void main(void)
{
gl_Position = position;
varying_uv = uv;
}
And the following fragment shader:
#version 330
in vec2 varying_uv;
uniform sampler2D base_texture;
out vec4 fragment_colour;
void main(void)
{
fragment_colour = texture2D(base_texture, varying_uv);
}
Both shaders compile and the program links without issue.
In my init section, I load a single texture like so:
// Check for errors.
kt::kits::open_gl::Core<QString>::throw_on_error();
// Load an image.
QImage image("G:/test_image.png");
image = image.convertToFormat(QImage::Format_RGB888);
if(!image.isNull())
{
// Load up a single texture.
glGenTextures(1, &Texture);
glBindTexture(GL_TEXTURE_2D, Texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, image.width(), image.height(), 0, GL_RGB, GL_UNSIGNED_BYTE, image.constBits());
glBindTexture(GL_TEXTURE_2D, 0);
}
// Check for errors.
kt::kits::open_gl::Core<QString>::throw_on_error();
You'll observe that I'm using Qt to load the texture. The calls to ::throw_on_error() check for errors in OpenGL (by calling Error()), and throw an exception if one occurs. No OpenGL errors occur in this code, and the image loaded using Qt is valid.
Drawing is performed as follows:
// Clear previous.
glClear(GL_COLOR_BUFFER_BIT |
GL_DEPTH_BUFFER_BIT |
GL_STENCIL_BUFFER_BIT);
// Use our program.
glUseProgram(GLProgram);
// Bind the vertex array.
glBindVertexArray(GLVertexArray);
/* ------------------ Setting active texture here ------------------- */
// Tell the shader which textures are which.
kt::kits::open_gl::gl_int tAddr = glGetUniformLocation(GLProgram, "base_texture");
glUniform1i(tAddr, 0);
// Activate the texture Texture(0) as texture 0.
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, Texture);
/* ------------------------------------------------------------------ */
// Draw vertex array as triangles.
glDrawArrays(GL_TRIANGLES, 0, 4);
glBindVertexArray(0);
glUseProgram(0);
// Detect errors.
kt::kits::open_gl::Core<QString>::throw_on_error();
Similarly, no OpenGL errors occur, and a triangle is drawn to screen. However, it looks like this:
It occurred to me that the problem may be related to my texture coordinates. So I rendered the following image using s as the 'red' component and t as the 'green' component:
The texture coordinates appear correct, yet I'm still receiving the black triangle of doom. What am I doing wrong?
I think it could be due to an incomplete texture object.
Try initializing the texture MIN and MAG filters (the default GL_TEXTURE_MIN_FILTER uses mipmaps, so without mipmaps the texture is incomplete and samples as black):
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
Moreover, I would suggest checking the size of the texture. If it is not a power of 2, then you have to set the wrapping mode to CLAMP_TO_EDGE:
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
Black textures are often due to this issue; it's a very common problem.
Ciao
In your fragment shader you're writing to a self-defined target:
fragment_colour = texture2D(base_texture, varying_uv);
If that output is not gl_FragColor or gl_FragData[…], did you properly set the designated fragment data location?
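If not, a minimal sketch using the question's names (the binding must be set before linking the program):
glBindFragDataLocation(GLProgram, 0, "fragment_colour"); // route the output to color attachment 0
glLinkProgram(GLProgram);                                // takes effect at link time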