Deferred Rendering not Displaying the GBuffer Textures - c++

I'm trying to implement deferred rendering in an engine I'm developing as a personal learning project, and I can't figure out what I'm doing wrong when it comes to rendering all the textures in the GBuffer to check whether the implementation is okay.
The thing is that I currently have a framebuffer with 3 color attachments for the different textures of the GBuffer (color, normal and position), which I initialize as follows:
glCreateFramebuffers(1, &id);
glBindFramebuffer(GL_FRAMEBUFFER, id);
std::vector<uint> textures;
textures.resize(3);
glCreateTextures(GL_TEXTURE_2D, 3, textures.data());
for(size_t i = 0; i < 3; ++i)
{
glBindTexture(GL_TEXTURE_2D, textures[i]);
if(i == 0) // For Color Buffer
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
else
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, textures[i], 0);
}
GLenum color_buffers[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
glDrawBuffers((GLsizei)textures.size(), color_buffers);
uint depth_texture;
glCreateTextures(GL_TEXTURE_2D, 1, &depth_texture);
glBindTexture(GL_TEXTURE_2D, depth_texture);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH24_STENCIL8, width, height);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, depth_texture, 0);
bool fbo_status = glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE;
ASSERT(fbo_status, "Framebuffer Incomplete!");
glBindFramebuffer(GL_FRAMEBUFFER, 0);
This is not reporting any errors and it seems to work, since the framebuffer of the forward renderer renders properly. Then, when rendering, I run the following code after binding the framebuffer and clearing the color and depth buffers:
camera_buffer->Bind();
camera_buffer->SetData("ViewProjection", glm::value_ptr(viewproj_mat));
camera_buffer->SetData("CamPosition", glm::value_ptr(glm::vec4(view_position, 0.0f)));
camera_buffer->Unbind();
for(Entity& entity : scene_entities)
{
shader->Bind();
Texture* texture = entity.GetTexture();
BindTexture(0, texture);
shader->SetUniformMat4("u_Model", entity.transform);
shader->SetUniformInt("u_Albedo", 0);
shader->SetUniformVec4("u_Material.AlbedoColor", entity.AlbedoColor);
shader->SetUniformFloat("u_Material.Smoothness", entity.Smoothness);
glBindVertexArray(entity.VertexArray);
glDrawElements(GL_TRIANGLES, entity.VertexArray.index_buffer.count, GL_UNSIGNED_INT, nullptr);
// Shader, VArray and Textures Unbindings
}
With this code I manage to render the 3 created textures using the ImGui::Image function, by switching the texture index between 0, 1 and 2, like so:
ImGui::Image((ImTextureID)(fbo->textures[0]), viewport_size, ImVec2(0, 1), ImVec2(1, 0));
Now, the color texture (at index 0) works perfectly, as the next image shows:
But when rendering the normal and position textures (indices 1 and 2), I get no result:
Does anybody see what I'm doing wrong? I've been at this for hours and hours and I cannot see it. I ran this through RenderDoc and couldn't see anything wrong; the textures displayed in RenderDoc are the same as in the engine.
The vertex shader I use when rendering the entities is the following:
layout(location = 0) in vec3 a_Position;
layout(location = 1) in vec2 a_TexCoord;
layout(location = 2) in vec3 a_Normal;
out IBlock
{
vec2 TexCoord;
vec3 FragPos;
vec3 Normal;
} v_VertexData;
layout(std140, binding = 0) uniform ub_CameraData
{
mat4 ViewProjection;
vec3 CamPosition;
};
uniform mat4 u_ViewProjection = mat4(1.0);
uniform mat4 u_Model = mat4(1.0);
void main()
{
vec4 world_pos = u_Model * vec4(a_Position, 1.0);
v_VertexData.TexCoord = a_TexCoord;
v_VertexData.FragPos = world_pos.xyz;
v_VertexData.Normal = transpose(inverse(mat3(u_Model))) * a_Normal;
gl_Position = ViewProjection * u_Model * vec4(a_Position, 1.0);
}
And the fragment shader is the following; they are both pretty simple:
layout(location = 0) out vec4 gBuff_Color;
layout(location = 1) out vec3 gBuff_Normal;
layout(location = 2) out vec3 gBuff_Position;
in IBlock
{
vec2 TexCoord;
vec3 FragPos;
vec3 Normal;
} v_VertexData;
struct Material
{
float Smoothness;
vec4 AlbedoColor;
};
uniform Material u_Material = Material(1.0, vec4(1.0));
uniform sampler2D u_Albedo, u_Normal;
void main()
{
gBuff_Color = texture(u_Albedo, v_VertexData.TexCoord) * u_Material.AlbedoColor;
gBuff_Normal = normalize(v_VertexData.Normal);
gBuff_Position = v_VertexData.FragPos;
}

It is not clear from the question what exactly might be happening here, as lots of GL state - both at the time of rendering to the G-buffer, and at the time the G-buffer texture is rendered for visualization - is simply unknown. However, from the images given in the question, one cannot conclude that the actual color output for attachments 1 and 2 is not working.
One issue which comes to mind is alpha blending. The color values processed by the per-fragment operations after the fragment shader always work with RGBA values - although the value of the A channel only matters if you enable blending and use a blend function which somehow depends on the source alpha.
If you declare a custom fragment shader output as float, vec2 or vec3, the remaining components stay undefined (undefined value, not undefined behavior). This is not a problem unless some other operation you do depends on those values.
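If you want the stored alpha to be a defined value, one option (a sketch, not the poster's code, reusing the interface block from the question's fragment shader) is to declare the outputs as vec4 and write the alpha explicitly:

```glsl
// Sketch: declare the G-buffer outputs as vec4 and write alpha explicitly,
// so the stored alpha channel holds a defined value instead of garbage.
layout(location = 1) out vec4 gBuff_Normal;
layout(location = 2) out vec4 gBuff_Position;

void main()
{
    gBuff_Normal   = vec4(normalize(v_VertexData.Normal), 1.0);
    gBuff_Position = vec4(v_VertexData.FragPos, 1.0);
}
```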
What we also have here is a GL_RGBA16F output format (which is the right choice, because none of the 3-component RGB formats are required to be color-renderable by the spec).
What might happen here is either:
1. Alpha blending is already turned on while rendering into the G-buffer. The fragment shader's alpha output happens to be zero, so the fragments appear 100% transparent and the contents of the texture are not changed.
2. Alpha blending is not used while rendering into the G-buffer, so the correct contents end up in the texture; the alpha channel just happens to end up all zeros. The texture might then be visualized with alpha blending enabled, ending up as a 100% transparent view.
If it is the first option, turn off blending when rendering into the G-buffer. It would not work with deferred shading anyway. You might still run into the second option then.
If it is the second option, there is no issue at all - the lighting passes which follow will read the data they need (and ultimately, you will want to put useful information into the alpha channel so as not to waste it, and to be able to reduce the number of attachments). It is just your visualization (which I assume is for debug purposes only) that is wrong. You can try to fix the visualization.
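One way to fix the debug visualization without touching the shaders (a sketch, assuming a GL 4.5 context with DSA, and that fbo->textures[i] is the texture name handed to ImGui::Image) is to force the sampled alpha to 1.0 via a texture swizzle:

```cpp
// Debug-only: make the alpha channel always read as 1.0 when this G-buffer
// attachment is sampled, so an all-zero alpha cannot blend the preview away.
GLint swizzle[4] = { GL_RED, GL_GREEN, GL_BLUE, GL_ONE };
glTextureParameteriv(fbo->textures[i], GL_TEXTURE_SWIZZLE_RGBA, swizzle);
```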
As a side note: storing the world-space position in the G-buffer is a huge waste of bandwidth. All you need to reconstruct the world-space position is the depth value and the inverses of your view and projection matrices. Also, storing world-space positions in GL_RGBA16F will very easily run into precision issues if you move your camera away from the world-space origin.
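For reference, a sketch of that reconstruction in a later lighting pass (u_Depth and u_InvViewProjection are assumed names, not part of the question's code):

```glsl
// Sketch: reconstruct the world-space position of a fragment from the
// depth buffer, instead of storing positions in the G-buffer.
uniform sampler2D u_Depth;         // depth attachment (assumed name)
uniform mat4 u_InvViewProjection;  // inverse(projection * view) (assumed name)

vec3 ReconstructWorldPos(vec2 uv)
{
    float depth = texture(u_Depth, uv).r;         // window depth in [0, 1]
    vec4 ndc = vec4(uv, depth, 1.0) * 2.0 - 1.0;  // back to NDC; w stays 1.0
    vec4 world = u_InvViewProjection * ndc;
    return world.xyz / world.w;                   // perspective divide
}
```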

Related

Texture gets over written when using multiple textures in GLSL shader

I am working on sending multiple textures to a single shader and am having a weird issue where both samplers in the shader seem to get the same texture data. I know there are a lot of other multiple-texture questions and answers out there (here are a few I've read multiple times already: 1, 2, 3), but some bug is eluding me and I'm starting to lose my marbles. I am fairly confident I have everything set up correctly, but obviously there is still some issue.
So, currently I have Shape, Material, Texture, and Shader classes. My Shape class is the parent that performs the actual draw. It has a Material member, which has a Shader and an array of Textures. The Shape class draw looks like this:
void Shape::Draw(GLenum mode, glm::mat4& model, glm::mat4& view, glm::mat4& proj)
{
m_Material.Enable();
m_Material.UpdateTransform(model, view, proj);
glBindVertexArray(m_VAO);
glDrawElements(mode, m_NumVerts, GL_UNSIGNED_INT, 0);
m_Material.Disable();
}
Here is my whole material class:
#include "pch.h"
#include "Material.h"
Material::Material() :
m_LightService(LightService::GetInstance())
{
OGLR_CORE_INFO("CREATING MATERIAL");
}
void Material::SetShader(std::string fileName)
{
m_Shader.SetShaderFileName(fileName);
}
void Material::Enable() {
m_Shader.Bind();
for (const auto text : m_Textures) {
text->Enable();
}
UploadUniforms();
}
void Material::Disable() {
m_Shader.Unbind();
for (const auto text : m_Textures) {
text->Disable();
}
}
void Material::AddTexture(std::string fileName, std::string typeName) {
m_Textures.push_back(std::make_shared<Texture>(fileName, m_Shader.ShaderId(), typeName, m_Textures.size()));
}
void Material::UpdateTransform(glm::mat4& model, glm::mat4& view, glm::mat4& proj) {
m_Shader.UploadUniformMat4("u_Projection", proj);
m_Shader.UploadUniformMat4("u_View", view);
m_Shader.UploadUniformMat4("u_Model", model);
}
void Material::UploadUniforms() {
if (m_Shader.isLoaded()) {
auto ambient = m_LightService->GetAmbientLight();
m_Shader.UploadUniformFloat3("uAmbientLight", ambient.strength * ambient.color);
}
}
void Material::SetMaterialData(std::shared_ptr<MaterialData> matData) {
AddTexture(matData->ambient_texname, "t_Ambient"); // Wall
AddTexture(matData->diffuse_texname, "t_Diffuse"); // Farm
}
As you can see, when the material receives the material data from the .mtl file of the .obj we are rendering (in the Material::SetMaterialData function), we add two new Texture objects to the list of textures, passing in the filename to be loaded and the string identifier of the GLSL sampler uniform.
When the material is enabled, we are enabling the shader and each of the texture objects.
Here is the wip of my Texture class.
#include "pch.h"
#include "Texture.h"
#include <stb_image.h>
Texture::Texture(std::string fileName, uint32_t programId, std::string uniformId, uint16_t unitId)
{
m_FileName = ASSET_FOLDER + fileName;
unsigned char* texData = stbi_load(m_FileName.c_str(), &m_Width, &m_Height, &m_NrChannels, 0);
m_ProgramId = programId;
glUniform1i(glGetUniformLocation(programId, uniformId.c_str()), unitId);
glGenTextures(1, &m_TextureId);
m_TextureUnit = GL_TEXTURE0 + unitId;
glActiveTexture(m_TextureUnit);
glBindTexture(GL_TEXTURE_2D, m_TextureId);
if (texData)
{
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, m_Width, m_Height, 0, GL_RGB, GL_UNSIGNED_BYTE, texData);
glGenerateMipmap(GL_TEXTURE_2D);
}
else
{
OGLR_CORE_ERROR("Failed to load texture");
throw std::runtime_error("Failed to load texture: "+ m_FileName);
}
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
stbi_image_free(texData);
Disable();
}
void Texture::Enable()
{
glActiveTexture(m_TextureUnit); // activate the texture unit first before binding texture
glBindTexture(GL_TEXTURE_2D, m_TextureId);
}
void Texture::Disable()
{
glBindTexture(GL_TEXTURE_2D, 0);
}
So, the first thing I do is grab the location of the sampler uniform from the shader and bind that sampler to the texture unit I'm looking for. We then generate the texture, activate the same unit and bind my generated texture to it. I'm guessing that it is somewhere in here that I have blundered, but I can't seem to figure out how.
Here are my shaders as they currently stand.
// vertex
#version 330 core
layout (location = 0) in vec3 a_Position;
layout (location = 1) in vec3 aNormal;
layout (location = 2) in vec2 aTexCoord;
out vec3 outNormal;
out vec2 TexCoord;
uniform mat4 u_Projection;
uniform mat4 u_View;
uniform mat4 u_Model;
void main() {
vec4 worldPosition = u_Model * vec4(a_Position,1.0);
gl_Position = u_Projection * u_View * worldPosition;
outNormal = aNormal;
TexCoord = aTexCoord;
}
// fragment
#version 330 core
out vec4 color;
in vec3 outNormal;
in vec2 TexCoord;
uniform sampler2D t_Ambient;
uniform sampler2D t_Diffuse;
void main() {
if (TexCoord.x > 0.50)
{
//color = vec4(TexCoord.x, TexCoord.y, 0.0, 1.0);
color = texture(t_Diffuse, TexCoord);
}
else
{
color = texture(t_Ambient, TexCoord);
}
}
I am expecting each half of my triangle to have a different texture, but for some reason both samplers seem to get the same texture. If I use the color in the frag shader instead of the texture, I get half texture and half color, so... that at least works...
The other thing I noticed that I thought was weird was that the texture that gets rendered always seems to be the first one I add. If I flip the order of the AddTexture calls in Material::SetMaterialData, the other texture appears. Maybe someone can explain to me why that would be obvious, but I would have expected that if I had somehow goofed the binding of my textures, the second one would overwrite the first, but hey ¯\_(ツ)_/¯ I'm ready to be educated on that one.
Edit
I apologize but apparently it was not clear that the shader is being properly bound.
At the beginning of the Shape::Draw function we are calling m_Material.Enable();
The beginning of which calls m_Shader.Bind(); which in turn calls glUseProgram(m_ProgramId);
This occurs before any of the texture creation flow so the shader is properly bound before we are setting the uniforms.
Apologies for any confusion.
glUniform1i sets a uniform only on the currently bound program:
glUniform operates on the program object that was made part of current state by calling glUseProgram.
It seems you don't call glUseProgram before glUniform1i(glGetUniformLocation(programId, uniformId.c_str()), unitId); (I can't say for sure without the caller code of SetMaterialData), so the uniform is not actually bound to unitId for that shader.
So try this:
glUseProgram(programId);
glUniform1i(glGetUniformLocation(programId, uniformId.c_str()), unitId);
glGenTextures(1, &m_TextureId);
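Alternatively, if a GL 4.1+ context is available, glProgramUniform1i sets the uniform on a specific program without having to bind it first (just a sketch of the alternative, using the question's variable names):

```cpp
// GL 4.1+: sets the uniform on programId directly, regardless of which
// program is currently bound with glUseProgram.
glProgramUniform1i(programId,
                   glGetUniformLocation(programId, uniformId.c_str()),
                   unitId);
```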

passing a float array as a 3D Texture to GLSL fragment shader

I'm trying to implement ray-casting-based volume rendering, and therefore I need to pass a float array to the fragment shader as a texture (sampler3D).
I've got a volume data structure containing all the voxels. Each voxel contains a density value, so for processing I stored the values into a float array.
//initialize glew, initialize glfw, create window, etc.
float* density;
density = new float[volume->size()];
for (int i = 0; i < volume->size(); i++){
density[i] = volume->voxel(i).getValue();
}
Then I tried creating and binding the textures.
glGenTextures(1, &textureHandle);
glBindTexture(GL_TEXTURE_3D, textureHandle);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_REPEAT);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE, volume->width(),
volume->height(), volume->depth(), 0, GL_LUMINANCE, GL_FLOAT, density);
In my render loop I try to load the Texture to the uniform Sampler3D.
glClearColor(0.4f, 0.2f, 0.3f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glActiveTexture(GL_TEXTURE0);
GLint gSampler = glGetUniformLocation(shader->shaderProgram, "volume");
glUniform1i(gSampler, 0);
cube->draw();
So the basic idea is to calculate the current position and direction for ray casting in the Vertex Shader.
in vec3 position;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform vec4 cameraPos;
out vec3 pos;
out vec3 dir;
void main(){
gl_Position = projection * view * model * vec4(position, 1.0);
pos = position;
dir = pos - (inverse(model) * cameraPos).xyz;
}
That seems to work well; so far so good. The fragment shader looks like this: I take some samples along the ray, and the one with the largest density value is used as the color for red, green and blue.
#version 330 core
in vec3 pos;
in vec3 dir;
uniform sampler3D volume;
out vec4 color;
const float stepSize = 0.008;
const float iterations = 1000;
void main(){
vec3 rayDir = normalize(dir);
vec3 rayPos = pos;
float src;
float dst = 0;
float density = 0;
for(int i = 0; i < iterations; i++){
src = texture(volume, rayPos).r;
if(src > density){
density = src;
}
rayPos += rayDir * stepSize;
//check whether rays are within bounds. if not -> break.
}
color = vec4(density, density, density, 1.0f);
}
Now I've tried inserting a small debug check:
if(src != 0){
rayPos = vec3(1.0f);
break;
}
But src seems to be 0 at every iteration for every pixel, which leads me to the conclusion that the sampler isn't set up correctly. Debugging the C++ code, I get the correct values in the density array right before I pass it to the shader, so I guess there must be some OpenGL call missing. Thanks in advance!
glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE, volume->width(), volume->height(), volume->depth(), 0, GL_LUMINANCE, GL_FLOAT, density);
Unless this density data is in the range [0, 1], this is almost certainly not doing what you intend.
GL_LUMINANCE, when used as an internal format (the third parameter to glTexImage3D), means that each pixel in OpenGL's texture data will contain a single normalized integer value. So if you want a floating-point value, you're kinda out of luck.
The proper way to do this is to explicitly declare the type and pixel size of the data. Luminance was removed from the core OpenGL profile back in 3.1, so the way to do that today is to use GL_R32F as your internal format. That declares that each pixel contains one value, and that value is a 32-bit float.
If you really need to broadcast the value across the RGB channels, you can use texture swizzling to accomplish that. You can set a swizzle mask to broadcast the red component to any other channel you like.
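Put together, the upload could look like this (a sketch assuming a GL 3.3+ context; volume and density are the question's variables):

```cpp
// Upload the density volume as a single-channel 32-bit float texture...
glTexImage3D(GL_TEXTURE_3D, 0, GL_R32F,
             volume->width(), volume->height(), volume->depth(),
             0, GL_RED, GL_FLOAT, density);

// ...and broadcast the red channel into green and blue via swizzling,
// so reading .rgb in the shader yields the same value three times.
GLint swizzle[4] = { GL_RED, GL_RED, GL_RED, GL_ONE };
glTexParameteriv(GL_TEXTURE_3D, GL_TEXTURE_SWIZZLE_RGBA, swizzle);
```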
glActiveTexture(GL_TEXTURE0);
GLint gSampler = glGetUniformLocation(shader->shaderProgram, "volume");
glUniform1i(gSampler, 0);
I've heard that binding the texture is also a good idea. You know, if you actually want to read from it ;)
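That is, something along these lines in the render loop (a sketch using the question's names):

```cpp
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_3D, textureHandle);  // bind the volume before drawing
GLint gSampler = glGetUniformLocation(shader->shaderProgram, "volume");
glUniform1i(gSampler, 0);                     // sampler reads from unit 0
cube->draw();
```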

Cube mapping does not work correctly using OpenGL/GLSL

I have a strange behaviour with the cube mapping technique: all my pixel shaders return black, so I end up with a black screen.
Here's the situation:
I have a scene only composed by a simple cube mesh (the skybox) and a track ball camera.
Now let's examine the resources (the appearance of the 6 separate textures):
Here are the image details:
So as you can see these textures need to be loaded in GL_RGB format.
Now, for the initialization part, let's take a look to the client C++ texture loading code (I use the 'DevIL' library to load my images):
glGenTextures(1, &this->m_Handle);
glBindTexture(this->m_Target, this->m_Handle);
{
glTexParameteri(this->m_Target, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(this->m_Target, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(this->m_Target, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(this->m_Target, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(this->m_Target, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
{
for (uint32_t idx = 0; idx < this->m_SourceFileList.size(); idx++)
{
ilLoadImage((const wchar_t*)this->m_SourceFileList[idx].c_str()); //IMAGE LOADING
{
uint32_t width = ilGetInteger(IL_IMAGE_WIDTH);
uint32_t height = ilGetInteger(IL_IMAGE_HEIGHT);
uint32_t bpp = ilGetInteger(IL_IMAGE_BPP);
{
char *pPixelData = (char*)ilGetData();
glTexImage2D(this->m_TargetList[idx], 0, GL_RGB8,
width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pPixelData);
}
}
}
}
}
glBindTexture(this->m_Target, 0);
Concerning the main loop, here's the information I send to the shader program (texture and matrix data):
//TEXTURE DATA
scene::type::MaterialPtr pSkyBoxMaterial = pBatch->GetMaterial();
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_CUBE_MAP,
pSkyBoxMaterial->GetDiffuseTexture()->GetHandle());
this->SetUniform("CubeMapSampler", 1);
//MATRIX DATA
this->SetUniform("ModelViewProjMatrix", pBatch->GetModelViewProjMatrix());
As you can see, the cube map texture is bound to texture unit 1.
Now here's the vertex shader:
#version 400
/*
** Vertex attributes.
*/
layout (location = 0) in vec3 VertexPosition;
/*
** Uniform matrix buffer.
*/
uniform mat4 ModelViewProjMatrix;
/*
** Outputs.
*/
out vec3 TexCoords;
/*
** Vertex shader entry point.
*/
void main(void)
{
TexCoords = VertexPosition;
gl_Position = ModelViewProjMatrix * vec4(VertexPosition, 1.0f);
}
And finally the fragment shader:
#version 400
/*
** Output color value.
*/
layout (location = 0) out vec4 FragColor;
/*
** Vertex inputs.
*/
in vec3 TexCoords;
/*
** Texture uniform.
*/
uniform samplerCube CubeMapSampler;
/*
** Fragment shader entry point.
*/
void main(void)
{
vec4 finalColor = texture(CubeMapSampler, TexCoords);
FragColor = finalColor;
}
The program compiles and runs, showing the following result:
I want to point out that I use the NVIDIA Nsight debugger, and first I want to show you that the cube map is correctly loaded on the GPU:
As you can see, as written in the pieces of code above, my texture is an RGB texture bound to texture unit 1, and its type is GL_TEXTURE_CUBE_MAP. So up to this point all seems to be correct!
And if I replace in the fragment shader the line:
vec4 finalColor = texture(CubeMapSampler, TexCoords);
By the line:
vec4 finalColor = vec4(TexCoords, 1.0f);
I have the following result (I render directly the vertex coordinates in model space as color) without wireframe:
And the same result with wireframe:
Plus, I want to point out that the line:
std::cout << glGetError() << std::endl;
always prints '0', so I have no errors!
So I think these last two pictures show that the matrix information is correct, and the vertex coordinates are correct too (moreover, I use a trackball camera, and when I move around the scene I can recognize the cube's shape). So for me, all the information I receive in my shader program is correct except for the cube sampler; I think the problem comes from this sampler. However, as you can see above, the cube map seems to be loaded correctly.
I am really lost here. I don't understand why all the pixel shaders return the color #000000 (I also tried using an RGBA format, but the result is the same).

GLSL 150 GL 3.2 black textures

I can't get my textures to work, all the screen is black.
Here is my code for loading the images, I use lodepng:
std::vector<unsigned char> image;
unsigned int error = lodepng::decode(image, w, h, filename);
GLuint texture_id;
glGenTextures(1, &texture_id);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture_id);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, 4, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, &image[0]);
glBindTexture(GL_TEXTURE_2D, 0);
For the rendering I do this:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture_id_from_above); //texture_id were checked, seemed fine
glUniform1i(shader_sampler_loc, GL_TEXTURE0);
and my frag shader(trimmed version) is basically doing this:
uniform sampler2D sampler;
void main(void) {
gl_FragColor = texture2D(sampler, uv_coord);
}
The UV coordinates are fine, the vector from lodepng contains many elements, and no error is returned. To further pin down the problem I tried this:
gl_FragColor = texture2D(sampler, uv_coord)*0.5 + vec4(1, 1, 1, 1)*0.5f
to see whether the whole assignment is somehow skipped or the texture is in fact black. As a result I still only get a black window. But by removing
glActiveTexture(GL_TEXTURE0); //x2, and
glUniform1i(sampler_loc, GL_TEXTURE0);
all my objects appear gray. I have no clue what is wrong.
BTW: it was working before moving to OpenGL 3.2 (I had 2.1 before), and all image dimensions are powers of two. I use CORE_PROFILE && FORWARD_COMPAT.
Vertex shader:
#version 150
//VBO vertex attributes
attribute vec3 pos;
attribute vec2 tex;
attribute vec3 normal;
varying vec2 uv_coord;
uniform mat4 mvp_mat;
void main(void) {
gl_Position = mvp_mat * vec4(pos, 1);
uv_coord = tex;
}
glUniform1i(shader_sampler_loc, GL_TEXTURE0);
. . . should be
glUniform1i(shader_sampler_loc, 0);
etc. The sampler uniform takes a texture unit index (0, 1, 2, ...), not the GL_TEXTURE0 enum.
So I kind of solved it: using OPENGL_COMPAT_PROFILE it works. Though I would really like to go full 3.2 and find out which parts are deprecated...
EDIT:
In my case, I finally found the dumb error, I was using
glTexImage2D(GL_TEXTURE_2D, 0, 4, ... //instead of
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, ...
So I guess with the old GL I was lucky, and with 3.2 the enums changed? (Passing the number of components instead of a sized-format enum is a legacy convention that the core profile no longer accepts.)
As the comments suggest, try changing the following:
Vertex shader:
out instead of varying
Add layout(location = #) to your attributes and change attribute to in.
Make sure the location numbers match the code.
Fragment shader (assuming, since it's not complete):
in instead of varying
Add layout(location = #) to your sampler uniform
For the code:
Change
glUniform1i(sampler_loc, GL_TEXTURE0)
to
glUniform1i(sampler_loc, 0)

Alpha channel value always returning 1.0 after rendering-to-texture in OpenGL

This problem is driving me crazy since the code was working perfectly before. I have a fragment shader which combines two textures based on the value set in the alpha channel. The output is rendered to a third texture using an FBO.
Since I need to perform a post-processing step on the combined texture, I check the value of the alpha channel to determine whether that texel will need post-processing or not (i.e., I'm using the alpha channel value as a mask). The problem is, the post-processing shader is reading a value of 1.0 for all the texels in the input texture!
Here is the fragment shader that combines the two textures:
uniform samplerRect tex1;
uniform samplerRect tex2;
in vec2 vTexCoord;
out vec4 fColor;
void main(void) {
vec4 color1, color2;
color1 = texture(tex1, vTexCoord.st);
color2 = texture(tex2, vTexCoord.st);
if (color1.a == 1.0) {
fColor = color2;
} else if (color2.a == 1.0) {
fColor = color1;
} else {
fColor = (color1 + color2) / 2.0;
}
}
The texture object that I attach to the FBO is set up as follows:
glGenTextures(1, &glBufferTex);
glBindTexture(GL_TEXTURE_RECTANGLE, glBufferTex);
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
Code that attaches the texture to the FBO is:
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_RECTANGLE, glBufferTex, 0);
I even added a call to glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE) before attaching the FBO! What could possibly be going wrong that is making the next stage fragment shader read 1.0 for all texels?!
NOTE: I did check that not all the values of the alpha channel for texels in the two textures that I combine are 1.0. Most of them actually are not.
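One way to narrow this down (a debug sketch, reusing the question's glBufferTex and assuming width and height are in scope) would be to read the rendered texture back and inspect the alpha bytes directly:

```cpp
// Read the FBO color attachment back to the CPU and check what alpha
// values the combining pass actually wrote.
std::vector<unsigned char> pixels(width * height * 4);
glBindTexture(GL_TEXTURE_RECTANGLE, glBufferTex);
glGetTexImage(GL_TEXTURE_RECTANGLE, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
// Every 4th byte starting at index 3 is an alpha value:
for (size_t i = 3; i < pixels.size(); i += 4)
    if (pixels[i] != 255) { /* found a texel whose alpha is not 1.0 */ }
```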