I can't get my textures to work; the whole screen is black.
Here is my code for loading the images (I use lodepng):
std::vector<unsigned char> image;
unsigned int error = lodepng::decode(image, w, h, filename);
GLuint texture_id;
glGenTextures(1, &texture_id);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture_id);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, 4, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, &image[0]);
glBindTexture(GL_TEXTURE_2D, 0);
For the rendering I do this:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture_id_from_above); //texture_id were checked, seemed fine
glUniform1i(shader_sampler_loc, GL_TEXTURE0);
and my fragment shader (trimmed version) is basically doing this:
uniform sampler2D sampler;
void main(void) {
gl_FragColor = texture2D(sampler, uv_coord);
}
The UV coordinates are fine, the vector from lodepng contains many elements, and no error is returned. To further pin down the problem I tried this:
gl_FragColor = texture2D(sampler, uv_coord) * 0.5 + vec4(1, 1, 1, 1) * 0.5;
to see whether the whole assignment is somehow skipped or the texture is in fact black. As a result I still only get a black window. But by removing
glActiveTexture(GL_TEXTURE0); //x2, and
glUniform1i(sampler_loc, GL_TEXTURE0);
all my objects appear gray. I have no clue what is wrong.
BTW: it was working before moving to OpenGL 3.2 (I had 2.1 before), and all images are power-of-two sized. I use CORE_PROFILE && FORWARD_COMPAT.
Vertex shader:
#version 150
//VBO vertex attributes
attribute vec3 pos;
attribute vec2 tex;
attribute vec3 normal;
varying vec2 uv_coord;
uniform mat4 mvp_mat;
void main(void) {
gl_Position = mvp_mat * vec4(pos, 1);
uv_coord = tex;
}
glUniform1i(shader_sampler_loc, GL_TEXTURE0);
should be
glUniform1i(shader_sampler_loc, 0);
etc.
So I kind of solved it: with OPENGL_COMPAT_PROFILE it works. Though I would really like to go full 3.2 and find out which parts are deprecated...
EDIT:
In my case, I finally found the dumb error: I was using
glTexImage2D(GL_TEXTURE_2D, 0, 4, ... // instead of
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, ...
So I guess with the old GL I was lucky, and with 3.2 the set of accepted enums changed?
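For reference, a minimal sketch of what the corrected load-and-upload path looks like (the error handling is illustrative, not from my original code):
std::vector<unsigned char> image;
unsigned int w = 0, h = 0;
unsigned int error = lodepng::decode(image, w, h, filename); // decodes to 8-bit RGBA
if (error) { /* log lodepng_error_text(error) */ }
GLuint texture_id = 0;
glGenTextures(1, &texture_id);
glBindTexture(GL_TEXTURE_2D, texture_id);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, image.data()); // GL_RGBA (or the sized GL_RGBA8) instead of 4
glBindTexture(GL_TEXTURE_2D, 0);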
As the comments suggest, try changing the following (a sketch of the ported shaders follows this list):
Vertex shader:
out instead of varying
Add layout(location = #) to your attributes and change attribute to in.
Make sure the location numbers match your vertex-attribute setup in the code.
Fragment shader (assuming, since it's not complete):
in instead of varying
Declare your own out vec4 color variable and write to it instead of the deprecated gl_FragColor.
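A possible port of both shaders, as a sketch. Note that layout(location = N) on vertex inputs is core only from GLSL 3.30 (GL 3.3) or via ARB_explicit_attrib_location; with #version 150 you can instead bind the locations from the application with glBindAttribLocation before linking, or query them with glGetAttribLocation:
// vertex shader
#version 150
in vec3 pos;
in vec2 tex;
in vec3 normal;
out vec2 uv_coord;
uniform mat4 mvp_mat;
void main(void) {
    gl_Position = mvp_mat * vec4(pos, 1.0);
    uv_coord = tex;
}

// fragment shader
#version 150
uniform sampler2D sampler;
in vec2 uv_coord;
out vec4 frag_color; // with a single output it lands on draw buffer 0; otherwise use glBindFragDataLocation
void main(void) {
    frag_color = texture(sampler, uv_coord);
}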
For the code:
Change
glUniform1i(sampler_loc, GL_TEXTURE0)
to
glUniform1i(sampler_loc, 0)
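A short sketch of the matching draw-time state, using the names from the question:
glActiveTexture(GL_TEXTURE0);                         // select texture unit 0
glBindTexture(GL_TEXTURE_2D, texture_id_from_above);  // bind the texture to that unit
glUniform1i(shader_sampler_loc, 0);                   // the sampler uniform takes the unit index, not GL_TEXTURE0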
Related
I'm trying to implement deferred rendering in an engine I'm developing as a personal learning project, and I can't figure out what I'm doing wrong when rendering all the textures in the GBuffer to check whether the implementation is okay.
The thing is that I currently have a framebuffer with 3 color attachments for the different textures of the GBuffer (color, normal and position), which I initialize as follows:
glCreateFramebuffers(1, &id);
glBindFramebuffer(GL_FRAMEBUFFER, id);
std::vector<uint> textures;
textures.resize(3);
glCreateTextures(GL_TEXTURE_2D, 3, textures.data());
for(size_t i = 0; i < 3; ++i)
{
glBindTexture(GL_TEXTURE_2D, textures[i]);
if(i == 0) // For Color Buffer
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
else
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, textures[i], 0);
}
GLenum color_buffers[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
glDrawBuffers((GLsizei)textures.size(), color_buffers);
uint depth_texture;
glCreateTextures(GL_TEXTURE_2D, 1, &depth_texture);
glBindTexture(GL_TEXTURE_2D, depth_texture);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH24_STENCIL8, width, height);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, depth_texture, 0);
bool fbo_status = glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE;
ASSERT(fbo_status, "Framebuffer Incompleted!");
glBindFramebuffer(GL_FRAMEBUFFER, 0);
This doesn't report any errors, and it seems to work since the forward renderer's framebuffer renders properly. Then, when rendering, I run the following code after binding the framebuffer and clearing the color and depth buffers:
camera_buffer->Bind();
camera_buffer->SetData("ViewProjection", glm::value_ptr(viewproj_mat));
camera_buffer->SetData("CamPosition", glm::value_ptr(glm::vec4(view_position, 0.0f)));
camera_buffer->Unbind();
for(Entity& entity : scene_entities)
{
shader->Bind();
Texture* texture = entity.GetTexture();
BindTexture(0, texture);
shader->SetUniformMat4("u_Model", entity.transform);
shader->SetUniformInt("u_Albedo", 0);
shader->SetUniformVec4("u_Material.AlbedoColor", entity.AlbedoColor);
shader->SetUniformFloat("u_Material.Smoothness", entity.Smoothness);
glBindVertexArray(entity.VertexArray);
glDrawElements(GL_TRIANGLES, entity.VertexArray.index_buffer.count, GL_UNSIGNED_INT, nullptr);
// Shader, VArray and Textures Unbindings
}
So with this code I manage to render the 3 textures created, using the ImGui::Image function and switching the texture index between 0, 1 and 2, like this:
ImGui::Image((ImTextureID)(fbo->textures[0]), viewport_size, ImVec2(0, 1), ImVec2(1, 0));
Now, the color texture (at index 0) works perfectly, as the next image shows:
But when rendering the normal and position textures (indices 1 and 2), I get no result:
Does anybody see what I'm doing wrong? I've been at this for hours and hours and I cannot see it. I ran it through RenderDoc and I couldn't see anything wrong; the textures displayed in RenderDoc are the same as in the engine.
The vertex shader I use when rendering the entities is the following:
layout(location = 0) in vec3 a_Position;
layout(location = 1) in vec2 a_TexCoord;
layout(location = 2) in vec3 a_Normal;
out IBlock
{
vec2 TexCoord;
vec3 FragPos;
vec3 Normal;
} v_VertexData;
layout(std140, binding = 0) uniform ub_CameraData
{
mat4 ViewProjection;
vec3 CamPosition;
};
uniform mat4 u_ViewProjection = mat4(1.0);
uniform mat4 u_Model = mat4(1.0);
void main()
{
vec4 world_pos = u_Model * vec4(a_Position, 1.0);
v_VertexData.TexCoord = a_TexCoord;
v_VertexData.FragPos = world_pos.xyz;
v_VertexData.Normal = transpose(inverse(mat3(u_Model))) * a_Normal;
gl_Position = ViewProjection * u_Model * vec4(a_Position, 1.0);
}
And the fragment shader is the following; they are both pretty simple:
layout(location = 0) out vec4 gBuff_Color;
layout(location = 1) out vec3 gBuff_Normal;
layout(location = 2) out vec3 gBuff_Position;
in IBlock
{
vec2 TexCoord;
vec3 FragPos;
vec3 Normal;
} v_VertexData;
struct Material
{
float Smoothness;
vec4 AlbedoColor;
};
uniform Material u_Material = Material(1.0, vec4(1.0));
uniform sampler2D u_Albedo, u_Normal;
void main()
{
gBuff_Color = texture(u_Albedo, v_VertexData.TexCoord) * u_Material.AlbedoColor;
gBuff_Normal = normalize(v_VertexData.Normal);
gBuff_Position = v_VertexData.FragPos;
}
It is not clear from the question what exactly might be happening here, as lots of GL state - both at the time of rendering to the G-buffer and at the time the G-buffer textures are rendered for visualization - is simply unknown. However, from the images given in the question, one cannot conclude that the actual color output to attachments 1 and 2 is not working.
One issue which comes to mind is alpha blending. The color values processed by the per-fragment operations after the fragment shader always work with RGBA values - although the value of the A channel only matters if you enable blending and use a blend function which somehow depends on the source alpha.
If you declare a custom fragment shader output as float, vec2 or vec3, the remaining components stay undefined (an undefined value, not undefined behavior). This does not pose a problem unless some other operation you do depends on those values.
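If you want the alpha channels of those attachments to hold defined values, one option (a trimmed sketch based on the question's fragment shader, with the version directive assumed) is to declare the outputs as vec4 and write the alpha explicitly:
#version 450 core // assumed; the DSA calls in the question imply a recent core context
layout(location = 0) out vec4 gBuff_Color;
layout(location = 1) out vec4 gBuff_Normal;   // vec4 instead of vec3
layout(location = 2) out vec4 gBuff_Position; // vec4 instead of vec3
in IBlock
{
    vec2 TexCoord;
    vec3 FragPos;
    vec3 Normal;
} v_VertexData;
struct Material
{
    float Smoothness;
    vec4 AlbedoColor;
};
uniform Material u_Material = Material(1.0, vec4(1.0));
uniform sampler2D u_Albedo;
void main()
{
    gBuff_Color    = texture(u_Albedo, v_VertexData.TexCoord) * u_Material.AlbedoColor;
    gBuff_Normal   = vec4(normalize(v_VertexData.Normal), 1.0); // alpha is now defined
    gBuff_Position = vec4(v_VertexData.FragPos, 1.0);
}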
What we also have here is a GL_RGBA16F output format (which is the right choice, because none of the 3-component RGB formats are required to be color-renderable by the spec).
What might happen here is either:
Alpha blending is already turned on while rendering into the G-buffer. The fragment shader's alpha output happens to be zero, so the fragments appear 100% transparent and the contents of the texture are not changed.
Alpha blending is not used while rendering into the G-buffer, so the correct contents end up in the texture and the alpha channel just happens to end up all zeros. The texture might then be visualized with alpha blending enabled, ending up as a 100% transparent view.
If it is the first option, turn off blending when rendering into the G-buffer - it would not work with deferred shading anyway. You might still run into the second option then.
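For the first option, the fix is just blend state management around the geometry pass (a sketch; the FBO id and draw calls stand in for the engine's own code):
glBindFramebuffer(GL_FRAMEBUFFER, gbuffer_fbo); // the G-buffer FBO created above
glDisable(GL_BLEND);                            // write raw attribute data, no blending
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... draw the scene entities into the G-buffer ...
// lighting / visualization passes afterwards may re-enable blending if they need it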
If it is the second option, there is no issue at all - the lighting passes which follow will read the data they need (and ultimately you will want to put useful information into the alpha channel so as not to waste it and to be able to reduce the number of attachments). It is just your visualization (which I assume is for debug purposes only) that is wrong. You can try to fix the visualization.
As a side note: storing the world-space position in the G-buffer is a huge waste of bandwidth. All you need to reconstruct the world-space position is the depth value and the inverses of your view and projection matrices. Also, storing world-space position in GL_RGBA16F will very easily run into precision issues if you move your camera away from the world-space origin.
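As an illustration of that reconstruction, a fragment-shader sketch (u_Depth, u_InvViewProjection and v_UV are assumed names, not part of the question's code):
uniform sampler2D u_Depth;        // the depth attachment, sampled as a texture
uniform mat4 u_InvViewProjection; // inverse(projection * view), uploaded by the application
in vec2 v_UV;                     // full-screen quad UV in [0, 1]
vec3 ReconstructWorldPos()
{
    float depth = texture(u_Depth, v_UV).r;
    vec4 ndc = vec4(v_UV * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0); // assumes the default [0, 1] depth range
    vec4 world = u_InvViewProjection * ndc;
    return world.xyz / world.w;
}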
I'm studying the clipmap algorithm, and I want to fetch elevations via vertex texture fetch (VTF).
But I've run into a problem when using vertex textures, and I don't know what's wrong.
The related code looks like this:
int width=127;
float *data=new float[width*width];
for(int i=0;i<width*width;i++)
data[i]=float(rand()%100)/100.0;
glGenTextures(1, &vertexTexture);
//glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, vertexTexture);
//glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
//glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,GL_NEAREST_MIPMAP_NEAREST);
glTexImage2D(
GL_TEXTURE_2D, 0, GL_LUMINANCE_FLOAT32_ATI,
width, width, 0, GL_LUMINANCE, GL_FLOAT, data);
The GLSL code in the vertex shader is like this:
#version 410
uniform mat4 mvpMatrix;
uniform sampler2D vertexTexture;
in vec2 vVertex;
void main(void)
{
vec2 vTexCoords=vVertex/127.0;
float height = texture2DLod(vertexTexture, vTexCoords,0.0).x*100.0;
// I also tried texture2D(vertexTexture, vTexCoords)
// and texture(vertexTexture, vTexCoords),but they don't work.
vec4 position=vec4(vVertex.x,height,vVertex.y,1.0);
gl_Position = mvpMatrix * position;
}
I store some random floats in the array data and then upload them to a texture, and, as the vertex shader shows, I want to fetch values for the y coordinate via VTF. But the result is that the height is always 0. Something must be wrong, but I don't know what, or how to do it the right way.
It's solved now - the answer is below. Thank you all!
Try setting the texture's minification filter to GL_NEAREST or GL_LINEAR after glTexImage2D():
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
The OpenGL default is to use mipmaps, and you didn't upload any, which makes the texture incomplete and disables sampling from that texture image unit.
Then you can use texture(vertexTexture, vTexCoords) inside the shader instead of the deprecated texture2DLod() version with the explicit LOD access.
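Putting it together, the setup might look like this (a sketch; switching to the sized GL_R32F/GL_RED pair is my suggestion for a core-profile context, since GL_LUMINANCE_FLOAT32_ATI comes from an old extension):
glGenTextures(1, &vertexTexture);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, vertexTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // no mipmaps are uploaded, so no mipmap filter
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, width, 0, GL_RED, GL_FLOAT, data);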
I am having problems getting the correct texture coordinate to sample my shadow map. Looking at my code, the problem appears to be from incorrect matrices. This is the fragment shader for the rendering pass where I do shadows:
in vec2 st;
uniform sampler2D colorTexture;
uniform sampler2D normalTexture;
uniform sampler2D depthTexture;
uniform sampler2D shadowmapTexture;
uniform mat4 invProj;
uniform mat4 lightProj;
uniform vec3 lightPosition;
out vec3 color;
void main () {
vec3 clipSpaceCoords;
clipSpaceCoords.xy = st.xy * 2.0 - 1.0;
clipSpaceCoords.z = texture(depthTexture, st).x * 2.0 - 1.0;
vec4 position = invProj * vec4(clipSpaceCoords,1.0);
position.xyz /= position.w;
//At this point, position.xyz seems to be what it should be, the world space coordinates of the pixel. I know this because it works for lighting calculations.
vec4 lightSpace = lightProj * vec4(position.xyz,1.0);
//This line above is where I think things go wrong.
lightSpace.xyz /= lightSpace.w;
lightSpace.xyz = lightSpace.xyz * 0.5 + 0.5;
float lightDepth = texture(shadowmapTexture, lightSpace.xy).x;
//Right here lightDepth seems to be incorrect. The only explanation I can think of for this is if there is a problem in the above calculations leading to lightSpace.xy.
float shadowFactor = 1.0;
if(lightSpace.z > lightDepth+0.0005) {
shadowFactor = 0.2;
}
color = vec3(lightDepth);
}
I have removed all the code irrelevant to shadowing from this shader (Lighting, etc). This is the code I use to render the final pass:
glCullFace(GL_BACK);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
postShader->UseShader();
postShader->SetUniform1I("colorTexture", 0);
postShader->SetUniform1I("normalTexture", 1);
postShader->SetUniform1I("depthTexture", 2);
postShader->SetUniform1I("shadowmapTexture", 3);
//glm::vec3 cp = camera->GetPosition();
postShader->SetUniform4FV("invProj", glm::inverse(camera->GetCombinedProjectionView()));
postShader->SetUniform4FV("lightProj", lights[0].camera->GetCombinedProjectionView());
//Again, if I had to guess, these two lines above would be part of the problem.
postShader->SetUniform3F("lightPosition", lights[0].x, lights[0].y, lights[0].z);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, frameBuffer->GetColor());
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, frameBuffer->GetNormals());
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, frameBuffer->GetDepth());
glActiveTexture(GL_TEXTURE3);
glBindTexture(GL_TEXTURE_2D, lights[0].shadowmap->GetDepth());
this->BindPPQuad();
glDrawArrays(GL_TRIANGLES, 0, 6);
In case it is relevant to my problem, here is how I generate the depth framebuffer attachments for the depth and shadow maps:
void FrameBuffer::Init(int textureWidth, int textureHeight) {
glGenFramebuffers(1, &fbo);
glGenTextures(1, &depth);
glBindTexture(GL_TEXTURE_2D, depth);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, textureWidth, textureHeight, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depth, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
Where is the problem in my math or my code, and what can I do to fix it?
After some experimentation, I have found that my problem does not lie in my matrices but in my clamping. It seems that I get strange values when I use GL_CLAMP or GL_CLAMP_TO_EDGE, but I get almost correct values when I use GL_CLAMP_TO_BORDER. There are more problems, but they do not seem to be matrix-related as I thought.
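In case it helps anyone with the same symptom: plain GL_CLAMP is a legacy enumerant (removed from the core profile), and for shadow maps the usual choice is clamp-to-border with a border depth of 1.0, so that samples outside the light frustum read as "not in shadow" (a sketch applied to the depth texture setup above):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
const float border[4] = { 1.0f, 1.0f, 1.0f, 1.0f }; // depth 1.0 == farthest, i.e. never shadowed
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, border);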
This problem is driving me crazy since the code was working perfectly before. I have a fragment shader which combines two textures based on the value set in the alpha channel. The output is rendered to a third texture using an FBO.
Since I need to perform a post-processing step on the combined texture, I check the value of the alpha channel to determine whether that texel will need post-processing or not (i.e., I'm using the alpha channel value as a mask). The problem is, the post-processing shader is reading a value of 1.0 for all the texels in the input texture!
Here is the fragment shader that combines the two textures:
uniform samplerRect tex1;
uniform samplerRect tex2;
in vec2 vTexCoord;
out vec4 fColor;
void main(void) {
vec4 color1, color2;
color1 = texture(tex1, vTexCoord.st);
color2 = texture(tex2, vTexCoord.st);
if (color1.a == 1.0) {
fColor = color2;
} else if (color2.a == 1.0) {
fColor = color1;
} else {
fColor = (color1 + color2) / 2.0;
}
}
The texture object that I attach to the FBO is set up as follows:
glGenTextures(1, &glBufferTex);
glBindTexture(GL_TEXTURE_RECTANGLE, glBufferTex);
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
Code that attaches the texture to the FBO is:
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_RECTANGLE, glBufferTex, 0);
I even added a call to glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE) before attaching the FBO! What could possibly be going wrong that makes the next-stage fragment shader read 1.0 for all texels?!
NOTE: I did check that not all the values of the alpha channel for texels in the two textures that I combine are 1.0. Most of them actually are not.
I'm currently working on a simple 3D scene in OpenGL 3.3, but when I try to texture the objects, all of them come out entirely black. However, if I change the context version to 3.1, it has no problem rendering the textures correctly over the models.
I'm not sure if this suggests I'm using deprecated functionality/methods, but I'm struggling to see where the problem could be.
Setting up the texture
(load texture from file)
...
glGenTextures(1, &TexID); // Create The Texture ( CHANGE )
glBindTexture(GL_TEXTURE_2D, TexID);
glTexImage2D(GL_TEXTURE_2D, 0, texture_bpp / 8, texture_width, texture_height, 0, texture_type, GL_UNSIGNED_BYTE, texture_imageData);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
...
Binding the Texture to Render
// mLocation is the layout location in the shader, from glGetUniformLocation
// mTextureUnit is the specified texture unit to load into. Currently using 0.
// mTextureID is the ID of the loaded texture, as generated above.
glActiveTexture( GL_TEXTURE0 + mData.mTextureUnit );
glBindTexture( GL_TEXTURE_2D, mData.mTextureID );
glUniform1i( mLocation, mData.mTextureUnit );
Fragment Shader
uniform sampler2D diffusemap;
in vec2 passUV;
out vec3 outColour;
...
outColour = texture( diffusemap, passUV ).rgb;
All textures being used are power of 2, square sizes.
Images showing the problem.
GL3.1: http://i.imgur.com/NUgj6vA.png
GL3.3: http://i.imgur.com/oOc0jcd.png
Vertex Shader
#version 330 core
uniform mat4 p;
uniform mat4 v;
uniform mat4 m;
in vec3 vertex;
in vec3 normal;
in vec2 uv;
out vec3 passVertex;
out vec3 passNormal;
out vec2 passUV;
void main( void )
{
gl_Position = p * v * m * vec4( vertex, 1.0 );
passVertex = vec3( m * vec4( vertex, 1.0 ) );
passNormal = vec3( m * vec4( normal, 1.0 ) );
passUV = uv;
}
In the line:
glTexImage2D(GL_TEXTURE_2D, 0, texture_bpp / 8, texture_width, texture_height, 0, texture_type, GL_UNSIGNED_BYTE, texture_imageData);
The assumption that (texture_bpp / 8) will produce the correct format is incorrect. It should be one of the GLenum values that specify the internal format, such as GL_RGBA.
Correcting it to GL_RGB (or whichever format matches the internal format of the texture file) fixes the issue entirely, and it works on both GL 3.3 and GL 3.1:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texture_width, texture_height, 0, texture_type, GL_UNSIGNED_BYTE, texture_imageData);
For the sake of completeness: the internal format of a texture should be an enumerator - and one of the sized enumerators, not one of the unsized ones. Please stop using GL_RGB when you can use GL_RGB8.
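For example (a sketch matching the call above, assuming 8-bit RGB image data):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, texture_width, texture_height, 0, texture_type, GL_UNSIGNED_BYTE, texture_imageData);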
The answer correctly identifies the issue, but it would be helpful to have it explained why the previous assumption works on 3.1 and not on 3.3.
The ability to use a number in the range [1, 4] was deprecated in OpenGL 3.0 and removed in OpenGL 3.1. However, at that time there wasn't a way to say "give me the actual core profile of OpenGL version 3.1"; the WGL/GLX_CONTEXT_CORE_PROFILE_BIT_ARB bits didn't exist. Therefore, when you got a 3.1 context, it was perfectly legal for an implementation to export the ARB_compatibility extension, which still allowed all of the removed functionality.
In 3.2, the ability to explicitly select a profile was added to OpenGL, at which point no implementation would expose ARB_compatibility in a core profile. That's why your code works when you ask for 3.1 (since the implementation is free to give you 3.1 compatibility), but not when you ask for a 3.3 core profile.
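For completeness, an explicit core-profile request typically looks like this if you happen to use GLFW (an assumption; raw WGL/GLX attribute lists use WGL/GLX_CONTEXT_CORE_PROFILE_BIT_ARB for the same purpose):
#include <GLFW/glfw3.h>
// Request a 3.3 core, forward-compatible context; no ARB_compatibility will be exposed here.
glfwInit();
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GLFW_TRUE); // optional; required on macOS
GLFWwindow* window = glfwCreateWindow(1280, 720, "GL 3.3 core", nullptr, nullptr);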