I am writing an OpenGL program and have multiple models drawn into the environment, but I don't know how to apply a different texture to each of those models, so for now they all share the same texture. I've read that I need to use multiple texture units or a texture atlas. A texture atlas seems more complicated, so I'm trying to use texture units.
The way I thought this process worked is:
1. Generate two texture units with glGenTextures.
2. Bind the data of the first image to the first texture unit with glBindTexture and glTexImage2D.
3. Do the same with the second image.
From here, I thought I could tell OpenGL which texture unit I want to use with glActiveTexture. This seems to work with just one texture (i.e. skipping step 3), but fails with two or more.
I'm sure I'm missing something, so could someone point me in the right direction?
//Generate textures
int texturec=2;
int w[texturec],h[texturec];
unsigned char *data[texturec];
data[0]=getImage(&w[0],&h[0],"resources/a.png");
data[1]=getImage(&w[1],&h[1],"resources/b.png");
//Apply textures
GLuint textures[texturec];
glGenTextures(texturec, textures);
//Bind a.png to the first texture unit
glBindTexture(GL_TEXTURE_2D, textures[0]);
glTexImage2D(GL_TEXTURE_2D, 0,GL_RGB, w[0], h[0], 0, GL_RGB, GL_UNSIGNED_BYTE, data[0]);
//Bind b.png to the second texture unit
glBindTexture(GL_TEXTURE_2D, textures[1]);
glTexImage2D(GL_TEXTURE_2D, 0,GL_RGB, w[1], h[1], 0, GL_RGB, GL_UNSIGNED_BYTE, data[1]);
glActiveTexture(GL_TEXTURE0);
//Not super clear on what this does, but it needs to be here.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
And here are my shaders:
Fragment:
#version 330 core
in vec2 UV;
out vec3 color;
uniform sampler2D textureSampler;
void main(){
color=texture(textureSampler,UV).rgb;
}
Vertex:
#version 330 core
layout(location=0) in vec3 vertexPos;
layout(location=1) in vec2 vertexUV;
out vec2 UV;
uniform mat4 MVP;
void main(){
gl_Position=MVP*vec4(vertexPos,1);
UV=vertexUV;
}
Edit:
Here is my new code after applying Rabid76's suggestions:
//Generate textures
int texturec=2;
int w[texturec],h[texturec];
unsigned char *data[texturec];
data[0]=getImage(&w[0],&h[0],"resources/a.png");
data[1]=getImage(&w[1],&h[1],"resources/b.png");
//Apply textures
GLuint textures[texturec];
glGenTextures(texturec, textures);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textures[0]);
glTexImage2D(GL_TEXTURE_2D, 0,GL_RGB, w[0], h[0], 0, GL_RGB, GL_UNSIGNED_BYTE, data[0]);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, textures[1]);
glTexImage2D(GL_TEXTURE_2D, 0,GL_RGB, w[1], h[1], 0, GL_RGB, GL_UNSIGNED_BYTE, data[1]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
//Shaders
programID=loadShaders("shaders/vertex.shader","shaders/fragment.shader");
glUseProgram(programID);
//Use texture unit 1
glActiveTexture(GL_TEXTURE1);
GLint texLoc=glGetUniformLocation(programID, "textureSampler");
glUniform1i(texLoc, 1);
"Generate two texture units with glGenTextures."
No. glGenTextures doesn't generate a texture unit.
glGenTextures reserves name value(s), which can be used for texture objects.
glBindTexture binds a named texture to a texturing target. When this function is called, the texture object is bound to the currently active texture unit.
The current texture unit can be set by glActiveTexture:
e.g.
GLuint textures[texturec];
glGenTextures(texturec, textures);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textures[0]);
// [...]
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, textures[1]);
// [...]
The texture object's name value and the texture unit are completely different things.
The texture unit is the binding point between the texture object and the shader program.
On one side, the texture object has to be bound to the texture unit; on the other side, the texture unit index has to be assigned to the texture sampler uniform. The texture unit is the "link" between the two.
Since GLSL version 4.20, the texture unit can be set by a Layout Qualifier (GLSL) within the shader. The binding point corresponds to the texture unit, e.g. binding = 1 means texture unit 1:
layout(binding = 1) uniform sampler2D textureSampler;
Alternatively the texture unit index can be assigned to the texture sampler uniform by glUniform1i:
GLint texLoc = glGetUniformLocation(programID, "textureSampler");
glUniform1i(texLoc, 1);
If the shader program uses a single texture sampler uniform
uniform sampler2D textureSampler;
then it is sufficient to bind the proper texture object before the draw call:
glActiveTexture(GL_TEXTURE0); // this is default
glBindTexture(GL_TEXTURE_2D, textures[0]);
But if the shader program uses multiple texture sampler uniforms (of the same target), then the different texture objects have to be bound to different texture units:
e.g.
layout(binding = 3) uniform sampler2D textureSampler1;
layout(binding = 4) uniform sampler2D textureSampler2;
glActiveTexture(GL_TEXTURE3);
glBindTexture(GL_TEXTURE_2D, textures[0]);
glActiveTexture(GL_TEXTURE4);
glBindTexture(GL_TEXTURE_2D, textures[1]);
or, equivalently, with explicit uniform locations (GLSL 4.30 or ARB_explicit_uniform_location):
layout(location = 7) uniform sampler2D textureSampler1;
layout(location = 8) uniform sampler2D textureSampler2;
glActiveTexture(GL_TEXTURE3);
glBindTexture(GL_TEXTURE_2D, textures[0]);
glActiveTexture(GL_TEXTURE4);
glBindTexture(GL_TEXTURE_2D, textures[1]);
glUseProgram(programID);
glUniform1i(7, 3); // location = 7 <- texture unit 3
glUniform1i(8, 4); // location = 8 <- texture unit 4
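For the original two-model case with a single sampler uniform, it is usually enough to keep the sampler on texture unit 0 and just bind a different texture object before each draw call. A minimal sketch of that flow (the VAO and index-count names are placeholders, not from the question's code):
glUseProgram(programID);
// textureSampler reads from texture unit 0 (which is also the default value of a sampler uniform)
glUniform1i(glGetUniformLocation(programID, "textureSampler"), 0);
glActiveTexture(GL_TEXTURE0);
// Draw the first model with a.png
glBindTexture(GL_TEXTURE_2D, textures[0]);
glBindVertexArray(modelVaoA);
glDrawElements(GL_TRIANGLES, modelIndexCountA, GL_UNSIGNED_INT, 0);
// Draw the second model with b.png
glBindTexture(GL_TEXTURE_2D, textures[1]);
glBindVertexArray(modelVaoB);
glDrawElements(GL_TRIANGLES, modelIndexCountB, GL_UNSIGNED_INT, 0);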
Related
Now I have a noise texture generated by this website: https://aeroson.github.io/rgba-noise-image-generator/. I want to use 4 uniform samplers in my compute shader to get 4 random RGBA values from a single noise texture. My compute shader source looks like:
#version 430 core
layout (local_size_x = 1, local_size_y = 1, local_size_z = 1) in;
uniform sampler2D noise_r0;
uniform sampler2D noise_i0;
uniform sampler2D noise_r1;
uniform sampler2D noise_i1;
layout (binding = 0, rgba32f) writeonly uniform image2D tilde_h0k;
layout (binding = 1, rgba32f) writeonly uniform image2D tilde_h0minusk;
uniform int N = 256;
....
// Box-Muller algorithm
vec2 texCoord = vec2(gl_GlobalInvocationID.xy) / float(N); // Here every invocation refers to a pixel of output image
float noise00 = clamp(texture(noise_r0, texCoord).r, 0.001, 1.0);
float noise01 = clamp(texture(noise_i0, texCoord).r, 0.001, 1.0);
float noise02 = clamp(texture(noise_r1, texCoord).r, 0.001, 1.0);
float noise03 = clamp(texture(noise_i1, texCoord).r, 0.001, 1.0);
....
and in the main program, I use this code to upload my downloaded noise texture to the compute shader:
unsigned int tex_noise;
glGenTextures(1, &tex_noise);
glBindTexture(GL_TEXTURE_2D, tex_noise);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
int w_noise, h_noise, nrChannels;
unsigned char* data = stbi_load("noise.png", &w_noise, &h_noise, &nrChannels, 0);
if (data) {
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w_noise, h_noise, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
glGenerateMipmap(GL_TEXTURE_2D);
}
else std::cout << "Failed to load noise texture." << std::endl;
stbi_image_free(data);
....
....
My question is: can I set up those sampler2Ds in the compute shader using this code?
glUseProgram(computeProgram);
glActiveTexture(GL_TEXTURE1);
glUniform1i(glGetUniformLocation(computeProgram, "noise_r0"), 1);
glActiveTexture(GL_TEXTURE2);
glUniform1i(glGetUniformLocation(computeProgram, "noise_i0"), 2);
glActiveTexture(GL_TEXTURE3);
glUniform1i(glGetUniformLocation(computeProgram, "noise_r1"), 3);
glActiveTexture(GL_TEXTURE4);
glUniform1i(glGetUniformLocation(computeProgram, "noise_i1"), 4);
If it is wrong, what should I do to set up those sampler2Ds and make sure that the random RGBA values I get from them are not the same? (Because if they are the same, the Box-Muller algorithm won't work.)
Thanks so much for your help!
The type of the uniform is image2D, not sampler2D. To load and store an image, you must bind the texture to an image unit using glBindImageTexture. See Image Load Store. e.g.:
glBindImageTexture(1, tex_noise, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);
If you want to bind a texture to a texture unit you need to select active texture unit with glActiveTexture and then bind the texture with glBindTexture:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture_object);
You can access this texture in the shader with a uniform of type sampler2D and the texture* functions:
layout(binding = 1) uniform sampler2D myTexture;
If you want to bind a texture to a image unit, you have to use glBindImageTexture:
glBindImageTexture(1, texture_object, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);
You can access this texture in the shader with a uniform of type image2D and the image* functions:
layout (binding = 1, rgba32f) writeonly uniform image2D myImage;
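Applied to the question's code, note that the posted snippet selects texture units but never binds a texture to them. A corrected set-up might look roughly like this (a sketch only; the four tex_noise_* names and the two output texture names are placeholders for separate texture objects that the question does not show):
glUseProgram(computeProgram);
// Bind the input noise textures to texture units 1..4 and tell each sampler which unit to read
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, tex_noise_r0);
glUniform1i(glGetUniformLocation(computeProgram, "noise_r0"), 1);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, tex_noise_i0);
glUniform1i(glGetUniformLocation(computeProgram, "noise_i0"), 2);
glActiveTexture(GL_TEXTURE3);
glBindTexture(GL_TEXTURE_2D, tex_noise_r1);
glUniform1i(glGetUniformLocation(computeProgram, "noise_r1"), 3);
glActiveTexture(GL_TEXTURE4);
glBindTexture(GL_TEXTURE_2D, tex_noise_i1);
glUniform1i(glGetUniformLocation(computeProgram, "noise_i1"), 4);
// The writeonly image2D uniforms are bound to image units, not texture units
glBindImageTexture(0, tex_tilde_h0k,      0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);
glBindImageTexture(1, tex_tilde_h0minusk, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);
glDispatchCompute(256, 256, 1);
// Barrier before the written images are accessed again (pick the bit matching the later use)
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
If all four samplers end up reading the same texture, they will return the same value for a given texel, so you would either bind four different noise textures (as sketched above) or read different channels of the one RGBA noise texture.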
I have a program that takes the following texture, which is generated via FreeType2. Basically, it's a texture atlas of every character that I've requested to be drawn. The characters are bright and clear; in fact, the top-leftmost pixel of the lowercase 'i' has a value of 71 (out of 255) or 0.7098 when I inspect the texture in RenderDoc.
Next, the engine blits letters onto a Framebuffer Object. This is done via textured quads. The vertex shader:
#version 330
layout(location=0) in vec2 inVertexPosition;
layout(location=1) in vec2 inTexelCoords;
layout(location=2) in float inDepth;
out vec2 texelCoords;
out float depth;
void main()
{
gl_Position = vec4(inVertexPosition.x,-inVertexPosition.y, 0.0, 1.0);
texelCoords = vec2(inTexelCoords.x,1-inTexelCoords.y);
depth = inDepth;
}
And the frag shader:
#version 330
layout(location=0) out vec4 frag_colour;
in vec2 texelCoords;
in float depth;
uniform sampler2D uTexture;
uniform vec4 uTextColor;
void main()
{
vec4 c = texture(uTexture,texelCoords);
frag_colour = uTextColor * c.r;
gl_FragDepth = depth;
}
As you can see, it sets the pixel color to the text color scaled by the red channel.
However, when I view the contents of the FBO via RenderDoc (saved out to file), I see this:
If you look at this without transparency (just a second layer added underneath in Gimp to illustrate better):
You can see that the text is a little faded compared to what it was before. If you look at the top-leftmost pixel of the lowercase 'i', it's now a value of 50.2, or for a range of 0-1 it's 0.50196 (via RenderDoc).
Next, when the FBO is finally put onto the screen via another textured quad, it fades even more. First, here's the vertex shader:
#version 330
layout(location=0) in vec2 inVertexPosition;
layout(location=1) in vec2 inTexelCoords;
varying vec2 texelCoords;
void main()
{
gl_Position = vec4(inVertexPosition.x,-inVertexPosition.y, 0.0, 1.0);
texelCoords = vec2(inTexelCoords.x,1-inTexelCoords.y);
}
and the fragment shader:
#version 330
precision highp float;
layout(location=0) out vec4 frag_colour;
varying vec2 texelCoords;
uniform sampler2D uTexture;
void main()
{
vec4 c = texture(uTexture,texelCoords);
frag_colour = c;
}
The results, as I said are more faded than before:
original:
gimp background for clarity:
now that pixel has a value of 25.1 or 0.05139.
What is causing this fading after every render?
I think it's important to note that the brighter areas don't fade.
My Framebuffer creation code
glGenFramebuffers(1, &m_framebuffer);
glGenTextures(1, &m_fboColorAttachment);
glGenTextures(1, &m_fboAdditionalInfo);
glGenTextures(1, &m_fboDepthStencil);
glCall(glBindFramebuffer,GL_FRAMEBUFFER, m_framebuffer);
/* setup color output 0 */
glBindTexture(GL_TEXTURE_2D, m_fboColorAttachment);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
glCall(glTexImage2D, GL_TEXTURE_2D, 0, GL_RGBA8, screenDimensionsX, screenDimensionsY, 0, GL_BGRA, GL_UNSIGNED_BYTE, nullptr);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_fboColorAttachment, 0);
glBindTexture(GL_TEXTURE_2D, 0);
/* setup color output 1 */
glBindTexture(GL_TEXTURE_2D, m_fboAdditionalInfo);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
glCall(glTexImage2D, GL_TEXTURE_2D, 0, GL_R32UI, screenDimensionsX, screenDimensionsY, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, nullptr);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, m_fboAdditionalInfo, 0);
glBindTexture(GL_TEXTURE_2D, 0);
/* setup depth and stencil */
glBindTexture(GL_TEXTURE_2D, m_fboDepthStencil);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
glCall(glTexImage2D, GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, screenDimensionsX, screenDimensionsY, 0, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, nullptr);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, m_fboDepthStencil, 0);
glBindTexture(GL_TEXTURE_2D, 0);
The initial (red) texture creation:
glActiveTexture(GL_TEXTURE0);
glGenTextures(1,&textureData.texture);
glBindTexture(GL_TEXTURE_2D,textureData.texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, 500, 500, 0, GL_RED, GL_UNSIGNED_BYTE, 0);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
My blending is done as
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
"has a value of 71 (out of 255) or 0.7098"
No idea what you even mean here. 71/255 would be 0.278; 0.7098 normalized would be 181 out of 255. It looks like your "out of 255" values are actually percentages, out of 100%.
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
You use the color value from the red channel also as alpha, so when you have blending enabled, you end up with 0.7098 * 0.7098 = 0.5038. Since the result is rounded to the nearest representable value, you actually end up with 181/255 * 181/255 = 0.50382, rounded to 128/255.
"it's now a value of 50.2, or for a range of 0-1 it's 0.50196"
128/255 is 0.50196078....
So the solution is: disable blending for all steps when you don't need it. Or if you need it, set useful alpha values.
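For the first option, a rough sketch of what that could look like around the two passes (not the poster's actual render loop):
// Blend while drawing the glyph quads into the FBO, if blending is actually wanted there
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// ... draw the text quads into the FBO ...
// The later passes just copy already-composited colors, so blending only darkens them
glDisable(GL_BLEND);
// ... draw the full-screen quad that puts the FBO texture on the screen ...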
Side note:
"now that pixel has a value of 25.1 or 0.05139."
No idea what this means; 25.1 does not relate to 0.05139 in any obvious way. You definitely switched the meaning of those values again.
I have two framebuffers that I am rendering two different objects to. When I use the default framebuffer, I have both the objects rendering on the same one.
I want this behaviour to work when using multiple framebuffers. How do I merge two framebuffers and render the winning fragments on top (depth tested)? Basically like a Photoshop layer merge, but with depth testing.
I got as far as blitting a single framebuffer onto the default framebuffer, but I'm lost as to how I would merge two framebuffers together!
Note: I have a color and a depth attachment to the framebuffers.
Edit:
Alright. I almost have the setup of rendering to a quad working, except for one little thing. My color buffers are properly sent to the shader using uniform samplers, but the depth values I read from the depth buffers are always 0.
This is how I have my depth buffers setup within the framebuffer.
glGenFramebuffers(1, &_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, _fbo);
glGenTextures(1, &_cbo);
glGenTextures(1, &_dbo);
{
glBindTexture(GL_TEXTURE_2D, _cbo);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, dim.x, dim.y, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
}
{
glBindTexture(GL_TEXTURE_2D, _dbo);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, dim.x, dim.y, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, nullptr);
}
glBindFramebuffer(GL_FRAMEBUFFER, _fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _cbo, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, _dbo, 0);
This is how I send uniform samplers to the shader.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D,cbo1);
glUniform1i(glGetUniformLocation(QuadShader.Program, "color1"),0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, cbo2);
glUniform1i(glGetUniformLocation(QuadShader.Program, "color2"), 1);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, dbo1);
glUniform1i(glGetUniformLocation(QuadShader.Program, "depth1"), 2);
glActiveTexture(GL_TEXTURE3);
glBindTexture(GL_TEXTURE_2D, dbo2);
glUniform1i(glGetUniformLocation(QuadShader.Program, "depth2"), 3);
glBindVertexArray(BgVao);
glDrawArrays(GL_TRIANGLES, 0, 6);
This is how my shader looks:
uniform sampler2D color1;
uniform sampler2D color2;
uniform sampler2D depth1;
uniform sampler2D depth2;
out vec4 FragColor;
void main()
{
ivec2 texcoord = ivec2(floor(gl_FragCoord.xy));
vec4 depth1 = texelFetch(depth1, texcoord,0);
vec4 depth2 = texelFetch(depth2, texcoord,0);
if(depth1.z > depth2.z)
{
FragColor = texelFetch(color1, texcoord, 0);
}
else
{
FragColor = texelFetch(color2, texcoord, 0);
}
}
You will need a shader to achieve this. There is no built-in way to blit with depth testing. Here is one way to do it that combines the contents of both FBOs.
Vertex shader (assumes a quad is drawn from (-1,-1) to (1,1) )
layout(location = 0) in vec4 Position;
void main()
{
// Snap the input coordinates to a quad with it lower-left at (-1, -1, 0)
// and its top-right at (1, 1, 0)
gl_Position = vec4(sign(Position.xy), 0.0, 1.0);
}
The pixel shader could look like this:
uniform sampler2D Color0;
uniform sampler2D Color1;
uniform sampler2D Depth0;
uniform sampler2D Depth1;
in vec2 TexCoords;
layout(location = 0) out vec4 FragColor;
void main()
{
ivec2 texcoord = ivec2(floor(gl_FragCoord.xy));
float depth0 = texelFetch(Depth0, texcoord, 0).r;
float depth1 = texelFetch(Depth1, texcoord, 0).r;
// possibly reversed depending on your depth buffer ordering strategy
if (depth0 < depth1) {
FragColor = texelFetch(Color0, texcoord, 0);
} else {
FragColor = texelFetch(Color1, texcoord, 0);
}
}
See also OpenGL - How to access depth buffer values? - Or: gl_FragCoord.z vs. Rendering depth to texture for how to access the depth texture.
Note that I use texelFetch() here because linearly interpolating depth values does not give valid results.
Blitting will never use the depth test, so you have to use a full-screen shader pass to combine both framebuffers. There are two options:
Combine both framebuffers into a third one. This requires that both the color attachments and the depth attachments of both input FBOs are textures. You then render a full-screen quad and sample from both color textures and both depth textures. You basically do the depth test manually in the shader, comparing the two depths to decide which of the two color values to use as the final output color for the fragment.
You composite one of the framebuffers into the other, using the real depth test. In that case, only one of the FBOs has to use textures; the other one can use renderbuffers or the window-system provided buffers. You just have to render a full-screen quad, this time sampling the depth and color textures of only one input FBO, and render into the output FBO with depth testing enabled. You just set the color value as the output of the fragment shader and additionally write the depth value to gl_FragDepth.
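A sketch of the fragment shader for that second option (the sampler names are placeholders): it writes both the color and the depth of the source FBO, so the regular depth test decides per pixel whether it replaces what is already in the destination.
#version 330 core
uniform sampler2D srcColor;
uniform sampler2D srcDepth;
out vec4 FragColor;
void main()
{
    ivec2 texcoord = ivec2(floor(gl_FragCoord.xy));
    FragColor    = texelFetch(srcColor, texcoord, 0);
    gl_FragDepth = texelFetch(srcDepth, texcoord, 0).r; // depth is stored in the red channel
}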
This should be possible with a fragment shader that uses the color data and the depth data from both framebuffers and fills in each pixel by picking, for each fragment, the color from whichever framebuffer won the depth test.
#version 430
layout(location = 0) in vec2 tex_coord;
layout(binding = 0) uniform sampler2D color_texture_1;
layout(binding = 1) uniform sampler2D color_texture_2;
layout(binding = 2) uniform sampler2D depth_texture_1;
layout(binding = 3) uniform sampler2D depth_texture_2;
layout(location = 0) out vec4 fragment_color;
void main() {
float depth_1 = texture(depth_texture_1, tex_coord).z;
float depth_2 = texture(depth_texture_2, tex_coord).z;
if(!/*check to verify *something* was rendered here in either framebuffer*/)
discard;
else {
if(depth_1 > depth_2) //I'm pretty sure positive z values face the user
fragment_color = texture(color_texture_1, tex_coord);
else
fragment_color = texture(color_texture_2, tex_coord);
}
}
You'd render a (full-screen, presumably) quad, use a pass-through vertex shader with this fragment shader, and attach the respective textures.
I don't know what format your Depth Texture is in; my assumption is that it contains vectors representing the individual fragment coordinates, and that the z-coordinate contains its depth. If that isn't the case, you'll need to make adjustments.
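A pass-through vertex shader for that full-screen quad might look like this (a sketch; it assumes the quad's positions are already in clip space, i.e. spanning (-1,-1) to (1,1)):
#version 430
layout(location = 0) in vec2 position;
layout(location = 0) out vec2 tex_coord;
void main()
{
    tex_coord   = position * 0.5 + 0.5; // map clip-space [-1,1] to texture coordinates [0,1]
    gl_Position = vec4(position, 0.0, 1.0);
}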
It might be a stupid and trivial question, but since I'm kind of new to OpenGL, I don't understand how to send the color buffer and depth buffer coming from a single FBO to the fragment shader.
My FBO is created with one color texture and one depth texture:
glGenTextures(1, &texture_color);
glBindTexture(GL_TEXTURE_2D, texture_color);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glGenTextures(1, &texture_depth);
glBindTexture(GL_TEXTURE_2D, texture_depth);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
and I would like, in my fragment shader, to have
uniform sampler2D colorTexture;
uniform sampler2D depthTexture;
filled with both the color and the depth texture.
I can send just one or the other with glBindTexture (and it works well), but I can't manage to send both at the same time. Does somebody know how to do it?
Thanks!
Ok, I'm clearly no genius, since the solution is easy and I was just doing it wrong.
I was trying to use glActiveTexture(GL_DEPTH) followed by binding my texture (glBindTexture(GL_TEXTURE_2D, visibilityFBO.getDepthTexture());), thinking it was normal to use GL_DEPTH.
But I was wrong, and it works using:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, visibilityFBO.getColorTexture());
glUniform1i(glGetUniformLocation(visibilityPassShader.Program, "positionTexture"), 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, visibilityFBO.getDepthTexture());
glUniform1i(glGetUniformLocation(visibilityPassShader.Program, "depthTexture"), 1);
It's the same as before, but the depth attachment is treated as a regular texture, since I used glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0); to produce my depth texture. So I just have to activate the next texture unit and bind the depth texture to be able to use it.
In the fragment shader, it is still:
uniform sampler2D positionTexture;
uniform sampler2D depthTexture;
In that order.
Using OpenGL >= 4.2, you can write in the fragment shader
layout (binding = 0) uniform sampler2D positionTexture;
layout (binding = 1) uniform sampler2D depthTexture;
and skip the glUniform1i calls in the source code.
Hope it can help someone!
I have a simple RGBA texture that I will project on a rectangle/quad in OpenGL.
However, I want to do some operations in the rgb pixels of that texture, e.g., I want the displayed color to be some function of the RGB pixels of the original image.
My questions are: can I apply a fragment shader to a 2D texture? And if I can, how do I access the RGB values of the original texture image?
Any help would be appreciated.
You can certainly do this. Render the quad as usual and send the texture to the fragment shader as a sampler2D uniform.
Inside the fragment shader, you can then make calls to either texture or texelFetch. texture samples your texture with filtering, while texelFetch looks up a single texel by integer coordinate without performing any interpolation.
Here is an example of what your fragment shader might look like:
Fragment shader
#version 330 core
uniform vec2 resolution;
uniform sampler2D myTexture;
out vec3 color;
void main()
{
vec2 pos = gl_FragCoord.xy / resolution.xy;
pos.y = 1.0 - pos.y; // flip vertically; pos is already normalized to [0, 1]
color = texture(myTexture, pos).xyz;
}
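For completeness, a texelFetch variant of the same fragment shader might look like this (a sketch; it assumes the quad covers the whole viewport and the texture has the same size as the viewport, so no resolution uniform is needed):
#version 330 core
uniform sampler2D myTexture;
out vec3 color;
void main()
{
    // Look the texel up directly by integer coordinate; no filtering is applied
    ivec2 texel = ivec2(gl_FragCoord.xy);
    // add a vertical flip here if needed, e.g. texel.y = textureSize(myTexture, 0).y - 1 - texel.y;
    color = texelFetch(myTexture, texel, 0).rgb;
}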
And then on the application side:
Initialization
glGenTextures(1, &m_textureID);
glBindTexture(GL_TEXTURE_2D, m_textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, m_imageWidth, m_imageHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, pImage);
m_textureUniformID = glGetUniformLocation(m_programID, "myTexture");
Render loop
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_textureID);
glUniform1i(m_textureUniformID, 0);