Framebuffer with multiple draw buffers - OpenGL

I'm using a framebuffer with multiple color attachments (draw buffers) and writing colors with several shader programs.
Each shader program uses different render targets.
e.g. SHADER1 only uses the 1st draw buffer, but SHADER2 uses the 2nd and 3rd draw buffers.
But if I only specify the render target in the fragment shader like this:
// SHADER1.frag
layout(location = 0) out vec4 color;
void main() { color = vec4(1.0); }
This results in colors being written to every draw buffer, so I have to clear the 2nd and 3rd draw buffers.
Is this the default behavior?
And should I update the glDrawBuffers state for each shader program like this?
glUseProgram(SHADER1);
GLenum drawBuffers1[] = { GL_COLOR_ATTACHMENT0, GL_NONE, GL_NONE};
glDrawBuffers(3, drawBuffers1);
...
glUseProgram(SHADER2);
GLenum drawBuffers2[] = { GL_NONE, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2};
glDrawBuffers(3, drawBuffers2);
...

The state set with glDrawBuffers() is part of the framebuffer object (FBO) state. So as long as you use the same FBO for all your rendering, you'll have to call glDrawBuffers() every time you want to draw to different buffers.
Based on what you describe, I think it would be much easier for you to use multiple FBOs. Using pseudo-notation similar to the one in your question, you could make these calls once during setup:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, FBO1);
glFramebufferTexture2D(..., GL_COLOR_ATTACHMENT0, ..., BUFFER1, ...);
GLenum drawBuffers1[] = {GL_COLOR_ATTACHMENT0};
glDrawBuffers(1, drawBuffers1);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, FBO2);
glFramebufferTexture2D(..., GL_COLOR_ATTACHMENT0, ..., BUFFER2, ...);
glFramebufferTexture2D(..., GL_COLOR_ATTACHMENT1, ..., BUFFER3, ...);
GLenum drawBuffers2[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, drawBuffers2);
Then every time you render:
glUseProgram(SHADER1);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, FBO1);
...
glUseProgram(SHADER2);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, FBO2);
...
To make this work, your SHADER1 will have one output (with location 0) and SHADER2 will have two outputs (with locations 0 and 1).
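For illustration, SHADER2's fragment shader outputs could then look like this (the output names are just placeholders):
layout(location = 0) out vec4 out0; // written to GL_COLOR_ATTACHMENT0 of FBO2 (BUFFER2)
layout(location = 1) out vec4 out1; // written to GL_COLOR_ATTACHMENT1 of FBO2 (BUFFER3)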
If for some reason you wanted to stick to one FBO, and make your approach work, you'll have to be careful to get the result you want:
GLenum drawBuffers2[] = { GL_NONE, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2};
glDrawBuffers(3, drawBuffers2);
With these settings, and using a shader with two outputs, you'll have to set the locations of the outputs to 1 and 2 (using glBindFragDataLocation(), or location layout qualifiers in the shader code).
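With the layout qualifier approach that could look like this (again, the output names are just placeholders):
layout(location = 1) out vec4 outA; // routed to GL_COLOR_ATTACHMENT1
layout(location = 2) out vec4 outB; // routed to GL_COLOR_ATTACHMENT2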
You could also use this instead:
GLenum drawBuffers2[] = {GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2};
glDrawBuffers(2, drawBuffers2);
with locations 0 and 1 for the fragment shader outputs. One downside of this is that it will not work in case you ever want to port your code to OpenGL ES. In ES, the ith buffer can only be GL_NONE or GL_COLOR_ATTACHMENTi.

Even if your fragment shader does not output to a specific output location, that fragment still technically has outputs for those locations. Unwritten locations simply have an undefined value.
So if your draw buffer setting says "take fragment output location 1 and write it to color attachment 1", then that is what it will do. Always. Until you change it.
You could use write masking to turn writes off for specific color buffers. But really, it'd probably be better to set glDrawBuffers appropriately.
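For completeness, per-buffer write masking would look roughly like this, assuming all three attachments stay listed in glDrawBuffers (a sketch, not taken from the question):
// While SHADER1 is active: allow writes only to draw buffer 0
glColorMaski(0, GL_TRUE,  GL_TRUE,  GL_TRUE,  GL_TRUE);
glColorMaski(1, GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glColorMaski(2, GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);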

Related

OpenGL - blend two textures on the same object

I want to apply two textures on the same object (actually just a 2D rectangle) in order to blend them. I thought I would achieve that by simply calling glDrawElements with the first texture, then binding the other texture and calling glDrawElements a second time. Like this:
//create vertex buffer, frame buffer, depth buffer, texture sampler, build and bind model
//...
glEnable(GL_BLEND);
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ZERO);
glBlendEquation(GL_FUNC_ADD);
// Clear the screen
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Bind our texture in Texture Unit 0
GLuint textureID;
//create or load texture
//...
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID);
// Set our sampler to use Texture Unit 0
glUniform1i(textureSampler, 0);
// Draw the triangles !
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, (void*)0);
//second draw call
GLuint textureID2;
//create or load texture
//...
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID2);
// Set our sampler to use Texture Unit 0
glUniform1i(textureSampler, 0);
// Draw the triangles !
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, (void*)0);
Unfortunately, the 2nd texture is not drawn at all and I only see the first texture. If I call glClear between the two draw calls, it correctly draws the 2nd texture.
Any pointers? How can I force OpenGL to draw on the second call?
As an alternative to the approach you have followed so far, I would like to suggest using two texture samplers within your GLSL shader and performing the blending there. This way, you would be done with just one draw call, thus reducing CPU/GPU interaction. To do so, just define two texture samplers in your shader like
layout(binding = 0) uniform sampler2D texture_0;
layout(binding = 1) uniform sampler2D texture_1;
Alternatively, you can use a sampler array:
layout(binding = 0) uniform sampler2DArray textures;
In your application, set up the textures and samplers using
enum Sampler_Unit{ BASE_COLOR_S = GL_TEXTURE0 + 0, NORMAL_S = GL_TEXTURE0 + 1 }; // unit indices should match the shader bindings
glActiveTexture(Sampler_Unit::BASE_COLOR_S);
glBindTexture(GL_TEXTURE_2D, textureBuffer1);
glTexStorage2D( ....)
glActiveTexture(Sampler_Unit::NORMAL_S);
glBindTexture(GL_TEXTURE_2D, textureBuffer2);
glTexStorage2D( ....)
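The blending itself can then be done in the fragment shader, for example with a simple mix (a minimal sketch; texcoord, the output name, and the blend factor are placeholders you would adapt):
in vec2 texcoord;
layout(binding = 0) uniform sampler2D texture_0;
layout(binding = 1) uniform sampler2D texture_1;
out vec4 fragColor;
void main()
{
    vec4 c0 = texture(texture_0, texcoord);
    vec4 c1 = texture(texture_1, texcoord);
    fragColor = mix(c0, c1, c1.a); // blend the two samples; choose whatever factor you need
}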
Thanks to #tkausl for the tip.
I had depth testing enabled during the initialization phase.
// Enable depth test
glEnable(GL_DEPTH_TEST);
// Accept fragment if it closer to the camera than the former one
glDepthFunc(GL_LESS);
Depth testing needs to be disabled in my case for the blend operation to work.
//make sure to disable depth test
glDisable(GL_DEPTH_TEST);
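If depth testing is still needed elsewhere, an alternative (an untested sketch, not from the original answer) would presumably be to keep it enabled but accept fragments at equal depth, so the second draw of the same quad is not rejected:
// Keep the depth test, but let the second draw of the same geometry pass
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);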

OpenGL Compute Shader - glDispatchCompute() does not run

I'm currently working with a compute shader in OpenGL and my goal is to render from one texture onto another texture with some modifications. However, it does not seem like my compute shader has any effect on the textures at all.
After creating a compute shader I do the following
//Use the compute shader program
(*shaderPtr).useProgram();
//Get the uniform location for a uniform called "sourceTex"
//Then connect it to texture-unit 0
GLuint location = glGetUniformLocation((*shaderPtr).program, "sourceTex");
glUniform1i(location, 0);
//Bind buffers and call compute shader
this->bindAndCompute(bufferA, bufferB);
The bindAndCompute() function looks like this and its purpose is to ready the two buffers to be accessed by the compute shader and then run the compute shader.
bindAndCompute(GLuint sourceBuffer, GLuint targetBuffer){
glBindImageTexture(
0, //Always bind to slot 0
sourceBuffer,
0,
GL_FALSE,
0,
GL_READ_ONLY, //Only read from this texture
GL_RGB16F
);
glBindImageTexture(
1, //Always bind to slot 1
targetBuffer,
0,
GL_FALSE,
0,
GL_WRITE_ONLY, //Only write to this texture
GL_RGB16F
);
//this->height is currently 960
glDispatchCompute(1, this->height, 1); //Call upon shader
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
}
And finally, here is the compute shader. I currently only try to set it so that it makes the second texture completely white.
#version 440
#extension GL_ARB_compute_shader : enable
#extension GL_ARB_shader_image_load_store : enable
layout (rgba16, binding=0) uniform image2D sourceTex; //Textures bound to 0 and 1 resp. that are used to
layout (rgba16, binding=1) uniform image2D targetTex; //acquire texture and save changes made to texture
layout (local_size_x=960 , local_size_y=1 , local_size_z=1) in; //Local work-group size
void main(){
vec4 result; //Vec4 to store the value to be written
ivec2 pxlPos = ivec2(gl_GlobalInvocationID.xy); //Get pxl-pos
/*
result = imageLoad(sourceTex, pxlPos);
...
*/
imageStore(targetTex, pxlPos, vec4(1.0f)); //Write white to texture
}
Now, bufferB starts out empty. When I run this I expect bufferB to become completely white. However, after this code runs, bufferB remains empty. My conclusion is that either
A: The compute shader does not write to the texture
B: glDispatchCompute() is not run at all
However, I get no errors and the shader compiles as it should. I have checked that I bind the texture correctly when rendering: I bound bufferA, whose contents I already know, and then ran bindAndCompute(bufferA, bufferA) to turn bufferA white. However, bufferA is unaltered. So I've not been able to figure out why my compute shader has no effect. If anyone has any ideas on what I can try, it would be appreciated.
End note: This has been my first question asked on this site. I've tried to present only relevant information but I still feel like maybe it became too much text anyway. If there is feedback on how to improve the structure of the question that is welcome as well.
---------------------------------------------------------------------
EDIT:
The textures I send in as sourceBuffer and targetBuffer are defined as follows:
glGenTextures(1, *buffer);
glBindTexture(GL_TEXTURE_2D, *buffer);
glTexImage2D(
GL_TEXTURE_2D,
0,
GL_RGBA16F, //Internal format
this->width,
this->height,
0,
GL_RGBA, //Format read
GL_FLOAT, //Type of values in read format
NULL //source
);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
The image format of the images you bind doesn't match the image format in the shader. You bind an RGB16F (48 bits per texel) texture, but state in the shader that it is of rgba16 format (64 bits per texel).
Formats have to match according to the rules given here. Assuming that you allocated the texture in OpenGL, this means that the total size of each texel has to match. Also note that 3-channel textures are (apart from some rather strange exceptions) not supported by image load/store.
As a side note: the shader will execute and write if the texture format sizes match. But what you write might be garbage, because your textures are in 16-bit floating-point format (GL_RGBA16F) while you tell the shader that they are in 16-bit unsigned normalized format (rgba16). Although this doesn't directly matter for the compute shader itself, it does matter if you read back the texture, access it through a sampler, or write data > 1.0f or < 0.0f into it. If you want 16-bit floats, use rgba16f in the compute shader.
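Put together, a matching pair would look roughly like this (a sketch based on the GL_RGBA16F allocation shown in the question's edit):
// Application side: bind with a format whose texel size matches the allocated GL_RGBA16F texture
glBindImageTexture(0, sourceBuffer, 0, GL_FALSE, 0, GL_READ_ONLY,  GL_RGBA16F);
glBindImageTexture(1, targetBuffer, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA16F);
// Compute shader side: declare the images with the matching layout qualifier
layout (rgba16f, binding = 0) uniform image2D sourceTex;
layout (rgba16f, binding = 1) uniform image2D targetTex;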

Copy OpenGL texture from one target to another

I have an IOSurface-backed texture which is limited to GL_TEXTURE_RECTANGLE_ARB and doesn't support mipmapping. I'm trying to copy this texture to another texture bound to GL_TEXTURE_2D and then perform mipmapping on that one instead. But I'm having problems copying my texture. I can't even get it to work by just copying it to another GL_TEXTURE_RECTANGLE_ARB. Here is my code:
var arbTexture = GLuint()
glGenTextures(1, &arbTexture)
/* Do some stuff to fill arbTexture with image data */
glEnable(GLenum(GL_TEXTURE_RECTANGLE_ARB))
glBindTexture(GLenum(GL_TEXTURE_RECTANGLE_ARB), arbTexture)
// At this point, if I return here, my arbTexture draws just fine
// Trying to copy to another texture (fbo and texture generated previously):
glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo);
glFramebufferTexture2D(GLenum(GL_READ_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_TEXTURE_RECTANGLE_ARB), arbTexture, 0)
glFramebufferTexture2D(GLenum(GL_DRAW_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT1), GLenum(GL_TEXTURE_RECTANGLE_ARB), texture, 0)
glDrawBuffer(GLenum(GL_COLOR_ATTACHMENT1))
glBlitFramebuffer(0, 0, GLsizei(width), GLsizei(height), 0, 0, GLsizei(width), GLsizei(height), GLbitfield(GL_COLOR_BUFFER_BIT), GLenum(GL_NEAREST))
glBindTexture(GLenum(GL_TEXTURE_RECTANGLE_ARB), texture)
// At this point, the texture is all black
The arguments of your second glFramebufferTexture2D() do not match your description:
glFramebufferTexture2D(
GLenum(GL_DRAW_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT1),
GLenum(GL_TEXTURE_RECTANGLE_ARB), texture, 0)
Since you're saying that the second texture is a GL_TEXTURE_2D, this needs to be matched by the textarget argument of the call. It should be:
glFramebufferTexture2D(
GLenum(GL_DRAW_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT1),
GLenum(GL_TEXTURE_2D), texture, 0)
BTW, GL_TEXTURE_RECTANGLE is standard in OpenGL 3.1 and later, so there should be no need to use the ARB form.
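Once the blit into the GL_TEXTURE_2D texture works, generating the mipmaps you were after should presumably come down to something like this (a sketch in the same style as the question's code):
glBindTexture(GLenum(GL_TEXTURE_2D), texture)
glGenerateMipmap(GLenum(GL_TEXTURE_2D))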

Flipping texture when copying to another texture

I need to flip my texture vertically when copying it into another texture. I know about 3 simple ways to do it:
1. Blit from one FBO into another using a full screen quad (and flip in the fragment shader)
2. Blit using glBlitFramebuffer.
3. Using glCopyImageSubData
I need to perform this copy between 2 textures which aren't attached to any FBO, so I am trying to avoid the first 2 solutions. I am trying the third one.
Doing it like this:
glCopyImageSubData(srcTex ,GL_TEXTURE_2D,0,0,0,0,targetTex,GL_TEXTURE_2D,0,0,width ,0,height,0,1);
It doesn't work. The copy returns garbage. Is this method supposed to be able to flip when reading? Is there an alternative FBO-unrelated method (GPU side only)?
Btw:
glCopyTexSubImage2D(GL_TEXTURE_2D,0,0,0,0,height ,width,0 );
doesn't work either.
Rendering a textured quad to a pbo by drawing the inverted quad would work.
Or you could go with a simple fragment shader doing an imageLoad + imageStore, inverting the y coordinate, with 2 bound image textures.
glBindImageTexture(0, copyFrom, 0, GL_FALSE, 0, GL_READ_ONLY, GL_RGBA32UI);
glBindImageTexture(1, copyTo, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32UI);
the shader would look something like:
layout(binding = 0, rgba32ui) uniform uimage2D input_buffer;
layout(binding = 1, rgba32ui) uniform uimage2D output_buffer;
uniform float u_texHeight;
void main(void)
{
    uvec4 color = imageLoad( input_buffer, ivec2(gl_FragCoord.xy) );
    imageStore( output_buffer, ivec2(gl_FragCoord.x, u_texHeight - gl_FragCoord.y - 1.0), color );
}
You'll have to tweak it a little, but I know it works; I've used it before.
Hope this helps

Using the RGB10_A2UI format in glRenderbufferStorage()

I am using an FBO and rendering to a texture. Here's my code:
GLuint FramebufferName = 0;
glGenFramebuffers(1, &FramebufferName);
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
GLuint renderedTexture;
glGenTextures(1, &renderedTexture);
glBindTexture(GL_TEXTURE_2D, renderedTexture);
glTexImage2D(GL_TEXTURE_2D,0,GL_RGB10_A2UI,256,256,0,GL_RGBA_INTEGER,GL_UNSIGNED_INT_10_10_10_2,0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
GLuint color_buffer;
glGenRenderbuffers(1, &color_buffer);
glBindRenderbuffer(GL_RENDERBUFFER, color_buffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB10_A2UI, 256, 256);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, color_buffer);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,renderedTexture, 0);
GLenum DrawBuffers[2] = {GL_COLOR_ATTACHMENT0};
glDrawBuffers(1, DrawBuffers);
GLuint textureId;
LoadImage(textureId); // Loads texture data into textureId
Render(); // Renders textureId onto FramebufferName
glBindFramebuffer(GL_FRAMEBUFFER, 0); // Bind default FBO
glBindTexture(GL_TEXTURE_2D, renderedTexture); //Render using renderedTexture
glDrawArrays (GL_TRIANGLE_FAN,0, 4);
The output is incorrect. The image is not rendered correctly. If I use the format GL_RGBA instead of GL_RGB10_A2UI everything goes fine. The FBO is GL_FRAMEBUFFER_COMPLETE, no issues there. Am I doing something wrong here?
My fragment shader for GL_RGB10_A2UI is:
in vec2 texcoord;
uniform usampler2D basetexture;
out vec4 Color;
void main(void)
{
uvec4 IntColor = texture(basetexture, texcoord);
Color = vec4(IntColor.rgb, 1023.0) / 1023.0;
}
For GL_RGBA I am not doing normalization in shader.
If I use format GL_RGBA instead of GL_RGB10_A2UI everything goes fine.
If that's true, then it means your shader is not writing integers.
We've discussed this before, but you don't really seem to understand something. An integer texture is a very different thing from a floating-point texture or a normalized integer texture.
There is no circumstance where a GL_RGBA8 texture and a GL_RGB10_A2UI texture would both work with the same code. The same shader cannot read from a texture that could be either a normalized or an integral texture. The same shader cannot write to a buffer that could be either normalized or integral. The same pixel transfer code cannot write to or read from an image that could be either normalized or integral. They are in every way different entities, which require different parameters to access them.
Furthermore, even if a shader could write to either one, what would it be writing? Integer textures take integers; if you attempt to stick a floating-point value in the range [0, 1] into an integer, it will either come out as 0 or 1. And if you try to put an integer into the [0, 1] range, you will get 0 if your integer was zero, and 1 otherwise. So whatever your fragment shader is doing is very confused.
Odds are very good that you really should be using GL_RGB10_A2, despite your belief that you really did mean to use GL_RGB10_A2UI. If you really meant to be writing integers, your shader would be writing integers and not floats, and therefore your shader would not have "worked" with GL_RGBA8.
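If the normalized GL_RGB10_A2 path is what you actually want, a minimal sketch of the changes relative to the question's code would be (nothing else needs to be integer-aware):
// Allocate a normalized 10/10/10/2 texture instead of the integer one
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB10_A2, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_INT_2_10_10_10_REV, 0);
// And sample it with a plain float sampler; no manual normalization is needed
in vec2 texcoord;
uniform sampler2D basetexture;
out vec4 Color;
void main(void) { Color = texture(basetexture, texcoord); }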
However, if you really, truly want to use an unsigned integral texture, and you really, truly understand what that means and how it is different from GL_RGB10_A2, then here are the things you have to do:
Any fragment shader that intends to write to an integer texture must write to an integer output variable. And the signed/unsigned state of that output must match the destination image's signed/unsigned format. So for your GL_RGB10_A2UI, an unsigned integer format, you must be writing to a uvec4. You should be writing integers to this value, not floating-point values (see the sketch after this list).
Any shader that intends to read from an integer texture must use an integer sampler uniform. And that sampler uniform must match the signed/unsigned format of the image. So for your GL_RGB10_A2UI, you must use a usampler2D.
Pixel transfer operations must explicitly use the _INTEGER pixel transfer formats.
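Putting the three points together, a rough sketch of the unsigned-integer path (variable names and values are only illustrative) might look like:
// 1. Writing: the fragment shader that renders INTO the GL_RGB10_A2UI attachment
//    declares an unsigned integer output and writes integer values.
out uvec4 OutColor;
void main(void) { OutColor = uvec4(1023u, 512u, 0u, 3u); }
// 2. Reading: sampling the integer texture requires an unsigned integer sampler.
uniform usampler2D basetexture;
// ... inside main():
uvec4 IntColor = texture(basetexture, texcoord);
// 3. Pixel transfers use the _INTEGER client formats, e.g. when uploading:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB10_A2UI, 256, 256, 0,
             GL_RGBA_INTEGER, GL_UNSIGNED_INT_2_10_10_10_REV, pixels); // pixels: placeholder for your data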