Does a frame buffer attached to a texture have to be bound to a texture unit? - opengl

I'm trying to render to a frame buffer. I did this:
renderer.gl_context.make_current();
renderer.gl.GenFramebuffers(1, &mut fbo);
renderer.gl.BindFramebuffer(gl::FRAMEBUFFER, fbo);
renderer.gl.GenTextures(1, &mut tex);
renderer.gl.BindTexture(gl::TEXTURE_2D, tex);
renderer.gl.TexImage2D(
    gl::TEXTURE_2D,
    0,
    gl::RGB as i32,
    width as i32,
    height as i32,
    0,
    gl::RGB,
    gl::UNSIGNED_BYTE,
    std::ptr::null() as *const libc::c_void,
);
renderer.gl.TexParameteri(
    gl::TEXTURE_2D,
    gl::TEXTURE_MIN_FILTER,
    gl::LINEAR.try_into().unwrap(),
);
renderer.gl.TexParameteri(
    gl::TEXTURE_2D,
    gl::TEXTURE_MAG_FILTER,
    gl::LINEAR.try_into().unwrap(),
);
renderer.gl.FramebufferTexture2D(
    gl::FRAMEBUFFER,
    gl::COLOR_ATTACHMENT0,
    gl::TEXTURE_2D,
    tex,
    0,
);
but the drawing code makes use of other textures as well, and they are bound to GL_TEXTURE0 through GL_TEXTURE2. I'm always calling renderer.gl.BindFramebuffer(gl::FRAMEBUFFER, fbo) before drawing, but the draw calls only use GL_TEXTURE0 through GL_TEXTURE2, so where does the tex texture get bound? Should I bind it to a texture unit before drawing? Should that texture unit be different from GL_TEXTURE0 through GL_TEXTURE2?

You don't have to bind the framebuffer texture to a texture unit while rendering to the FBO. The texture is associated with the framebuffer object, and when you bind the FBO before rendering, everything is set up correctly.
In your code, the texture only needs to be bound for TexParameteri and TexImage2D; even FramebufferTexture2D does not require the texture to be bound, since it takes the texture handle directly.
As a rule of thumb: if a function takes the handle of an object, it doesn't need that object to be bound.
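A minimal sketch in plain C-style GL (sampler_loc is an illustrative name, not from your code):
// Pass 1: render INTO the FBO. tex is only the color attachment here,
// so it is never bound to a texture unit; units 0..2 stay free for the
// input textures your draw calls sample from.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// ... bind input textures to GL_TEXTURE0..GL_TEXTURE2 and draw ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Pass 2: only when you sample FROM tex does it need a unit. Any free
// unit works; it only has to differ from the units used by the other
// textures in this same pass.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glUniform1i(sampler_loc, 0);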

Related

OpenGL - blend two textures on the same object

I want to apply two textures on the same object (actually just a 2D rectangle) in order to blend them. I thought I would achieve that by simply calling glDrawElements with the first texture, then binding the other texture and calling glDrawElements a second time. Like this:
//create vertex buffer, frame buffer, depth buffer, texture sampler, build and bind model
//...
glEnable(GL_BLEND);
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ZERO);
glBlendEquation(GL_FUNC_ADD);
// Clear the screen
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Bind our texture in Texture Unit 0
GLuint textureID;
//create or load texture
//...
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID);
// Set our sampler to use Texture Unit 0
glUniform1i(textureSampler, 0);
// Draw the triangles !
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, (void*)0);
//second draw call
GLuint textureID2;
//create or load texture
//...
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID2);
// Set our sampler to use Texture Unit 0
glUniform1i(textureSampler, 0);
// Draw the triangles !
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, (void*)0);
Unfortunately, the 2nd texture is not drawn at all and I only see the first texture. If I call glClear between the two draw calls, it correctly draws the 2nd texture.
Any pointers? How can I force OpenGL to draw on the second call?
As an alternative to the approach you have followed so far, I would suggest using two texture samplers within your GLSL shader and performing the blending there. This way you are done with just one draw call, which reduces CPU/GPU interaction. To do so, just define two texture samplers in your shader like
layout(binding = 0) uniform sampler2D texture_0;
layout(binding = 1) uniform sampler2D texture_1;
Alternatively, you can use an array texture with a single sampler2DArray:
layout(binding = 0) uniform sampler2DArray textures;
In your application, setup the textures and samplers using
enum Sampler_Unit{BASE_COLOR_S = GL_TEXTURE0 + 0, NORMAL_S = GL_TEXTURE0 + 1}; // units match the binding points 0 and 1 above
glActiveTexture(Sampler_Unit::BASE_COLOR_S);
glBindTexture(GL_TEXTURE_2D, textureBuffer1);
glTexStorage2D( ....)
glActiveTexture(Sampler_Unit::NORMAL_S);
glBindTexture(GL_TEXTURE_2D, textureBuffer2);
glTexStorage2D( ....)
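For completeness, a hedged sketch of a matching fragment shader, written here as a C string literal (the in/out names and the mix weight are illustrative, not from the question); mix(c0, c1, c1.a) reproduces the SRC_ALPHA / ONE_MINUS_SRC_ALPHA blend from the question inside the shader:
/* Fragment shader source for single-pass blending of the two samplers
   declared above; compile and attach it like any other shader. */
const char *blend_fs =
    "#version 420 core\n"
    "layout(binding = 0) uniform sampler2D texture_0;\n"
    "layout(binding = 1) uniform sampler2D texture_1;\n"
    "in vec2 uv;\n"
    "out vec4 frag_color;\n"
    "void main() {\n"
    "    vec4 c0 = texture(texture_0, uv);\n"
    "    vec4 c1 = texture(texture_1, uv);\n"
    "    frag_color = mix(c0, c1, c1.a); /* blend by the second texture's alpha */\n"
    "}\n";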
Thanks to @tkausl for the tip.
I had depth testing enabled during the initialization phase.
// Enable depth test
glEnable(GL_DEPTH_TEST);
// Accept fragment if it closer to the camera than the former one
glDepthFunc(GL_LESS);
In my case, this option needs to be disabled for the blend operation to work.
//make sure to disable depth test
glDisable(GL_DEPTH_TEST);

How to draw to offscreen color buffer in OpenGL and then draw the result to a sprite surface?

Here is a description of the problem:
I want to render some VBO shapes (rectangles, circles, etc) to an off screen framebuffer object. This could be any arbitrary shape.
Then I want to draw the result on a simple sprite surface as a texture, but not on the entire screen itself.
I can't seem to get this to work correctly.
When I run the code, I see the shapes being drawn all over the screen, but not on the sprite in the middle, which remains blank. Even though I seem to have set up the FBO with one color texture, everything still renders to the screen even when I bind the FBO into the current context.
What I want to achieve is these shapes being drawn to an off-screen texture (using an FBO, obviously) and then rendered on the surface of a sprite (or a cube, or whatever) drawn somewhere on the screen. Yet whatever I draw appears on the screen itself.
The tex(tex_object_ID) function is just a shorthand wrapper for OpenGL's standard texture bind; it selects a texture into the current rendering context.
No matter what I try I get this result: The sprite is blank, but all these shapes should appear there, not on the main screen. (Didn't I bind rendering to FBO? Why is it still rendering on screen?)
I think it is just a logistics of setting up FBO in the right order that I am missing. Can anyone tell what's wrong with my code?
Not sure why the background is red, as I clear it after I select the FBO. It is the sprite that should get the red background & shapes drawn on it.
/*-- Initialization -- */
GLuint texture = 0;
GLuint Framebuffer = 0;
GLuint GenerateFrameBuffer(int dimension)
{
    glEnable(GL_TEXTURE_2D);
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, dimension, dimension, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glGenFramebuffers(1, &Framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, Framebuffer);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
    glDrawBuffer(GL_COLOR);
    glReadBuffer(GL_COLOR);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        console_log("GL_FRAMEBUFFER != GL_FRAMEBUFFER_COMPLETE\n");
    return texture;
}
// Store framebuffer texture (should I store texture here or Framebuffer object?)
GLuint FramebufferHandle = GenerateFrameBuffer( 256 );
Standard OpenGL initialization code follows, memory is allocated, VBO's are created and bound, etc. This works correctly and there aren't errors in initialization. I can render VBOs, polygons, textured polygons, lines, etc, on standard double buffer with success.
Next, in my render loop I do the following:
// Possible problem?
// Should FramebufferHandle be passed here?
// I tried "texture" and "Framebuffer " as well, to no effect:
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferHandle);
// Correct projection, just calculates the view based on current zoom
Projection = setOrthoFrustum(-config.zoomed_width/2, config.zoomed_width/2, -config.zoomed_height/2, config.zoomed_height/2, 0, 100);
View.identity();
Model.identity();
// Mini shader, 100% *guaranteed* to work, there are no errors in it (works normally on the screen)
shaderProgramMini.use();
//Clear frame buffer with blue color
glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);// | GL_DEPTH_BUFFER_BIT);
// Set yellow to draw different shapes on the framebuffer
color = {1.0f,1.0f,0.0f};
// Draw several shapes (already correctly stored in VBO objects)
Memory.select(VBO_RECTANGLES); // updates uniforms
glDrawArrays(GL_QUADS, 0, Memory.renderable[VBO_RECTANGLES].indexIndex);
Memory.select(VBO_CIRCLES); // updates uniforms
glDrawArrays(GL_LINES, 0, Memory.renderable[VBO_CIRCLES].indexIndex);
Memory.select(VBO_2D_LIGHT); // updates uniforms
glDrawArrays(GL_LINES, 0, Memory.renderable[VBO_2D_LIGHT].indexIndex);
// Done writing to framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Correct projection, just calculates the view based on current zoom
Projection = setOrthoFrustum(-config.zoomed_width/2, config.zoomed_width/2, -config.zoomed_height/2, config.zoomed_height/2, 0, 100);
View.identity();
Model.identity();
Model.scale(10.0);
// Select texture shader to draw what was drawn on offscreen Framebuffer / texture
// Standard texture shader, 100% *guaranteed* to work, there are no errors in it (works normally on the screen)
shaderProgramTexture.use();
// This is a wrapper for bind texture to ID, just shorthand function name
tex(texture); // FramebufferHandle; // ? // maybe the mistake in binding to the wrong target object?
color = {0.5f,0.2f,0.0f};
Memory.select(VBO_SPRITE); // Select a square VBO for rendering sprites (works if any other texture is assigned to it)
// finally draw the sprite with Framebuffer's texture:
glDrawArrays(GL_TRIANGLES, 0, Memory.renderable[VBO_SPRITE].indexIndex);
I may have gotten the order of something completely wrong. Or FramebufferHandle/Framebuffer/texture object is not passed to something correctly. But I spent all day, and hope someone more experienced than me can see the mistake.
GL_COLOR is not an accepted value for glDrawBuffer
See OpenGL 4.6 API Compatibility Profile Specification, 17.4.1 Selecting Buffers for Writing, Table 17.4 and Table 17.5, page 628
Table 17.4 (default framebuffer): NONE, FRONT_LEFT, FRONT_RIGHT, BACK_LEFT, BACK_RIGHT, FRONT, BACK, LEFT, RIGHT, FRONT_AND_BACK, AUXi. These are the arguments to DrawBuffer when the context is bound to a default framebuffer, and the buffers they indicate; the same arguments are valid for ReadBuffer, but only a single buffer is selected.
Table 17.5 (framebuffer object): COLOR_ATTACHMENTi. These are the arguments to DrawBuffer(s) and ReadBuffer when the context is bound to a framebuffer object, and the buffers they indicate; i in COLOR_ATTACHMENTi may range from zero to the value of MAX_COLOR_ATTACHMENTS minus one.
This means that glDrawBuffer(GL_COLOR); and glReadBuffer(GL_COLOR); will each generate a GL_INVALID_ENUM error.
Use GL_COLOR_ATTACHMENT0 instead.
Furthermore, glCheckFramebufferStatus(GL_FRAMEBUFFER) checks the completeness of the framebuffer object which is bound to the target.
This means that
glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE
has to be done before
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Or you have to use the DSA variant, which checks completeness without requiring the framebuffer to be bound:
glCheckNamedFramebufferStatus(Framebuffer, GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE
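Put together, a corrected version of the initialization might look like this (a sketch reusing the names from the question, not a tested drop-in replacement):
glGenFramebuffers(1, &Framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, Framebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT0);   /* not GL_COLOR */
glReadBuffer(GL_COLOR_ATTACHMENT0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    console_log("GL_FRAMEBUFFER != GL_FRAMEBUFFER_COMPLETE\n");
glBindFramebuffer(GL_FRAMEBUFFER, 0); /* unbind only after the check */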

Copy OpenGL texture from one target to another

I have a IOSurface backed texture which is limited to GL_TEXTURE_RECTANGLE_ARB and doesn't support mipmapping. I'm trying to copy this texture to another texture bound to GL_TEXTURE_2D and then perform mipmapping on that one instead. But I'm having problems copying my texture. I can't even get it to work by just copying it to another GL_TEXTURE_RECTANGLE_ARB. Here is my code:
var arbTexture = GLuint()
glGenTextures(1, &arbTexture)
/* Do some stuff to fill arbTexture with image data */
glEnable(GLenum(GL_TEXTURE_RECTANGLE_ARB))
glBindTexture(GLenum(GL_TEXTURE_RECTANGLE_ARB), arbTexture)
// At this point, if I return here, my arbTexture draws just fine
// Trying to copy to another texture (fbo and texture generated previously):
glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo);
glFramebufferTexture2D(GLenum(GL_READ_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_TEXTURE_RECTANGLE_ARB), arbTexture, 0)
glFramebufferTexture2D(GLenum(GL_DRAW_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT1), GLenum(GL_TEXTURE_RECTANGLE_ARB), texture, 0)
glDrawBuffer(GLenum(GL_COLOR_ATTACHMENT1))
glBlitFramebuffer(0, 0, GLsizei(width), GLsizei(height), 0, 0, GLsizei(width), GLsizei(height), GLbitfield(GL_COLOR_BUFFER_BIT), GLenum(GL_NEAREST))
glBindTexture(GLenum(GL_TEXTURE_RECTANGLE_ARB), texture)
// At this point, the texture is all black
The arguments of your second glFramebufferTexture2D() do not match your description:
glFramebufferTexture2D(
GLenum(GL_DRAW_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT1),
GLenum(GL_TEXTURE_RECTANGLE_ARB), texture, 0)
Since you're saying that the second texture is a GL_TEXTURE_2D, this needs to be matched by the textarget argument of the call. It should be:
glFramebufferTexture2D(
GLenum(GL_DRAW_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT1),
GLenum(GL_TEXTURE_2D), texture, 0)
BTW, GL_TEXTURE_RECTANGLE is standard in OpenGL 3.1 and later, so there should be no need to use the ARB form.
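A rough sketch of the whole copy-then-mipmap path in plain C-style GL (untested against the IOSurface setup; fbo, arbTexture, texture, width and height are the names from the question):
glBindFramebuffer(GL_FRAMEBUFFER, fbo);              /* binds both READ and DRAW targets */
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_RECTANGLE, arbTexture, 0);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                       GL_TEXTURE_2D, texture, 0);   /* textarget matches the 2D texture */
glReadBuffer(GL_COLOR_ATTACHMENT0);
glDrawBuffer(GL_COLOR_ATTACHMENT1);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, texture);
glGenerateMipmap(GL_TEXTURE_2D);                     /* mipmapping now works on the 2D copy */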

OPENGL Texture2D manipulation directly/indirectly

Following this tutorial, I am performing shadow mapping on a 3D scene. Now I want to manipulate the raw texel data of shadowMapTexture (see the excerpt below) before applying it, using ARB extensions.
//Textures
GLuint shadowMapTexture;
...
...
CopyTexSubImage2D is used to copy the contents of the frame buffer into a texture. First we bind the shadow map texture, then copy the viewport into the texture. Since we have bound a DEPTH_COMPONENT texture, the data read will automatically come from the depth buffer.
//Read the depth buffer into the shadow map texture
glBindTexture(GL_TEXTURE_2D, shadowMapTexture);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, shadowMapSize, shadowMapSize);
N.B. I am using OpenGL 2.1 only.
You can do it in two ways:
float* texels = ...;
glBindTexture(GL_TEXTURE_2D, shadowMapTexture);
glTexSubImage2D(GL_TEXTURE_2D, 0, x,y,w,h, GL_DEPTH_COMPONENT, GL_FLOAT, texels);
or
Attach your shadowMapTexture to a (draw) framebuffer and call:
float* pixels = ...;
glRasterPos2i(x, y);
glDrawPixels(w, h, GL_DEPTH_COMPONENT, GL_FLOAT, pixels);
Don't forget to disable the depth test first when using this method.
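If the goal is to read the existing depth texels back, modify them on the CPU, and re-upload, a GL 2.1-compatible sketch could look like this (the manipulation step is just a placeholder):
float *texels = (float *)malloc(shadowMapSize * shadowMapSize * sizeof(float));
glBindTexture(GL_TEXTURE_2D, shadowMapTexture);
/* Read the current depth texels back to client memory */
glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, texels);
/* ... manipulate texels here ... */
/* Upload the modified data back into the same texture */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, shadowMapSize, shadowMapSize,
                GL_DEPTH_COMPONENT, GL_FLOAT, texels);
free(texels);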

glReadPixels from FBO fails with multisampling

I have an FBO with a color attachment and a depth attachment which I render to and then read from using glReadPixels(), and I'm trying to add multisampling support to it.
Instead of glRenderbufferStorage() I'm calling glRenderbufferStorageMultisampleEXT() for both the color attachment and the depth attachment. The framebuffer object seems to have been created successfully and is reported as complete.
After rendering I'm trying to read from it with glReadPixels(). When the number of samples is 0, i.e. multisampling is disabled, it works perfectly and I get the image I want. When I set the number of samples to something else, say 4, the framebuffer is still constructed OK, but glReadPixels() fails with GL_INVALID_OPERATION.
Anyone have an idea what could be wrong here?
EDIT: The code of glReadPixels:
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, ptr);
where ptr points to an array of width*height uints.
I don't think you can read from a multisampled FBO with glReadPixels(). You need to blit from the multisampled FBO to a normal FBO, bind the normal FBO, and then read the pixels from the normal FBO.
Something like this:
// Bind the multisampled FBO for reading
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, my_multisample_fbo);
// Bind the normal FBO for drawing
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, my_fbo);
// Blit the multisampled FBO to the normal FBO
glBlitFramebufferEXT(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
//Bind the normal FBO for reading
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, my_fbo);
// Read the pixels!
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
You can't read the multisample buffer directly with glReadPixels; it would raise a GL_INVALID_OPERATION error. You need to blit to another surface so that the GPU can do a downsample. You could blit to the backbuffer, but then there is the problem of the pixel ownership test, so it is best to make another FBO. Let's assume you made another FBO and now you want to blit. This requires GL_EXT_framebuffer_blit. Typically, when your driver supports GL_EXT_framebuffer_multisample, it also supports GL_EXT_framebuffer_blit, for example on the nVidia GeForce 8 series.
//Bind the MS FBO
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, multisample_fboID);
//Bind the standard FBO
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, fboID);
//Let's say I want to copy the entire surface
//Let's say I only want to copy the color buffer only
//Let's say I don't need the GPU to do filtering since both surfaces have the same dimension
glBlitFramebufferEXT(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
//--------------------
//Bind the standard FBO for reading
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboID);
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
Source: GL EXT framebuffer multisample