Swapping between different framebuffers - OpenGL

I am attempting to create a scenario where I render two completely different textures and swap between them to represent different states in a game. Is it possible to render some textures to framebuffer A and completely different textures to framebuffer B, and then switch back and forth between them? For instance, while framebuffer A is being rendered to the screen, the contents of framebuffer B would stay in memory until they are selected.

Although the details depend on exactly what you need to render, swapping between framebuffer textures is common practice. It serves well for, e.g., Gaussian-blur post-process effects built with ping-pong framebuffers, as explained in this article: https://learnopengl.com/#!Advanced-Lighting/Bloom
One possible solution would be to create two offscreen framebuffers whose contents are later displayed on the main framebuffer:
//this renderbuffer provides DEPTH ONLY - change it if you also need a stencil buffer
function createFramebufferA() {
    var FBO_A = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, FBO_A);

    var FBO_A_texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, FBO_A_texture);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, screen.width, screen.height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);

    var renderbuffer = gl.createRenderbuffer();
    gl.bindRenderbuffer(gl.RENDERBUFFER, renderbuffer);
    gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, screen.width, screen.height);

    gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, renderbuffer);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, FBO_A_texture, 0);

    gl.bindTexture(gl.TEXTURE_2D, null);
    gl.bindRenderbuffer(gl.RENDERBUFFER, null);
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);

    //return the texture as well: the display pass needs to bind it later
    return { framebuffer: FBO_A, texture: FBO_A_texture };
}
Later, after you've finished rendering to those framebuffers, you can set up a quick shader program bound to the main framebuffer that draws a full-screen quad and outputs the content of one framebuffer's texture:
gl.bindFramebuffer(gl.FRAMEBUFFER, null); //we're now drawing to the main framebuffer
gl.useProgram(PostProcessProgram); //this program draws a full-screen quad and takes a texture as a uniform
/* ... after binding the VBO and attribute pointers ... */
gl.activeTexture(gl.TEXTURE0); //select the texture unit before binding
// this condition decides whether we display the content
// of framebuffer A or framebuffer B
if (certain_condition)
    gl.bindTexture(gl.TEXTURE_2D, FBO_A_texture);
else
    gl.bindTexture(gl.TEXTURE_2D, FBO_B_texture);
gl.uniform1i(PostProcessProgram.texture, 0);
//drawing with one of the two framebuffers' textures
gl.drawArrays(gl.TRIANGLES, 0, 6);
If you're worried about performance, keep in mind that using multiple framebuffers is not uncommon and is at times mandatory to achieve certain post-process effects or techniques such as deferred shading.
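To make the switching logic concrete, here is a minimal sketch of the selection state the answer describes. The helper name and texture handles are hypothetical and there is no real GL context; only the decision of which offscreen texture the full-screen pass samples is modeled:

```javascript
// Hypothetical helper: tracks which offscreen framebuffer's texture should
// currently be shown, and flips when the game state changes.
function createStateSwitcher(textureA, textureB) {
  let showA = true;
  return {
    toggle() { showA = !showA; },          // call on a state change
    current() { return showA ? textureA : textureB; },
  };
}

// Stand-ins for the two color textures attached to FBO A and FBO B.
const switcher = createStateSwitcher("texA", "texB");
// Each frame you would do: gl.bindTexture(gl.TEXTURE_2D, switcher.current());
```

The point is that both framebuffer textures stay allocated on the GPU; "swapping" is nothing more than choosing which one to bind before drawing the full-screen quad.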

Related

How do I fix the jaggedness and darkness around the edges of 2D textures in WebGL?

I've tried setting the antialias property on the WebGL context to true, but that didn't fix it.
This is what I'm getting in WebGL:
This is canvas rendering, via drawImage, which is what I'm trying to replicate:
I'm using the default WebGL settings, aside from these three changed flags:
gl.enable(gl.BLEND); // Enable blending
gl.depthFunc(gl.LEQUAL); //near things obscure far things
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
And here's how I load the sprites (with the sprite variable being an Image object):
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, sprite);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
Alright, fixed it. It was happening because my textures used premultiplied alpha values, which messed up the blending.
I fixed it by changing my blendFunc from gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA) to gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA)
I also had to tell WebGL to unpack premultiplied alpha values by doing
gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, true)
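The darkening falls out of the arithmetic: a premultiplied texel already stores rgb * a, so the straight-alpha blend equation multiplies by alpha a second time. A small sketch (plain JavaScript, no GL) of both blend equations on a single texel:

```javascript
// Straight alpha:       out = src.rgb * src.a + dst.rgb * (1 - src.a)
// Premultiplied alpha:  out = src.rgb         + dst.rgb * (1 - src.a)
function blendStraight(src, dst) {
  return src.rgb.map((c, i) => c * src.a + dst.rgb[i] * (1 - src.a));
}
function blendPremultiplied(src, dst) {
  return src.rgb.map((c, i) => c + dst.rgb[i] * (1 - src.a));
}

// The same half-transparent red texel in both encodings:
const straightSrc = { rgb: [1.0, 0.0, 0.0], a: 0.5 };
const premulSrc   = { rgb: [0.5, 0.0, 0.0], a: 0.5 }; // rgb pre-scaled by a
const dst = { rgb: [0.0, 0.0, 1.0] };

// Matching encoding and equation gives the same result either way,
// but feeding a premultiplied texel to the straight equation scales the
// color by alpha twice, which is exactly the dark edge fringe.
```

This is why the fix pairs `gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA)` with `UNPACK_PREMULTIPLY_ALPHA_WEBGL`: the encoding and the blend equation must agree.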

OpenGL ping pong works with one pass, not with two

This might be a more basic OpenGL mistake than the title suggests.
I am doing segmentation using fragment shaders in OpenGL, which require multiple rendering passes to do successive operations (eg. gaussian blur + edge detection + segmentation).
As far as I understood, there is this common technique called ping pong which takes two frame buffers (FBO) and simply renders to one FBO using the other as input.
The thing is, one pass (shader_0 outputting to FBO_1 using FBO_0 as input) works, but when I try to use shader_1 with FBO_0 as input and render into FBO_1, I get a completely transparent image.
I checked both shaders and they do work individually, yet together they produce this transparent output.
Here is the set of calls I do for each pass, with segmentationBuffers containing the two FBOs, respectively used as input and output for this pass:
glBindFramebuffer(
    GL_FRAMEBUFFER,
    segmentationBuffers[lastSegmentationFboRenderedTo]->FramebufferName
);
glViewport(0, 0, windowWidth, windowHeight);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);

currentStepShader->UseProgram();

glClearColor(0, 0, 0, 0.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Enable blending
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

lastSegmentationFboRenderedTo = (lastSegmentationFboRenderedTo + 1) % 2;

glActiveTexture(GL_TEXTURE0);
glBindTexture(
    GL_TEXTURE_2D,
    segmentationBuffers[lastSegmentationFboRenderedTo]->renderedTexture
);
glUniform1i(glGetUniformLocation(shader->shaderPtr, "inputTexture"), 0);
glUniform2fv(
    glGetUniformLocation(shader->shaderPtr, "texCoordOffsets"),
    25,
    texCoordOffsets
);

quad->Draw(GL_TRIANGLES, shader,
    orthographicProjection,
    glm::mat4(1.0f),
    getOverlayModelMatrix()
);
And as stated above, doing one pass yields correct intermediary results, but doing two in a row gives a transparent frame. I suspect this is a more basic OpenGL mistake than it seems, but any help is appreciated!
I solved the issue by removing the call to glEnable(GL_DEPTH_TEST);.
I suspect that by enabling depth testing, OpenGL was discarding fragments from subsequent computation steps since they had the same depth value.
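Independently of the depth-test fix, the index bookkeeping of a ping-pong chain is easy to get wrong. A sketch (hypothetical names, no GL calls) of the read/write schedule the code above implements, where each pass writes one slot and the next pass reads it:

```javascript
// Produce the (read, write) slot pair for each of N ping-pong passes.
// Mirrors: bind FBO[write] as target, then sample FBO[1 - write] as input.
function pingPongPasses(passCount) {
  let write = 0;
  const schedule = [];
  for (let pass = 0; pass < passCount; pass++) {
    const read = 1 - write;              // the slot the previous pass wrote
    schedule.push({ pass, read, write });
    write = 1 - write;                   // swap roles for the next pass
  }
  return schedule;
}

const schedule = pingPongPasses(4);
// Invariants: a pass never reads the slot it writes, and every pass
// reads exactly the slot written by the pass before it.
```

If any pass reads and writes the same slot, you get the undefined feedback-loop behavior mentioned elsewhere on this page, which can also manifest as blank output.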

OpenGL glDrawBuffer and glBindTexture

I'm still new to OpenGL 3 and I'm trying to set up multipass rendering.
In order to do that, I created FBO, generated several textures and attached them to it with
for (unsigned index_col = 0; index_col < nbr_textures; ++index_col)
    glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + index_col, texture_colors[index_col], 0);
It works well (I try to believe that I'm doing good here!).
My comprehension problem comes afterwards, when I try to render into the first texture offscreen, then into the second texture, and then to the screen.
To render to a particular texture, I'm using:
FBO.bind();
glDrawBuffer(GL_COLOR_ATTACHMENT0);
glBindTexture(GL_TEXTURE_2D, FBO.getColorTextures(0)); //getColorTextures(0) is texture_colors[0]
Then I draw using my shader, and afterwards I would like to do:
glDrawBuffer(GL_COLOR_ATTACHMENT1);
glBindTexture(GL_TEXTURE_2D, FBO.getColorTextures(1));
and finally
glBindFramebuffer(GL_FRAMEBUFFER, 0);
RenderToScreen(); // assuming this function render to screen with a quad
My question is :
What is the difference between glDrawBuffer and glBindTexture? Is it necessary to call both? Aren't the textures attached to the framebuffer? (I can't actually test this yet, because I'm still trying to make it work...)
Thanks!
glBindTexture connects a texture to a texture sampler unit for reading. glDrawBuffer selects the destination for drawing writes. If you want to select a texture as the rendering target, use glDrawBuffer on the color attachment the texture is attached to, and make sure that no texture sampler unit it is currently bound to is used as a shader input: the result of creating such a feedback loop is undefined.
glDrawBuffer selects the color buffer (in this case of the framebuffer object) that you will write to:
When colors are written to the frame buffer, they are written into the color buffers specified by glDrawBuffer
If you wanted to draw to multiple color buffers you would have written
GLuint attachments[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, attachments);
while glBindTexture binds a texture to a texture unit.
They serve different purposes - remember that OpenGL and its current rendering context behave as a state machine.
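The `GL_COLOR_ATTACHMENT0 + i` pattern used both in the question's attachment loop and in the `glDrawBuffers` example is just enum arithmetic: the attachment tokens are consecutive integers starting at 0x8CE0. A sketch (plain JavaScript, hypothetical helper name) of building that list:

```javascript
// GL_COLOR_ATTACHMENT0 is defined as 0x8CE0; attachment i is simply
// GL_COLOR_ATTACHMENT0 + i, up to the implementation's attachment limit.
const GL_COLOR_ATTACHMENT0 = 0x8CE0;

function colorAttachments(count) {
  return Array.from({ length: count }, (_, i) => GL_COLOR_ATTACHMENT0 + i);
}

// Equivalent of the buffer list passed to glDrawBuffers(2, attachments):
const attachments = colorAttachments(2);
```

This is why the same expression works for attaching textures in a loop and for selecting multiple draw buffers: both APIs take the same attachment tokens.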

Draw the contents of a renderbuffer object

I don't quite understand how renderbuffer objects work. For example, if I want to display what is in the renderbuffer, do I necessarily have to render to a texture?
GLuint fbo, color_rb, depth_rb;

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glGenRenderbuffersEXT(1, &color_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, color_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGBA8, 256, 256);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, color_rb);

glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 256, 256);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depth_rb);

if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
    return 1;

glBindFramebuffer(GL_FRAMEBUFFER, 0);
//main loop
//This does not work :-(
glBindFramebuffer(GL_FRAMEBUFFER,fbo);
glClearColor(0.0,0.0,0.0,1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawCube();
glBindFramebuffer(GL_FRAMEBUFFER,0);
any idea?
You are not going to see anything when you draw to an FBO instead of the default framebuffer, that is part of the point of FBOs.
Your options are:
1. Blit the renderbuffer into another framebuffer (in this case it would probably be GL_BACK for the default backbuffer).
2. Draw into a texture attachment and then draw texture-mapped primitives (e.g. triangles / a quad) if you want to see the results.
Since option 2 is pretty self-explanatory, I will explain option 1 in greater detail:
/* We are going to blit into the window (default framebuffer) */
glBindFramebuffer (GL_DRAW_FRAMEBUFFER, 0);
glDrawBuffer (GL_BACK); /* Use backbuffer as color dst. */
/* Read from your FBO */
glBindFramebuffer (GL_READ_FRAMEBUFFER, fbo);
glReadBuffer (GL_COLOR_ATTACHMENT0); /* Use Color Attachment 0 as color src. */
/* Copy the color and depth buffer from your FBO to the default framebuffer */
glBlitFramebuffer (0,0, width,height,
0,0, width,height,
GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT,
GL_NEAREST);
There are a couple of things worth mentioning here:
First, blitting from one framebuffer to another is often measurably slower than drawing two textured triangles that fill the entire viewport. Second, you cannot use linear filtering when you blit a depth or stencil image... but you can if you take the texture mapping approach (this only truly matters if the resolution of your source and destination buffers differ when blitting).
Overall, drawing a textured primitive is the more flexible solution. Blitting is most useful if you need to do Multisample Anti-Aliasing, because you would have to implement that in a shader otherwise and multisample texturing was added after Framebuffer Objects; some older hardware/drivers support FBOs but not multisample color (requires DX10 hardware) or depth (requires DX10.1 hardware) textures.
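The filtering restriction above can be captured as a small validity check. This sketch (plain JavaScript, using the standard GL constant values) encodes the rule that a blit whose mask includes depth or stencil must use GL_NEAREST:

```javascript
// Standard OpenGL bitmask and filter enum values.
const GL_DEPTH_BUFFER_BIT   = 0x00000100;
const GL_STENCIL_BUFFER_BIT = 0x00000400;
const GL_COLOR_BUFFER_BIT   = 0x00004000;
const GL_NEAREST = 0x2600;
const GL_LINEAR  = 0x2601;

// glBlitFramebuffer errors (GL_INVALID_OPERATION) if the mask contains
// depth or stencil bits and the filter is not GL_NEAREST.
function blitFilterIsValid(mask, filter) {
  const hasDepthOrStencil =
    (mask & (GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT)) !== 0;
  return filter === GL_NEAREST || !hasDepthOrStencil;
}
```

So the answer's example, which blits both color and depth bits with GL_NEAREST, is valid, while switching it to GL_LINEAR would not be.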

OpenGL glGeneratemipmap and Framebuffers

I'm wrapping my head around generating mipmaps on the fly, and I'm reading this post, which contains this code: http://www.g-truc.net/post-0256.html
//Create the mipmapped texture
glGenTextures(1, &ColorbufferName);
glBindTexture(GL_TEXTURE_2D, ColorbufferName);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 512, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glGenerateMipmap(GL_TEXTURE_2D); // /!\ Allocate the mipmaps /!\
...
//Create the framebuffer object and attach the mipmapped texture
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
glFramebufferTexture2D(
GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, ColorbufferName, 0);
...
//Commands to actually draw something
render();
...
//Generate the mipmaps of ColorbufferName
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, ColorbufferName);
glGenerateMipmap(GL_TEXTURE_2D);
My questions:
Why does glGenerateMipmap need to be called twice in the render-to-texture case?
Does it have to be called like this every frame?
If I, for example, import a diffuse 2D texture, I only need to call it once after loading it into OpenGL, like this:
GLCALL(glGenTextures(1, &mTexture));
GLCALL(glBindTexture(GL_TEXTURE_2D, mTexture));
GLint format = (colorFormat == ColorFormat::COLOR_FORMAT_RGB ? GL_RGB : colorFormat == ColorFormat::COLOR_FORMAT_RGBA ? GL_RGBA : GL_RED);
GLCALL(glTexImage2D(GL_TEXTURE_2D, 0, format, textureWidth, textureHeight, 0, format, GL_UNSIGNED_BYTE, &textureData[0]));
GLCALL(glGenerateMipmap(GL_TEXTURE_2D));
GLCALL(glBindTexture(GL_TEXTURE_2D, 0));
I suspect it is because the textures are redrawn every frame and the mipmap generation uses its content in the process but I want confirmation of this.
3 - Also, if I render to my gbuffer and then immediately glBlitFramebuffer it to the default FBO, do I need to bind the texture and call glGenerateMipmap like this?
GLCALL(glBindTexture(GL_TEXTURE_2D, mGBufferTextures[GBuffer::GBUFFER_TEXTURE_DIFFUSE]));
GLCALL(glGenerateMipmap(GL_TEXTURE_2D));
GLCALL(glReadBuffer(GL_COLOR_ATTACHMENT0 + GBuffer::GBUFFER_TEXTURE_DIFFUSE));
GLCALL(glBlitFramebuffer(0, 0, mWindowWidth, mWindowHeight, 0, 0, mWindowWidth, mWindowHeight, GL_COLOR_BUFFER_BIT, GL_LINEAR));
As explained in the post you link to, "[glGenerateMipmap] does actually two things which is maybe the only issue with it: It allocates the mipmaps memory and generate the mipmaps."
Notice that what precedes the first glGenerateMipmap call is a glTexImage2D call with a NULL data pointer. Those two calls combined will simply allocate the memory for all of the texture's levels. The data they contain at this point is garbage.
Once you have an image loaded into the texture's first level, you will have to call glGenerateMipmap a second time to actually fill the smaller levels with downsampled images.
Your guess is right, glGenerateMipmap is called every frame because the image rendered to the texture's first level changes every frame (since it is being rendered to). If you don't call the function, then the smaller mipmaps will never be modified (if you were to map such a texture, you would see your uninitialized smaller mipmap levels when far enough away).
No. Mipmaps are only needed if you intend to map the texture to triangles with a texture filtering mode that uses mipmaps. If you're only dealing with the first level of the texture, you don't need to generate the mipmaps. In fact, if you never map the texture, you can use a renderbuffer instead of a texture in your framebuffer.
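The allocation performed by the first glGenerateMipmap call is easy to reason about numerically: a complete mip chain for a level-0 image of size w x h has floor(log2(max(w, h))) + 1 levels, each half the previous size (rounded down, minimum 1). A quick sketch of that count:

```javascript
// Number of levels in a complete mip chain for a level-0 image of w x h.
function mipLevelCount(width, height) {
  return Math.floor(Math.log2(Math.max(width, height))) + 1;
}

// For the 512 x 512 texture in the example above, the chain is
// 512, 256, 128, 64, 32, 16, 8, 4, 2, 1 -- ten levels in total,
// all of which glTexImage2D + glGenerateMipmap allocate up front.
```

This is the memory the first glGenerateMipmap call reserves; the per-frame calls only refill those ten levels from the freshly rendered level 0.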