Draw OpenGL renderbuffer to screen

I created a renderbuffer that is then modified in OpenCL.
//OpenGL
glGenFramebuffers(1, &frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
glGenRenderbuffers(1, &colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 600, 600);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);
//OpenCL
renderEngine = new OpenCLProgram("render.cl");
renderEngine->addArgumentGLRBuffer(colorRenderbuffer);
How would I then proceed to draw my OpenCL creation, the buffer, to the screen? I could bind it to a texture and draw a window-sized quad, but I am not sure that is the most efficient way. Also, if there is a better way of drawing to the screen from OpenCL, that would help!

The call you're looking for is glBlitFramebuffer(). To use this, you bind your FBO as the read framebuffer, and the default framebuffer as the draw framebuffer:
glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, srcWidth, srcHeight, 0, 0, dstWidth, dstHeight,
GL_COLOR_BUFFER_BIT, GL_NEAREST);
Adjust the parameters for your specific use based on the glBlitFramebuffer man page.
This is preferable to writing your own shader and rendering a screen-sized quad. Not only is it simpler and requires fewer state changes, it can also be more efficient. Knowing that a blit operation needs to be performed gives the implementation a chance to use a more efficient path. For example, where present, it could use a dedicated blit engine that can run asynchronously to the general rendering functionality of the GPU.
Whether you should use a renderbuffer or texture is not as clear cut. Chances are that it won't make much of a difference. Still, I would recommend using a renderbuffer as long as that's all you need. Because it has more limited functionality, the driver has the option to create a memory allocation that is more optimized for the purpose. Rendering to a renderbuffer can potentially be more efficient than rendering to a texture on some hardware, particularly if your rendering is pixel-output limited.

Don't make it a renderbuffer.
OpenGL renderbuffers exist for the sole purpose of being render targets. The only OpenGL operations that read from them are per-sample operations during rendering to the framebuffer, framebuffer blits, and pixel transfer operations.
Use a texture instead. There is no reason you couldn't create a 600x600 GL_RGBA8 2D texture.

Related

Blitting several textures at once with glBlitFramebuffer

I have got a small OpenGL app and I am looking for the optimal way of blitting several texture buffers at once.
Let's say I have got two framebuffers (fbo1, fbo2) that each contain two texture buffers. And I have got a target fbo (fbo3) with four texture buffers. And I want to blit all the textures from fbo1 and fbo2 to fbo3.
Currently I am doing it separately for each texture, like:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo1);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo3);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
glBlitFramebuffer(0, 0, width, height, 0, 0, ds_width, ds_height, GL_COLOR_BUFFER_BIT, GL_LINEAR);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
How is it usually done? And is that even doable?
It isn't "usually" done because people generally don't go around copying a bunch of framebuffer images a lot. Indeed, if you are, that strongly suggests that you're probably doing something wrong.
The only way to do it is the way you've done here (though the needless rebinding of the framebuffers can go away): change the read/draw buffers each time and blit.

Multisampling with glBlitFramebuffer

This is my first attempt to do multisampling (for anti-aliasing) with OpenGL. Basically, I'm drawing a background to the screen (which should not get anti-aliased) and subsequently I'm drawing the vertices that should be anti-aliased.
What I've done so far:
//create the framebuffer:
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
//Generate color buffer:
glGenRenderbuffers(1, &cb);
glBindRenderbuffer(GL_RENDERBUFFER, cb);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, x_size, y_size);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, cb);
//Generate depth buffer:
glGenRenderbuffers(1, &db);
glBindRenderbuffer(GL_RENDERBUFFER, db);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT, x_size, y_size);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, db);
...
glBindFramebuffer(GL_FRAMEBUFFER, 0);
//draw background ... ...
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
//draw things that should get anti-aliased ... ...
//finally:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, x_size, y_size, 0, 0, x_size, y_size, GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT, GL_NEAREST);
The problem is: when I call glBlitFramebuffer(...) the whole background gets black and I only see the anti-aliased vertices.
Any suggestions?
Normally, blending is the most obvious option if you want to render a new image/texture on top of existing rendering while taking transparency in the image into account. Looking at the rendering into the multisampled framebuffer as an image with transparency, that's exactly the situation you have.
In this case, there are a couple of challenges that make the use of blending more difficult than usual. First of all, glBlitFramebuffer() does not apply blending. From the spec:
Blit operations bypass the fragment pipeline. The only fragment operations which affect a blit are the pixel ownership test and the scissor test.
Without multisampling in play, this is fairly easy to overcome. Instead of using glBlitFramebuffer(), you perform the blit by drawing a screen sized textured quad. Since all fragment operations are in play now, you could use blending.
However, the "drawing a textured quad" part gets much trickier since your content is multisampled. A few options come to mind.
Render background to FBO
You could render the background to the multisampled FBO instead of the primary framebuffer. Then you can use glBlitFramebuffer() exactly as you do now.
You may think: "But I don't want my background to be anti-aliased!" That's not really a problem. You simply disable multisampling while drawing the background:
glDisable(GL_MULTISAMPLE);
I think that should give you what you want. And if it does, it's by far the easiest option.
Multisample Textures
OpenGL 3.2 and later support multisample textures. For this, you would use a texture instead of a renderbuffer as the color buffer of your FBO. The texture is allocated with:
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8, xsize, ysize, GL_FALSE);
There are other aspects that I can't all cover here. If you want to explore this option, you can read up on all the details in the spec or other sources. For example, sampling of the texture in the shader code works differently, with a different sampler type, and sampling functions that only allow you to read one sample at a time.
Two-Stage Blitting
You could use a hybrid of glBlitFramebuffer() for resolving the multisample content, and the "manual" blit for blending the content into the default framebuffer:
Create a second FBO where the color attachment is a regular, not multisampled texture.
Use glBlitFramebuffer() to copy from multisampled renderbuffer in first FBO to texture in second FBO.
Set up and enable blending.
Draw a screen sized quad using the texture that was the attachment to the second FBO.
While this seems somewhat awkward, and requires an extra copy which is undesirable for performance, it is fairly straightforward.
Render the background last
For this, you do exactly what you're doing now, copying the multisampled FBO content to the default framebuffer with glBlitFramebuffer(). But you do this first, and render the background afterwards.
You may think that this wouldn't work because it puts the background in front of the other content, which makes it... not much of a background.
But here is where blending comes into play again. While blending content on top of other content is the most common way of using blending, you can also use it to render things behind existing content. To do this, you need a few things:
A framebuffer with alpha planes. How you request that depends on the window system/toolkit you use for your OpenGL setup. It's typically in the same area where you request your depth buffer, stencil buffer (if needed), etc. It is often specified as a number of alpha planes, which you typically set to 8.
The right blend function. For front to back blending, you typically use:
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);
This adds the new rendering where nothing was previously rendered (i.e. the alpha in the destination is 0), and will keep the previous rendering unchanged where there was already rendering (i.e. the destination alpha is 1).
The blending setup can get a little trickier if your rendering involves partial transparency.
This may look somewhat complicated, but it's really quite intuitive once you wrap your head around how the blend functions work. And I think it's overall an elegant and efficient solution for your overall problem.

OpenGL depth buffer to CUDA

I'm new to OpenGL programming; my aim is to retrieve the depth buffer into an FBO, to be able to transfer it to CUDA without using glReadPixels.
Here is what I've already done:
void make_Fbo()
{
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER,
GL_DEPTH_ATTACHMENT,
GL_RENDERBUFFER,
rb);
check_gl_error("make_fbo");
}
void make_render_buffer()
{
glGenRenderbuffers(1, &rb);
glBindRenderbuffer(GL_RENDERBUFFER, rb);
glRenderbufferStorage(GL_RENDERBUFFER,
GL_DEPTH_COMPONENT,
win.width,
win.height);
check_gl_error("make render_buffer");
}
This code creates my FBO with correct depth values.
A new problem appears now: according to the article "Fast Triangle Rasterization Using Irregular Z-Buffer on CUDA", it is not possible to access the depth buffer attached to the FBO from CUDA.
Here is the quote from the article:
Textures or render buffers can be attached onto the depth
attachment point of FBOs to accommodate the depth values. However, as far as
we have tested, they cannot be accessed by CUDA kernels. [...]
we managed to use the color attachment points on the FBO. Apparently
in this case we have to write a simple shader program to dump the depth values onto
the color channels of the frame buffer. According to the GLSL specification [KBR06],
the special variable gl_FragCoord
Are these statements still true?
What do you advise for dumping the depth buffer to the color channels? Or to a texture?
Well, yes and no. The problem is that you can't access resources in CUDA while they are bound to the FBO.
As I understand it, cudaGraphicsGLRegisterImage() enables CUDA access to any type of image data. So if you use a depth buffer that is a render target and is NOT bound to the FBO, you can use it.
Here's the cuda API information:
https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__OPENGL.html#group__CUDART__OPENGL_1g80d12187ae7590807c7676697d9fe03d
And in this article they explain that you should round-robin or double-buffer the depth-buffer, or copy the data before using it in CUDA (but then you more or less void the whole idea of interop).
http://codekea.com/xLj7d1ya5gD6/modifying-opengl-fbo-texture-attachment-in-cuda.html

Draw the contents of the renderbuffer object

I do not quite understand how renderbuffer objects work. For example, if I want to show what is in the renderbuffer, must I necessarily render to a texture?
GLuint fbo, color_rb, depth_rb;
glGenFramebuffers(1,&fbo);
glBindFramebuffer(GL_FRAMEBUFFER,fbo);
glGenRenderbuffersEXT(1, &color_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, color_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGBA8, 256, 256);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,GL_RENDERBUFFER_EXT, color_rb);
glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 256, 256);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,GL_RENDERBUFFER_EXT, depth_rb);
if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT) return 1;
glBindFramebuffer(GL_FRAMEBUFFER,0);
//main loop
//This does not work :-(
glBindFramebuffer(GL_FRAMEBUFFER,fbo);
glClearColor(0.0,0.0,0.0,1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawCube();
glBindFramebuffer(GL_FRAMEBUFFER,0);
any idea?
You are not going to see anything when you draw to an FBO instead of the default framebuffer, that is part of the point of FBOs.
Your options are:
1. Blit the renderbuffer into another framebuffer (in this case it would probably be GL_BACK for the default backbuffer)
2. Draw into a texture attachment and then draw texture-mapped primitives (e.g. triangles / a quad) if you want to see the results.
Since 2 is pretty self-explanatory, I will explain option 1 in greater detail:
/* We are going to blit into the window (default framebuffer) */
glBindFramebuffer (GL_DRAW_FRAMEBUFFER, 0);
glDrawBuffer (GL_BACK); /* Use backbuffer as color dst. */
/* Read from your FBO */
glBindFramebuffer (GL_READ_FRAMEBUFFER, fbo);
glReadBuffer (GL_COLOR_ATTACHMENT0); /* Use Color Attachment 0 as color src. */
/* Copy the color and depth buffer from your FBO to the default framebuffer */
glBlitFramebuffer (0,0, width,height,
0,0, width,height,
GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT,
GL_NEAREST);
There are a couple of things worth mentioning here:
First, blitting from one framebuffer to another is often measurably slower than drawing two textured triangles that fill the entire viewport. Second, you cannot use linear filtering when you blit a depth or stencil image... but you can if you take the texture mapping approach (this only truly matters if the resolution of your source and destination buffers differ when blitting).
Overall, drawing a textured primitive is the more flexible solution. Blitting is most useful if you need to do Multisample Anti-Aliasing, because you would have to implement that in a shader otherwise and multisample texturing was added after Framebuffer Objects; some older hardware/drivers support FBOs but not multisample color (requires DX10 hardware) or depth (requires DX10.1 hardware) textures.

What is faster? glFramebufferTexture2D output flickers

Inside my program I'm using glFramebufferTexture2D to set the render target. But if I use it, the output starts to flicker. If I use two framebuffers, the output looks quite normal.
Does anybody have an idea why that happens, or what could be improved in the following source code? This is an example; some irrelevant code is omitted.
// bind framebuffer for post process
::glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_SwapBuffer);
::glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_SwapBufferTargets[SwapBufferTarget1]->m_NativeTextureHandle, 0);
unsigned int DrawAttachments[] = { GL_COLOR_ATTACHMENT0 };
::glDrawBuffers(1, DrawAttachments);
...
// render gaussian blur
m_Shader->Use();
::glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_BlurredImageFromPass1->m_NativeTextureHandle, 0);
_InputTexturePtr->ActivateTexture(GL_TEXTURE0);
RenderMesh();
::glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _TargetTexturePtr->m_NativeTextureHandle, 0);
m_BlurredImageFromPass1->ActivateTexture(GL_TEXTURE0);
RenderMesh();
...
// copy swap buffer to system buffer
::glBindFramebuffer(GL_READ_FRAMEBUFFER, m_SwapBuffer);
::glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
::glBlitFramebuffer(0, 0, m_pConfig->m_Width, m_pConfig->m_Height, 0, 0, m_pConfig->m_Width, m_pConfig->m_Height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
EDIT: I found the problem! It was inside my swap chain. I rendered the original picture and after that a black one, so I got a flicker when the frame rate dropped.
This is probably better suited for a comment but is too large, so I will put it here. Your OpenGL semantics seem to be a little off in the following code segment:
::glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_BlurredImageFromPass1->m_NativeTextureHandle, 0);
_InputTexturePtr->ActivateTexture(GL_TEXTURE0);
RenderMesh();
::glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _TargetTexturePtr->m_NativeTextureHandle, 0);
m_BlurredImageFromPass1->ActivateTexture(GL_TEXTURE0);
RenderMesh();
glActiveTexture (and thus your ActivateTexture wrapper) is purely for selecting the active texture unit when binding a texture INPUT to a sampler in a shader program, while glFramebufferTexture2D is used in combination with glDrawBuffers to set the target OUTPUTS of your shader program. Thus, glActiveTexture and glFramebufferTexture2D should probably not be used on the same texture during the same draw operation. (Although I don't think this is what is causing your flicker.)
Additionally, I don't see where you bind/release your texture handles. It is generally good OpenGL practice to only bind objects when they are needed and to release them immediately after. As OpenGL is a state machine, forgetting to release objects can really come back to bite you on large projects.
Furthermore, when you bind a texture to a texture unit, always call glActiveTexture BEFORE you bind the texture handle.