How to draw Renderbuffer as Texturebuffer in FBO? - opengl

I have succeeded in rendering to texture with a Texturebuffer, using a VAO and shaders.
But an FBO has another option for its color buffer: a Renderbuffer. I searched a lot on the internet, but could not find any example of drawing a Renderbuffer as a Texturebuffer with shaders.
If I'm not wrong, Renderbuffer was released in OpenGL 3.30, and it's faster than Texturebuffer.
Can I use a Renderbuffer as a Texturebuffer? (Stupid question, huh? I think it should absolutely be possible, shouldn't it?)
If yes, please point me to, or give me, an example of drawing a render buffer as a texture buffer.
My target is just study, but I'd also like to know: is that a better way to draw textures? Should we use it frequently?

First of all, don't use the term "texture buffer" when you really just mean a texture. A "buffer texture"/"texture buffer object" is a different concept, completely unrelated here.
If I'm not wrong, Renderbuffer was released in OpenGL 3.30, and it's faster than Texturebuffer.
No. Renderbuffers have been there since FBOs were first invented. One being faster than the other is not generally true either; these are implementation details. But it is also irrelevant here.
Can I use a Renderbuffer as a Texturebuffer? (Stupid question, huh? I think it should absolutely be possible, shouldn't it?)
Nope. You can't use the contents of a renderbuffer directly as a source for texture mapping. Renderbuffers are just abstract memory regions the GPU renders to, and they are not in the format required for texturing. You can read the results back to the CPU using glReadPixels, or you could copy the data into a texture object, e.g. via glCopyTexSubImage2D - but that would be much slower than directly rendering into textures.
So renderbuffers are good for a different set of use cases:
offscreen rendering (e.g. where the image results will be written to a file, or encoded to a video)
as helper buffers during rendering, like the depth buffer or stencil buffer, where you do not care about the final contents of these buffers anyway
as intermediate buffers when the image data can't be directly used by the following steps, e.g. when using multisampling and copying the result to a non-multisampled framebuffer or texture (see the sketch below)
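For that last multisampling case, the copy is a resolve blit between two complete FBOs. A minimal sketch, assuming msaa_fbo holds the multisampled renderbuffer and resolve_fbo holds a regular texture or renderbuffer of the same size (both names are placeholders):
//Resolve a multisampled renderbuffer into a single-sampled attachment via a blit
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaa_fbo);    //source: FBO with the multisampled renderbuffer
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolve_fbo); //target: FBO with a normal texture/renderbuffer
glBlitFramebuffer(0, 0, width, height,
                  0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);   //equal-sized rectangles; GL_NEAREST is the safe filter here
glBindFramebuffer(GL_FRAMEBUFFER, 0);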

It appears that you have your terminology mixed up.
You attach images to Framebuffer Objects. Those images can either be a Renderbuffer Object (this is an offscreen surface that has very few uses besides attaching and blitting) or they can be part of a Texture Object.
Use whichever makes sense. If you need to read the results of your drawing in a shader then obviously you should attach a texture. If you just need a depth buffer, but never need to read it back, a renderbuffer might be fine. Some older hardware does not support multisampled textures, so that is another situation where you might favor renderbuffers over textures.
Performance-wise, do not make any assumptions. You might think that since renderbuffers have far fewer uses they would somehow be quicker, but that's not always the case. glBlitFramebuffer (...) can be slower than drawing a textured quad.
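As a concrete illustration of the texture route (which is what the original question is really after), here is a minimal, hedged sketch; fbo, tex, width and height are placeholder names:
//Create a texture and attach it as the FBO's color buffer, then sample it later
GLuint fbo, tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); //no mipmaps here, so avoid mipmap filters
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
//...render the offscreen pass here...
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, tex); //the rendered image can now be sampled like any other texture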

Related

How to render offscreen on OpenGL? [duplicate]

My aim is to render an OpenGL scene without a window, directly into a file. The scene may be larger than my screen resolution.
How can I do this?
I want to be able to choose the render area size freely, for example 10000x10000, if possible.
It all starts with glReadPixels, which you will use to transfer the pixels stored in a specific buffer on the GPU to the main memory (RAM). As you will notice in the documentation, there is no argument to choose which buffer. As is usual with OpenGL, the current buffer to read from is a state, which you can set with glReadBuffer.
So a very basic offscreen rendering method would be something like the following. I use C++ pseudo-code, so it will likely contain errors, but it should make the general flow clear:
//Before swapping
std::vector<std::uint8_t> data(width*height*4);
glReadBuffer(GL_BACK);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,&data[0]);
This will read the current back buffer (usually the buffer you're drawing to). You should call this before swapping the buffers. Note that you can also perfectly well read the back buffer with the above method, clear it and draw something totally different before swapping it. Technically you can also read the front buffer, but this is often discouraged because implementations are theoretically allowed to make optimizations that may leave your front buffer containing rubbish.
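Since the goal here is to end up with a file, you can dump the pixels read back above to disk yourself. A minimal sketch writing a binary PPM, assuming data was filled by the glReadPixels call above (note the BGRA channel order and OpenGL's bottom-up row order):
//Sketch: dump the BGRA pixels read back above into a binary PPM file (needs <fstream>; error handling omitted)
std::ofstream file("frame.ppm", std::ios::binary);
file << "P6\n" << width << " " << height << "\n255\n";
for(int y = height - 1; y >= 0; --y) //OpenGL rows start at the bottom, PPM rows at the top
{
    for(int x = 0; x < width; ++x)
    {
        const std::uint8_t* px = &data[(y*width + x)*4];
        file.put(px[2]); file.put(px[1]); file.put(px[0]); //BGRA -> RGB
    }
}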
There are a few drawbacks with reading the back buffer like this. First of all, we don't really do offscreen rendering, do we? We render to the screen buffers and read from those. We can emulate offscreen rendering by never swapping the back buffer in, but it doesn't feel right. Next to that, the front and back buffers are optimized to display pixels, not to read them back. That's where Framebuffer Objects come into play.
Essentially, an FBO lets you create a framebuffer other than the default one (the FRONT and BACK buffers), which allows you to draw to a memory buffer instead of the screen buffers. In practice, you can either draw to a texture or to a renderbuffer. The former is optimal when you want to re-use the pixels in OpenGL itself as a texture (e.g. a naive "security camera" in a game), the latter if you just want to render and read back. With this, the code above would become something like the following; again pseudo-code, so don't kill me if I mistyped or forgot some statements.
//Somewhere at initialization
GLuint fbo, render_buf;
glGenFramebuffers(1,&fbo);
glGenRenderbuffers(1,&render_buf);
glBindRenderbuffer(GL_RENDERBUFFER, render_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, render_buf);
//At deinit:
glDeleteFramebuffers(1,&fbo);
glDeleteRenderbuffers(1,&render_buf);
//Before drawing
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
//after drawing
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo); //glReadPixels reads from the READ framebuffer, so bind the FBO there as well
std::vector<std::uint8_t> data(width*height*4);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,&data[0]);
// Return to onscreen rendering:
glBindFramebuffer(GL_FRAMEBUFFER, 0);
This is a simple example; in reality you likely also want storage for the depth (and stencil) buffer. You also might want to render to a texture, but I'll leave that as an exercise. In any case, you will now perform real offscreen rendering and it might work faster than reading the back buffer.
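As a rough sketch of that exercise, the depth (or combined depth/stencil) storage is just a second renderbuffer attached to the same FBO during the initialization above; depth_buf is a placeholder name:
//During the same initialization as above: add depth/stencil storage
GLuint depth_buf;
glGenRenderbuffers(1, &depth_buf);
glBindRenderbuffer(GL_RENDERBUFFER, depth_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depth_buf);
//It is also worth verifying that the FBO is usable before drawing:
if(glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    ; //handle the error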
Finally, you can use pixel buffer objects (PBOs) to make reading the pixels asynchronous. The problem is that glReadPixels blocks until the pixel data is completely transferred, which may stall your CPU. With PBOs the implementation may return immediately, as it controls the buffer anyway. It is only when you map the buffer that the pipeline will block. However, PBOs may be optimized to buffer the data solely in RAM, so this block could take a lot less time. The read-pixels code would become something like this:
//Init:
GLuint pbo;
glGenBuffers(1,&pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width*height*4, NULL, GL_DYNAMIC_READ);
//Deinit:
glDeleteBuffers(1,&pbo);
//Reading:
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,0); // 0 instead of a pointer: it is now an offset into the buffer.
//DO SOME OTHER STUFF (otherwise this is a waste of your time)
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo); //Might not be necessary...
GLubyte* pixel_data = (GLubyte*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
//...use pixel_data, then release the mapping:
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
The part in caps is essential. If you just issue a glReadPixels into a PBO, followed by a glMapBuffer of that PBO, you gained nothing but a lot of code. Sure, the glReadPixels might return immediately, but now the glMapBuffer will stall because it has to safely map the data from the read buffer to the PBO and into a block of memory in main RAM.
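One common way to guarantee there is other work between the read and the map is to ping-pong between two PBOs: start the read into one while mapping the one that was filled on the previous frame. A rough sketch, assuming two PBOs created like the single one above (e.g. GLuint pbo[2]):
//Per frame: read into pbo[index], map pbo[next] which holds last frame's pixels
static int index = 0;
int next = (index + 1) % 2;
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[index]);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,0); //kicks off the transfer
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[next]);
GLubyte* prev = (GLubyte*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if(prev)
{
    //...use the previous frame's pixels here...
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
index = next;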
Please also note that I use GL_BGRA everywhere; this is because many graphics cards internally use it as the optimal rendering format (or the GL_BGR version without alpha). It should be the fastest format for pixel transfers like this. I'll try to find the NVIDIA article I read about this a few months back.
When using OpenGL ES 2.0, GL_DRAW_FRAMEBUFFER might not be available, you should just use GL_FRAMEBUFFER in that case.
I'll assume that creating a dummy window (you don't render to it; it's just there because the API requires you to make one) that you create your main context into is an acceptable implementation strategy.
Here are your options:
Pixel buffers
A pixel buffer, or pbuffer (which isn't a pixel buffer object), is first and foremost an OpenGL context. Basically, you create a window as normal, then pick a pixel format from wglChoosePixelFormatARB (pbuffer formats must be gotten from here). Then, you call wglCreatePbufferARB, giving it your window's HDC and the pixel buffer format you want to use. Oh, and a width/height; you can query the implementation's maximum width/heights.
The default framebuffer for a pbuffer is not visible on the screen, and the max width/height is whatever the hardware wants to let you use. So you can render to it and use glReadPixels to read back from it.
You'll need to share your context with the given context if you have created objects in the window context. Otherwise, you can use the pbuffer context entirely separately. Just don't destroy the window context.
The advantage here is greater implementation support (though most drivers that don't support the alternatives are also old drivers for hardware that's no longer being supported. Or is Intel hardware).
The downsides are these: pbuffers don't work with core OpenGL contexts. They may work for compatibility contexts, but there is no way to give wglCreatePbufferARB information about OpenGL versions and profiles.
Framebuffer Objects
Framebuffer Objects are more "proper" offscreen rendertargets than pbuffers. FBOs are within a context, while pbuffers are about creating new contexts.
FBOs are just a container for images that you render to. The maximum dimensions that the implementation allows can be queried; you can assume it to be GL_MAX_VIEWPORT_DIMS (make sure an FBO is bound before checking this, as it changes based on whether an FBO is bound).
Since you're not sampling textures from these (you're just reading values back), you should use renderbuffers instead of textures. Their maximum size may be larger than those of textures.
The upside is the ease of use. Rather than have to deal with pixel formats and such, you just pick an appropriate image format for your glRenderbufferStorage call.
The only real downside is the narrower band of hardware that supports them. In general, anything that AMD or NVIDIA makes that they still support (right now, GeForce 6xxx or better [note the number of x's], and any Radeon HD card) will have access to ARB_framebuffer_object or OpenGL 3.0+ (where it's a core feature). Older drivers may only have EXT_framebuffer_object support (which has a few differences). Intel hardware is potluck; even if they claim 3.x or 4.x support, it may still fail due to driver bugs.
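Those limits can simply be queried at runtime instead of guessed; a short sketch:
//Query the size limits relevant to offscreen targets
GLint max_renderbuffer = 0, max_texture = 0, max_viewport[2] = {0, 0};
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE, &max_renderbuffer); //max renderbuffer width/height
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_texture);           //max texture width/height
glGetIntegerv(GL_MAX_VIEWPORT_DIMS, max_viewport);          //max viewport width and height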
If you need to render something that exceeds the maximum FBO size of your GL implementation, libtr works pretty well:
The TR (Tile Rendering) library is an OpenGL utility library for doing tiled rendering. Tiled rendering is a technique for generating large images in pieces (tiles).
TR is memory efficient; arbitrarily large image files may be generated without allocating a full-sized image buffer in main memory.
The easiest way is to use something called Framebuffer Objects (FBOs). You will still have to create a window to create an OpenGL context, though (but this window can be hidden).
The easiest way to fulfill your goal is to use an FBO to do off-screen rendering. And you don't need to render to a texture and then get the teximage. Just render to a renderbuffer and use glReadPixels. This link will be useful: see Framebuffer Object Examples.

GLSL Renderbuffer really required?

I am trying to write a program that draws video camera frames onto a quad.
I saw tutorials explaining that rendering with framebuffers can be faster, and I'm still learning how to do it.
But then, besides framebuffers, I found that there are also renderbuffers.
The question is, if the purpose is only to write a texture onto a quad that will fill up the screen, do I really need a renderbuffer?
I understand that renderbuffers are for depth testing, which I think is only for checking the Z position of each pixel, so it would seem silly to have to create a renderbuffer for my scenario, correct?
A framebuffer object is a place to stick images so that you can render to them. Color buffers, depth buffers, etc all go into a framebuffer object.
A renderbuffer is like a texture, but with two important differences:
It is always 2D and has no mipmaps. So it's always exactly 1 image.
You cannot read from a renderbuffer. You can attach them to an FBO and render to them, but you can't sample from them with a texture access or something.
So you're talking about two mostly separate concepts. Renderbuffers do not have to be "for depth testing." That is a common use case for renderbuffers, because if you're rendering the colors to a texture, you usually don't care about the depth values themselves. You need a depth buffer because you need depth testing for hidden-surface removal, but you don't need to sample from that depth buffer. So instead of making a depth texture, you make a depth renderbuffer.
But renderbuffers can also have color formats rather than depth formats. You just can't attach them as textures. You can still blit from/to them, and you can still read them back with glReadPixels. You just can't read from them in a shader.
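For example, showing a color renderbuffer's contents without ever sampling it can be done with a blit to the default framebuffer; a sketch assuming fbo has the renderbuffer attached at color attachment 0:
//Blit the color renderbuffer attached to fbo onto the window (no sampling involved)
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); //0 = default framebuffer
glBlitFramebuffer(0, 0, width, height,
                  0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_FRAMEBUFFER, 0);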
Oddly enough, this does nothing to answer your question:
The question is, if the purpose is only to write a texture onto a quad that will fill up the screen, do I really need a renderbuffer?
I don't see why you need a framebuffer or a renderbuffer of any kind. A texture is a texture; just draw a textured quad.

How to render Framebuffer Objects on multi-sampled textures?

I currently have a rendering engine that uses multiple passes, in which various parts of the image are rendered to textures and then combined using shaders. It works, and now I would like to enable multi-sampling.
I read here ( http://www.opengl.org/wiki/Framebuffer_Object_Examples#MSAA ) that, with OpenGL, you can't attach a GL_TEXTURE2D_MULTISAMPLE to a framebuffer object.
It seems one way to use multi-sampling and still have access to the result as texture is to use a multi-sampled render buffer, and then copy the result into a multisample texture.
My question is: what would be the best way to go forward?
Is it possible to render in a render buffer and use the output in my shader, without copying into a texture?
Should I indeed copy the content of the buffer into a texture, and then use it?
Is there another, better, solution?
Thanks.
I read here ( http://www.opengl.org/wiki/Framebuffer_Object_Examples#MSAA ) that, with OpenGL, you can't attach a GL_TEXTURE2D_MULTISAMPLE to a framebuffer object.
Read it again. It says nothing about GL_TEXTURE_2D_MULTISAMPLE textures. Actually, I take that back: don't read that page again. If you want good FBO info, read the page on Framebuffer Objects that explains 3.x behavior. The page you linked to is old.
Back in the EXT days, all you had were multisampled renderbuffers, because multisample textures didn't exist. You could create multisampled buffers, but you couldn't texture with them. You could only blit them.
Since OpenGL 3.2, you can create multisampled textures. And you can attach them to an FBO just like any other texture.
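A brief, hedged sketch of that route (the 4-sample count and the names msaa_tex, msaa_fbo are just for illustration):
//Create a 4x multisampled texture and attach it to an FBO (GL 3.2+ / ARB_texture_multisample)
GLuint msaa_tex, msaa_fbo;
glGenTextures(1, &msaa_tex);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, msaa_tex);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8, width, height, GL_TRUE);
glGenFramebuffers(1, &msaa_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, msaa_fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_MULTISAMPLE, msaa_tex, 0);
//In the shader, sample it through a sampler2DMS with texelFetch(sampler, coord, sample_index)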

OpenGL FBO renderbuffer or texture attachment

In what cases would I want a renderbuffer attachment in an OpenGL FBO instead of a texture attachment, other than for the default framebuffer? A texture attachment seems far more versatile.
Textures provide more features (sampling! a wider variety of formats) and are hence more likely to be subject to performance loss.
The answer is simple: use textures wherever you have to sample from the surface (there is no alternative).
Use renderbuffers wherever you don't need to sample. The driver may or may not decide to store your pixel data more efficiently based on your expressed intention of not sampling it.
You can use GL blitting afterwards to do something with the result.
Extending the question to OpenGL ES, another reason to use renderbuffers instead of textures is that textures may not be supported in some configurations and would prevent you from building a valid FBO. I am specifically thinking of depth textures, which are not natively supported on some hardware, for instance the NVIDIA Tegra 2/3.

What are the differences between a Frame Buffer Object and a Pixel Buffer Object in OpenGL?

What is the difference between FBO and PBO? Which one should I use for off-screen rendering?
What is the difference between FBO and PBO?
A better question is how are they similar. The only thing that is similar about them is their names.
A Framebuffer Object (note the capitalization: framebuffer is one word, not two) is an object that contains multiple images which can be used as render targets.
A Pixel Buffer Object is:
A Buffer Object. FBOs are not buffer objects. Again: framebuffer is one word.
A buffer object that is used for asynchronous uploading/downloading of pixel data to/from images.
If you want to render to a texture or just a non-screen framebuffer, then you use FBOs. If you're trying to read pixel data back to your application asynchronously, or you're trying to transfer pixel data to OpenGL images asynchronously, then you use PBOs.
They're nothing alike.
An FBO (Framebuffer Object) is a target you can render images to, other than the default framebuffer or screen.
A PBO (Pixel Buffer Object) allows asynchronous transfers of pixel data to and from the device. This can be helpful to improve overall performance when rendering if you have other things that can be done while waiting for the pixel transfer.
I would read VBOs, PBOs and FBOs:
Apple has posted two very nice bits of sample code demonstrating PBOs and FBOs. Even though these are Mac-specific, as sample code they're good on any platform because PBOs and FBOs are OpenGL extensions, not windowing system extensions.
So what are all these objects? Here's the situation:
I want to highlight something.
An FBO is not a block of memory. I think of it as something like a struct of pointers: you must attach a texture (or renderbuffer) to the FBO before you can use it. Once a texture is attached, you can draw into it, for offscreen rendering or for a second-pass effect.
struct FBO {
    AttachColor0 *ptr0;
    AttachColor1 *ptr1;
    AttachColor2 *ptr2;
    AttachDepth  *ptr3;
};
A PBO, on the other hand, is a block of memory. Try to think of it as a malloc of x bytes; you can then memcpy data from it into a texture/FBO, or into it.
Why use a PBO?
It gives you an intermediate memory buffer that interfaces with host memory, so OpenGL does not have to stop drawing while a texture is uploaded to, or downloaded from, the host.
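To make that concrete, here is a hedged sketch of using a PBO as such a staging buffer for a texture upload; texture, pbo, frame_pixels, width and height are placeholder names:
//Stage a frame in a PBO, then let OpenGL copy it into the texture asynchronously
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, width*height*4, NULL, GL_STREAM_DRAW);
void* dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
if(dst)
{
    memcpy(dst, frame_pixels, width*height*4); //the "memcpy" from the description above (needs <cstring>)
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
}
glBindTexture(GL_TEXTURE_2D, texture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, 0); //0 = byte offset into the bound PBO
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);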