I just read the following presentation which seemed to recommend RBOs over PBOs for GPU->CPU transfers. I have been looking for some source explaining RBOs, without success.
Anyone know of a good source explaining RBOs?
From the OpenGL wiki:
Renderbuffer Objects are OpenGL Objects that contain images. They are created and used specifically with Framebuffer Objects. They are optimized for being used as render targets, while Textures may not be.
More information here.
An example on gamedev.net here (have a look at "Adding a Depth Buffer" section)
EDIT
When you render to a framebuffer you can choose between two types of framebuffer-attachable images: texture images and renderbuffer images. In the former case the framebuffer renders to a texture; in the latter you obtain a purely offscreen rendering.
Here is a discussion of the difference between these two kinds of framebuffer-attachable images.
Here you can find more information about FBOs and attachable images.
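As a minimal sketch in C of the two attachment paths, assuming GLEW for function loading; fbo, tex and rbo are placeholder names for already-created, already-allocated objects:

    #include <GL/glew.h>  /* assuming GLEW for function loading */

    /* Hypothetical helper showing the two framebuffer-attachable image
     * types; fbo, tex and rbo are assumed to be already allocated. */
    void attach_color_image(GLuint fbo, GLuint tex, GLuint rbo, int useTexture)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        if (useTexture)
            /* render-to-texture: the results can later be sampled */
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_2D, tex, 0);
        else
            /* pure offscreen target: not directly samplable */
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                      GL_RENDERBUFFER, rbo);
    }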
Related
I succeeded in rendering to texture with a Texturebuffer, using a VAO and shaders.
But an FBO has another option for the color buffer: a Renderbuffer. I searched a lot on the internet, but could not find any example of drawing a Renderbuffer as a Texturebuffer with shaders.
If I'm not wrong, Renderbuffer was released in OpenGL 3.30, and it's faster than Texturebuffer.
Can I use Renderbuffer as Texturebuffer? (A stupid question, huh? I think the answer should be "absolutely", shouldn't it?)
If yes, please point me to or give an example of drawing a renderbuffer as a texture buffer.
My goal is just study, but I'd like to know: is that a better way to draw textures? Should we use it frequently?
First of all, don't use the term "texture buffer" when you really just mean texture. A "buffer texture"/"texture buffer object" is a different concept, completely unrelated here.
If I'm not wrong, Renderbuffer was released in OpenGL 3.30, and it's faster than Texturebuffer.
No. Renderbuffers have been there since FBOs were first invented. One being faster than the other is not generally true either; these are implementation details. But it is also irrelevant here.
Can I use Renderbuffer as Texturebuffer? (A stupid question, huh? I think the answer should be "absolutely", shouldn't it?)
Nope. You can't use the contents of a renderbuffer directly as a source for texture mapping. Renderbuffers are just abstract memory regions the GPU renders to, and they are not in the format required for texturing. You can read the results back to the CPU using glReadPixels, or you can copy the data into a texture object, e.g. via glCopyTexSubImage - but that would be much slower than directly rendering into textures.
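For completeness, a minimal sketch of that copy route, assuming a complete FBO whose color attachment is a renderbuffer and an already-allocated GL_TEXTURE_2D of the same size (all names are placeholders):

    #include <GL/glew.h>  /* assuming GLEW for function loading */

    /* Copy the color contents of a renderbuffer-backed FBO into an
     * existing texture. Assumes both have the same dimensions. */
    void copy_renderbuffer_to_texture(GLuint fbo, GLuint tex,
                                      GLsizei width, GLsizei height)
    {
        glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
        glReadBuffer(GL_COLOR_ATTACHMENT0);           /* source attachment */
        glBindTexture(GL_TEXTURE_2D, tex);
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0,         /* destination level  */
                            0, 0,                     /* destination offset */
                            0, 0, width, height);     /* source rectangle   */
    }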
So renderbuffers are good for a different set of use cases:
offscreen rendering (e.g. where the image results will be written to a file or encoded to a video; see the sketch after this list)
as helper buffers during rendering, like the depth buffer or stencil buffer, where you do not care about the final contents of these buffers anyway
as an intermediate buffer when the image data can't be directly used by the following steps, e.g. when using multisampling and then copying the result to a non-multisampled framebuffer or texture
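A minimal sketch of the first use case, assuming a current OpenGL 3.0+ context; the actual draw calls are elided, and all names are placeholders:

    #include <GL/glew.h>   /* assuming GLEW for function loading */
    #include <stdlib.h>

    /* Render offscreen into a color renderbuffer, then read the pixels
     * back so they can be written to a file or handed to an encoder. */
    unsigned char *render_offscreen(GLsizei width, GLsizei height)
    {
        GLuint fbo, rbo;
        unsigned char *pixels = NULL;

        glGenRenderbuffers(1, &rbo);
        glBindRenderbuffer(GL_RENDERBUFFER, rbo);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_RENDERBUFFER, rbo);

        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
            /* ... draw the scene here ... */
            pixels = malloc((size_t)width * height * 4);
            glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        }

        glDeleteFramebuffers(1, &fbo);
        glDeleteRenderbuffers(1, &rbo);
        return pixels;  /* caller frees the buffer */
    }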
It appears that you have your terminology mixed up.
You attach images to Framebuffer Objects. Those images can either be a Renderbuffer Object (this is an offscreen surface that has very few uses besides attaching and blitting) or they can be part of a Texture Object.
Use whichever makes sense. If you need to read the results of your drawing in a shader then obviously you should attach a texture. If you just need a depth buffer, but never need to read it back, a renderbuffer might be fine. Some older hardware does not support multisampled textures, so that is another situation where you might favor renderbuffers over textures.
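A minimal sketch of that second case, assuming an OpenGL 3.0+ context (names and dimensions are placeholders): a texture for the color attachment, so it can be sampled later, and a renderbuffer for the depth attachment, which is never read back:

    #include <GL/glew.h>  /* assuming GLEW for function loading */

    /* Build an FBO with a samplable color texture and a write-only
     * depth renderbuffer; returns the FBO name, outputs the texture. */
    GLuint make_fbo_with_depth_rbo(GLsizei width, GLsizei height,
                                   GLuint *colorTex)
    {
        GLuint fbo, depthRbo;

        glGenTextures(1, colorTex);
        glBindTexture(GL_TEXTURE_2D, *colorTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        /* no mipmaps allocated, so use a non-mipmapped filter */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

        glGenRenderbuffers(1, &depthRbo);
        glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24,
                              width, height);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, *colorTex, 0);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, depthRbo);
        return fbo;
    }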
Performance wise, do not make any assumptions. You might think that since renderbuffers have a lot fewer uses they would somehow be quicker, but that's not always the case. glBlitFramebuffer (...) can be slower than drawing a textured quad.
I want to read the depth component of a scene rendered to a framebuffer object. I initially used glReadPixels() but found that it could only read pixels from the default framebuffer.
The answers to some relevant questions on this site suggest using a PBO, but I haven't tried it yet. It seems that PBO reads are asynchronous; which command can I use to synchronize the read at the end?
A PBO won't help you here, because a PBO is just a different kind of destination to read into (memory owned by the OpenGL implementation instead of memory on the host).
The usual way to make a depth component readable back in OpenGL is to use a depth texture attached to the depth attachment and, after rendering, to use glGetTexImage to retrieve the data.
In the case of a normal color attachment you could use glReadPixels with a previous call to glReadBuffer to select the GL_COLOR_ATTACHMENT<i> of the bound FBO.
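A minimal sketch of both readback paths, assuming depthTex is a depth-format texture attached to the FBO's GL_DEPTH_ATTACHMENT and the output buffers are caller-allocated (all names are placeholders):

    #include <GL/glew.h>  /* assuming GLEW for function loading */

    /* Read back the depth attachment via the texture, and the color
     * attachment via glReadBuffer + glReadPixels on the bound FBO. */
    void read_back(GLuint fbo, GLuint depthTex,
                   GLsizei width, GLsizei height,
                   float *depthOut, unsigned char *colorOut)
    {
        /* depth: fetch the texture image directly */
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT,
                      depthOut);

        /* color: select the attachment, then read pixels */
        glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
        glReadBuffer(GL_COLOR_ATTACHMENT0);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE,
                     colorOut);
    }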
I have been using several custom FBOs. FBO-A has an MSAA texture attached, into which the geometry is rendered. It is then resolved by blitting the MSAA texture attachment of FBO-A into a regular 2D texture attachment of FBO-B. This procedure implies switching between multiple FBOs, and it is stated in several sources that it is better for performance to switch between attachments than between different FBOs. I tried attaching both the MSAA texture and the regular one to the same FBO, but found I can't do the resolve by blitting. If I do a texture copy from the MSAA texture to the regular one, will the MSAA be resolved as with blitting?
UPDATE:
Just for those interested in whether it is worth it (performance-wise) to use several FBOs vs. several attachments in a single FBO:
I just did a test (NVIDIA Quadro 4000) and the resulting FPS was pretty much identical (±15-20 frames). It is probably hardware- and OpenGL-implementation-dependent, though.
I tried attaching both the MSAA texture and the regular one to the same FBO, but found I can't do the resolve by blitting.
Of course not. In order to do a blit, the source and destination framebuffers must be complete. And one of the rules of completeness states that all of the attached images must have the same number of samples.
If I do a texture copy from the MSAA texture to the regular one, will the MSAA be resolved as with blitting?
What do you mean by a "texture copy?"
If you're talking about using the new 4.3/ARB_copy_image glCopyImageSubData, then no. Again, the sample counts of the source and destination images must match.
If you're talking about copying from framebuffers to textures using glCopyTexSubImage2D and the like, then yes, that will perform a multisample resolve.
However, you really should just do the blit.
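For reference, the resolve blit is short. A sketch assuming two complete, same-sized framebuffers (placeholder names), the second one single-sampled; note that for multisample blits the source and destination rectangles must match and the filter must be GL_NEAREST:

    #include <GL/glew.h>  /* assuming GLEW for function loading */

    /* Resolve a multisampled FBO into a single-sampled one. */
    void resolve_msaa(GLuint msaaFbo, GLuint resolveFbo,
                      GLsizei width, GLsizei height)
    {
        glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
        glBlitFramebuffer(0, 0, width, height,    /* source rectangle */
                          0, 0, width, height,    /* dest rectangle   */
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);
    }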
In what cases would I want a renderbuffer attachment in an OpenGL FBO instead of a texture attachment, besides for the default framebuffer? A texture attachment seems far more versatile.
Textures provide more features (sampling! a variety of formats) and hence are more likely to be subject to performance loss.
The answer is simple: use textures wherever you have to sample from the surface (there is no alternative).
Use renderbuffers wherever you don't need to sample. The driver may or may not decide to store your pixel data more efficiently based on your expressed intention of not sampling.
You can use GL blitting afterwards to do something with the result.
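For example, a minimal sketch of "doing something with the result" by blitting the FBO to the default framebuffer (names and dimensions are placeholders):

    #include <GL/glew.h>  /* assuming GLEW for function loading */

    /* Present the FBO's renderbuffer contents on screen. */
    void present_fbo(GLuint fbo, GLsizei width, GLsizei height,
                     GLsizei winWidth, GLsizei winHeight)
    {
        glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);  /* 0 = default framebuffer */
        glBlitFramebuffer(0, 0, width, height,
                          0, 0, winWidth, winHeight,
                          GL_COLOR_BUFFER_BIT, GL_LINEAR);
    }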
Extending the question to OpenGL ES, another reason to use renderbuffers instead of textures is that some texture types may not be supported in some configurations and would prevent you from building a valid FBO. I am specifically thinking of depth textures, which are not natively supported on some hardware, for instance NVIDIA Tegra 2/3.
Let's say I have an application (the details of the application should be irrelevant for solving the problem). Instead of rendering to the screen, I am somehow able to force the application to render to a framebuffer object instead (by messing with GLEW or intercepting a call in a DLL).
Once the application has rendered its content to the FBO, is it possible to apply a shader to the contents of the framebuffer? My knowledge is limited here, so from what I understand, at this stage all information about vertices is no longer available and all the necessary tests have been applied, so what's left in the buffer is just pixel data. Is this correct?
If it is possible to apply a shader to the FBO, is it possible to get a fisheye effect? (like this, for example: http://idea.hosting.lv/a/gfx/quakeshots.html)
The technique used in the link above is to create six different viewports, render each viewport to a cubemap face, and then apply the texture to a mesh.
Thanks
A framebuffer object encapsulates several other buffers, specifically those that are implicitly indexed by fragment location. So a single framebuffer object may bundle together a colour buffer, a depth buffer, a stencil buffer and a bunch of others. The non-texture buffers attached this way are known as renderbuffers.
You're right: there's no geometry in there. For the purposes of reading back the scene you get only final fragment values, which, if you're hijacking an existing app, will probably be a 2D pixel image of the frame and some other things that you don't care about.
If your GPU has render-to-texture support (originally an extension circa OpenGL 1.3, but you'd be hard pressed to find a GPU without it nowadays, even in mobile phones), then you can attach a texture as a render target within a framebuffer. So the rendering code is exactly as it would be normally, but it ends up writing the results to a texture that you can then use as a source for drawing.
Fragment shaders can programmatically decide which location of a texture map to sample in order to create their output. So you can write a fragment shader that applies a fisheye lens, though you're restricted to the field of view rendered in the original texture, obviously. Which would probably be what you'd get in your Quake example if you had just one of the sides of the cube available rather than six.
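As an illustrative sketch of such a shader, written here as a GLSL string in C; the distortion formula is my own assumption for demonstration, not the technique from the linked page:

    /* Fragment shader for a fullscreen quad textured with the FBO's
     * color texture; warps the sample coordinates radially. */
    static const char *fisheye_fs =
        "#version 330 core\n"
        "uniform sampler2D scene;   /* the texture the FBO rendered into */\n"
        "in vec2 uv;                /* [0,1] coords of a fullscreen quad */\n"
        "out vec4 fragColor;\n"
        "void main() {\n"
        "    vec2 p = uv * 2.0 - 1.0;        /* recenter on the origin */\n"
        "    float r = length(p);\n"
        "    vec2 q = p * mix(1.0, r, 0.5);  /* simple barrel distortion */\n"
        "    fragColor = texture(scene, q * 0.5 + 0.5);\n"
        "}\n";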
In summary: the answer is 'yes' to all of your questions. There's a brief introduction to framebuffer objects here.
Look here for some relevant info:
http://www.opengl.org/wiki/Framebuffer_Object
The short, simple explanation is that an FBO is the 3D equivalent of a software frame buffer. You have direct access to individual pixels, instead of having to modify a texture and upload it. You can have shaders read from the textures attached to an FBO. The link above gives an overview of the procedure.