OpenGL FBO renderbuffer or texture attachment - opengl

In what cases would I want a renderbuffer attachment in an OpenGL FBO instead of a texture attachment, besides for the default framebuffer? A texture attachment seems far more versatile.

Textures provide more features (sampling, a wider variety of formats) and are therefore more likely to carry a performance cost.
The answer is simple: use textures wherever you need to sample from the surface (there is no alternative).
Use renderbuffers wherever you don't need to sample. Based on your expressed intention of not sampling, the driver may or may not decide to store your pixel data more efficiently.
You can use GL blitting afterwards to do something with the result.
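As a concrete illustration of the renderbuffer-plus-blit approach, here is a minimal C sketch: an offscreen FBO with renderbuffer color and depth attachments, blitted to the default framebuffer afterwards. It assumes a current OpenGL 3.0+ context with a loader (GLAD/GLEW) already initialized; the size constants are placeholders and error handling is omitted.

```c
GLuint fbo, colorRb, depthRb;
const GLsizei width = 1280, height = 720;  /* placeholder dimensions */

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

/* Color attachment as a renderbuffer: we never sample it, only blit. */
glGenRenderbuffers(1, &colorRb);
glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, colorRb);

/* Depth/stencil as a renderbuffer: a pure helper buffer. */
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                          GL_RENDERBUFFER, depthRb);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* handle incomplete framebuffer */
}

/* ... render the scene into fbo ... */

/* Blit the result to the window's default framebuffer. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
```

If you later find you need to sample the color result in a shader, swap the color renderbuffer for a texture attached with glFramebufferTexture2D; the rest of the setup is unchanged.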

Extending the question to OpenGL|ES, another reason to use renderbuffers instead of textures is that textures may not be supported in some configurations, preventing you from building a valid FBO. I am thinking specifically of depth textures, which are not natively supported on some hardware, for instance NVIDIA Tegra 2/3.

Related

Can we sample texture attached to inactive FBO slot?

Can I attach two textures to one FBO, and switch between them using glDrawBuffers, binding the inactive one as shader input? This seems much more efficient than switching FBOs for multipass effects.
If we're assuming you don't have access to OpenGL 4.5/ARB/NV_texture_barrier, no you cannot. The part of the OpenGL specification that forbids feedback loops on framebuffer attached images does not care whether the image can be written to or not. This is also true for array layers or mipmap levels; reading from one layer while writing to another layer will not save you.
All that matters is attachment. You must either bind a new FBO that doesn't have the texture attached, or remove the attachment from the current FBO.
Though again, texture barrier functionality makes everything I said irrelevant. And considering how widespread it is, it's really not something you should be concerned about.
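Where texture barrier functionality is available (core in GL 4.5, or via ARB/NV_texture_barrier), the single-FBO ping-pong the question asks about can be sketched as below. All names (texA, texB, numPasses) are illustrative; it assumes both textures are already attached to the bound FBO as color attachments 0 and 1.

```c
GLenum bufA = GL_COLOR_ATTACHMENT0;
GLenum bufB = GL_COLOR_ATTACHMENT1;

for (int pass = 0; pass < numPasses; ++pass) {
    /* Alternate which attachment is written and which is sampled. */
    GLenum writeBuf = (pass & 1) ? bufB : bufA;
    GLuint readTex  = (pass & 1) ? texA : texB;

    glDrawBuffers(1, &writeBuf);
    glBindTexture(GL_TEXTURE_2D, readTex);
    /* ... draw the fullscreen pass here ... */

    /* Make this pass's framebuffer writes visible to the next pass's
     * texture fetches; without this the feedback loop is undefined. */
    glTextureBarrier();
}
```

Note that even with the barrier, each texel must only be read after all writes to it from prior draws have been separated by a glTextureBarrier call; the sketch satisfies this by barriering between every pass.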

Writing to depth buffer from opengl compute shader

Generally on modern desktop OpenGL hardware what is the best way to fill a depth buffer from a compute shader and then use that depth buffer for graphics pipeline rendering with triangles etc?
Specifically I am wondering about concerns regarding Hi-Z. I also wonder whether it's better to do the compute-shader modifications to the depth buffer before or after the graphics rendering?
If the compute shader runs after the graphics rendering, I assume the depth buffer will typically be decompressed behind the scenes. But done the other way around, I worry the depth buffer may be left in a decompressed/non-optimal state for the graphics pipeline.
As far as I know, you cannot bind textures with any of the depth formats as images, and thus cannot write to depth-format textures in compute shaders. See the glBindImageTexture documentation; it lists the formats your texture format must be compatible with. Depth formats are not among them, and the specification says depth formats are not compatible with the normal color formats.
Texture copying functions have the same compatibility restrictions, so you can't even, for example, write to a normal texture in the compute shader and then copy it to a depth texture. glCopyImageSubData does not explicitly state that restriction, but I haven't tried it.
What might work is writing to a normal texture, then rendering a fullscreen triangle and setting gl_FragDepth to values read from the texture, but that's an additional fullscreen pass.
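The fullscreen-pass workaround described above could look roughly like the following fragment shader. This is a sketch: uDepthSource is an illustrative uniform name for an R32F texture the compute shader wrote into, and it assumes you render a fullscreen triangle with the depth test set to always pass, depth writes enabled, and color writes masked off (glColorMask all GL_FALSE).

```glsl
#version 430 core

uniform sampler2D uDepthSource;  // depth values produced by the compute shader

void main() {
    // Copy the computed value for this pixel into the real depth buffer.
    gl_FragDepth = texelFetch(uDepthSource, ivec2(gl_FragCoord.xy), 0).r;
}
```

Be aware that writing gl_FragDepth unconditionally disables early depth testing for this pass, so it costs a full-resolution pass regardless of scene content.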
I don't quite understand your second question - if your compute shader stuff modifies the depth buffer, the result will most likely be different depending on whether you do it before or after regular rendering because different parts will be visible or occluded.
But maybe that question is moot since it seems you cannot manually write into depth buffers at all - which might also answer your third question - by not writing into depth buffers you cannot mess with the compression of it :)
Please note that I'm no expert in this; I had a similar problem and looked at the docs/spec myself, so this all might be wrong :) Please let me know if you manage to write to depth buffers with compute shaders!

OpenGL: Post-Processing + Multisampling =?

I'm fairly new to OpenGL and trying to figure out how to add a post-processing stage to my scene rendering. What I believe I know so far is that I create an FBO, render the scene to that, and then I can render to the back buffer using my post-processing shader with the texture from the FBO as the input.
But where this goes beyond my knowledge is when multisampling gets thrown in. The FBO must be multisampled. That leaves two possibilities: 1. the post-process shader operates 1:1 on subsamples to generate the final multisampled screen output, or 2. the shader must resolve the multiple samples and output a single screen fragment for each screen pixel. How can these be done?
Well, option 1 is supported in the GL via the features brought in by GL_ARB_texture_multisample (in core since GL 3.2). Basically, this brings new multisample texture types and the corresponding samplers like sampler2DMS, where you can explicitly fetch from a particular sample index. Whether this approach can be used efficiently to implement your post-processing effect, I don't know.
Option 2 is a little different from what you describe: the shader does not do the multisample resolve itself. You can render into a multisample FBO (you don't need a texture for that; a renderbuffer will do as well) and do the resolve explicitly using glBlitFramebuffer into another, non-multisampled FBO (this time, with a texture). This non-multisampled texture can then be used as input for the post-processing. Neither the post-processing pass nor the default framebuffer needs to be aware of multisampling at all.
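The option-2 flow can be sketched in C as follows. It assumes a GL 3.0+ context; msaaFbo, resolveFbo, width, and height are placeholder names, and resolveFbo is assumed to already have a regular GL_TEXTURE_2D color attachment. Error and completeness checks are omitted.

```c
/* Multisampled offscreen target: a renderbuffer is enough here. */
GLuint msaaFbo, msaaColor;
glGenFramebuffers(1, &msaaFbo);
glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);

glGenRenderbuffers(1, &msaaColor);
glBindRenderbuffer(GL_RENDERBUFFER, msaaColor);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4 /* samples */,
                                 GL_RGBA8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, msaaColor);

/* ... render the scene into msaaFbo ... */

/* Explicit resolve: blit multisampled -> single-sampled texture FBO. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);

/* resolveFbo's color texture now holds the resolved image and can be
 * bound as a plain sampler2D input to the post-processing shader. */
```

Note that for a multisample resolve the source and destination rectangles must have the same dimensions, and the filter is effectively ignored for the resolve itself.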

How to draw Renderbuffer as Texturebuffer in FBO?

I succeeded in rendering to a texture with a Texturebuffer, using a VAO and shaders.
But an FBO has another option for the color buffer: a Renderbuffer. I searched a lot on the internet, but could not find any example of drawing a Renderbuffer as a Texturebuffer with shaders.
If I'm not wrong, Renderbuffer was released in OpenGL 3.30, and it's faster than Texturebuffer.
Can I use a Renderbuffer as a Texturebuffer? (Stupid question, huh? I think it absolutely should work, shouldn't it?)
If yes, please point me to or give an example of drawing a renderbuffer as a texture buffer.
My goal is just study, but I'd like to know: is that a better way to draw textures? Should we use it frequently?
First of all, don't use the term "texture buffer" when you really just mean texture. A "buffer texture"/"texture buffer object" is a different concept, completely unrelated here.
If I'm not wrong, Renderbuffer was released in OpenGL 3.30, and it's faster than Texturebuffer.
No. Renderbuffers were there when FBOs were first invented. One being faster than the other is not generally true either, but these are implementation details. But it is also irrelevant.
Can I use a Renderbuffer as a Texturebuffer? (Stupid question, huh? I think it absolutely should work, shouldn't it?)
Nope. You can't use the contents of a renderbuffer directly as a source for texture mapping. Renderbuffers are just abstract memory regions the GPU renders to, and they are not in the format required for texturing. You can read back the results to the CPU using glReadPixels, or you could copy the data into a texture object, e.g. via glCopyTexSubImage - but that would be much slower than directly rendering into textures.
So renderbuffers are good for a different set of use cases:
offscreen rendering (e.g. where the image results will be written to a file, or encoded to a video)
as helper buffers during rendering, like the depth buffer or stencil buffer, where you do not care about the final contents of these buffers anyway
as intermediate buffers when the image data can't be directly used by the following steps, e.g. when using multisampling, and copying the result to a non-multisampled framebuffer or texture
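The two escape hatches mentioned above (glReadPixels and glCopyTexSubImage) can be sketched as follows, assuming "fbo" has an RGBA8 renderbuffer on GL_COLOR_ATTACHMENT0 and "tex" is an already-allocated GL_TEXTURE_2D of at least width x height; names and sizes are placeholders.

```c
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0);

/* 1) Read back to the CPU, e.g. to write the image to a file. */
unsigned char *pixels = malloc((size_t)width * height * 4);  /* needs <stdlib.h> */
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* 2) Or copy into a texture object, staying on the GPU. */
glBindTexture(GL_TEXTURE_2D, tex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
```

As the answer notes, both paths cost an extra copy; if you intend to sample the result in a shader, rendering directly into a texture attachment avoids the copy entirely.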
It appears that you have your terminology mixed up.
You attach images to Framebuffer Objects. Those images can either be a Renderbuffer Object (this is an offscreen surface that has very few uses besides attaching and blitting) or they can be part of a Texture Object.
Use whichever makes sense. If you need to read the results of your drawing in a shader then obviously you should attach a texture. If you just need a depth buffer, but never need to read it back, a renderbuffer might be fine. Some older hardware does not support multisampled textures, so that is another situation where you might favor renderbuffers over textures.
Performance wise, do not make any assumptions. You might think that since renderbuffers have a lot fewer uses they would somehow be quicker, but that's not always the case. glBlitFramebuffer (...) can be slower than drawing a textured quad.

What is an OpenGL RBO?

I just read the following presentation which seemed to recommend RBOs over PBOs for GPU->CPU transfers. I have been looking for some source explaining RBOs, without success.
Anyone know of a good source explaining RBOs?
From the OpenGL wiki:
Renderbuffer Objects are OpenGL Objects that contain images. They are created and used specifically with Framebuffer Objects. They are optimized for being used as render targets, while Textures may not be.
more information here
An example on gamedev.net here (have a look at "Adding a Depth Buffer" section)
EDIT
When you render to a framebuffer you can choose between two types of framebuffer-attachable images: texture images and renderbuffer images. In the former case you render to a texture; in the latter you obtain a purely offscreen rendering.
Here is a discussion of the difference between these two kinds of framebuffer-attachable images.
Here you can find more information about FBO and attachable images.