What are the usage differences between multiple viewports vs multiple framebuffers? - opengl

I'm quite new to OpenGL and the topic of framebuffers is quite confusing to me. I read that one use of them is to create a second target to render to, allowing us to make mirrors etc. However, wouldn't glViewport() be enough, since you can direct different rendering operations to different parts of the same framebuffer? What are the usage differences between the two, and when should I prefer each one?

The two concepts are entirely orthogonal; they do not at all map to each other.
In theory, you can perform some forms of off-screen rendering by rendering to a non-visible part of the framebuffer's images, assuming there are any non-visible parts. For the default framebuffer, that is entirely out of your control: you don't get to determine how big the backing image data is; that's controlled by the OS, typically through the size of the window you're rendering to.
With a proper framebuffer object, you don't have to try to make the OS give you more space than is visible in the window. You can just do whatever.
The viewport exists to allow you to pick where you render to within the renderable area of a framebuffer. A framebuffer object exists to allow you to allocate and manage the images you render to, thus giving you complete control over all aspects of the framebuffer. This includes using non-visible color formats (integer formats, floating-point formats, for example), using multiple render targets at once, having direct control over multisample usage, and being able to read from the render targets as textures without having to do a slow copy operation from the framebuffer.
The two really don't have anything to do with one another.
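To make the contrast concrete, here's a minimal sketch, assuming a GL 3.0+ context and a window-sized default framebuffer; the sizes, formats and the fbo/sceneTexture names are only illustrative:

    /* Viewport: choose WHERE in the current framebuffer you draw. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);          /* default framebuffer */
    glViewport(0, 0, width / 2, height);           /* left half of the window */
    /* ... draw view A ... */
    glViewport(width / 2, 0, width / 2, height);   /* right half */
    /* ... draw view B ... */

    /* Framebuffer object: choose WHAT images you draw into. */
    GLuint fbo, sceneTexture;
    glGenTextures(1, &sceneTexture);
    glBindTexture(GL_TEXTURE_2D, sceneTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F,     /* a float format the window
                                                      could never give you */
                 1024, 1024, 0, GL_RGBA, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, sceneTexture, 0);
    glViewport(0, 0, 1024, 1024);                  /* the viewport still applies here */
    /* ... draw the mirror view; sceneTexture can now be sampled in later passes ... */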

Related

Create "Union" of two masking images in OpenGL

For a current 2D project I am rendering different objects onto a scene.
On top of this I render images which have a cut-out part, for example a transparent circle on a black image. When moving the cut-out circle, this creates the effect that the background objects are only visible within the transparent part.
Now I want to add a second masking layer with a different transparent shape on it and create a union of these two, showing the background images underneath each of the transparent parts.
The following images show an example illustration:
Background objects
Masking image 1
Masking image 2
Desired Result
For rendering, I am using libgdx with OpenGL 2.0 and scene2d as scenegraph. Basically, the background objects are added as actors onto a stage and then another Group-object rendering the masks.
Now I've tried setting the blending function while rendering the masks, but I can't figure out whether it's possible to "union" the alpha values of each mask. Is that even possible?
I've thought about using stencil buffers but I can't get this to work yet. I would be thankful if anybody could give me an approach for achieving this effect. Also, wouldn't using stencil buffers result in a pretty hard, chopped-off edge, since the mask is either 0 or 1?
A potential approach could be to use render-to-texture and compositing manually. I'm saying "potential", because there's hardly one best way here. Using built-in blending modes can certainly have some performance gains, but it limits you to the provided blending function parameters. While certainly doable with stuff like rendering the mask to the framebuffer alpha channel, and then using that with GL_DST_ALPHA/GL_ONE_MINUS_DST_ALPHA, it gets tricky once your layout gets more complex.
Render-to-texture, OTOH, has no such drawback. You're taking the control of the entire compositing function and have the freedom to do whatever processing you wish. To elaborate a bit, the rendering would work like this:
Set up a texture for the objects, and render your objects to it.
Set up a texture for the mask - this could be e.g. one-channel 8-bit. Retarget the rendering to it, and render the mask with a shader that outputs the mask value.
If you want to add another mask, you can either render more stuff to the same mask texture, or create yet another one.
Crucially, it doesn't matter which order the above operations are done, because they're completely separate and don't impact each other; in fact, if the mask doesn't change, you don't even need to re-render it.
Render a full-screen quad with your compositing shader, taking those two textures as inputs (uniforms).
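As a rough illustration of that last compositing step, here's a sketch in plain GL 2.0-style C (libgdx exposes the same calls through Gdx.gl); the texture names, the u_* uniforms and the assumption that the masks store 1 in their cut-out areas are mine, not anything required by the technique:

    /* Compositing shader: show the scene wherever EITHER mask is open (union),
     * black everywhere else. v_uv comes from a trivial pass-through vertex shader. */
    static const char *compositeFrag =
        "uniform sampler2D u_scene;\n"
        "uniform sampler2D u_maskA;\n"
        "uniform sampler2D u_maskB;\n"
        "varying vec2 v_uv;\n"
        "void main() {\n"
        "    float open = max(texture2D(u_maskA, v_uv).r,\n"
        "                     texture2D(u_maskB, v_uv).r);\n"
        "    gl_FragColor = vec4(texture2D(u_scene, v_uv).rgb * open, 1.0);\n"
        "}\n";

    /* Final pass: back to the default framebuffer, bind the textures rendered
     * in the earlier steps, and draw one full-screen quad with that shader. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glUseProgram(compositeProgram);   /* program built from compositeFrag */
    glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, sceneTex);
    glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, maskTexA);
    glActiveTexture(GL_TEXTURE2); glBindTexture(GL_TEXTURE_2D, maskTexB);
    glUniform1i(glGetUniformLocation(compositeProgram, "u_scene"), 0);
    glUniform1i(glGetUniformLocation(compositeProgram, "u_maskA"), 1);
    glUniform1i(glGetUniformLocation(compositeProgram, "u_maskB"), 2);
    /* ... draw the full-screen quad ... */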
So, to sum up, render-to-texture is a bit more flexible in terms of the compositing operation, gives you a way to do other post-effects like motion blur, and gives you more leeway in the order of operations. OTOH, it imposes a certain limit on the number of textures or passes, uses more memory (since you'll be keeping the intermediate textures around, as opposed to just working on one framebuffer), and might have a performance penalty.
If you decide to stick to the built-in blending, it gets a bit trickier. Typically you'll want to have alpha 0 as "no image", and 1 as "all image", but in this case it might be better to think about it as a mask, where 0 is "no mask" and 1 is "full mask". Then, the blend func for the mask could simply be GL_ONE/GL_ONE, and for the final image GL_ZERO/GL_ONE_MINUS_DST_ALPHA. That certainly restricts your ability to actually do blending and masking at the same time.
There exists a function called glBlendFuncSeparate that might make it a bit more flexible, but that's still not gonna give you as many possibilities as the method mentioned above.
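For what it's worth, a rough sketch of the destination-alpha route with raw GL calls. This is only one way to wire it up: I'm assuming the framebuffer actually has alpha bits, that the background has already been drawn with destination alpha at 0, and that each mask's cut-out shape is drawn with alpha 1, so destination alpha ends up holding "openness"; with that convention the factor pair for the cover pass differs from the one quoted above:

    /* Pass 1: accumulate the union of the cut-out areas into destination alpha.
     * Each cut-out shape is drawn with alpha = 1; GL_ONE/GL_ONE adds them up.
     * (If the background pass wrote alpha, clear alpha to 0 first.) */
    glEnable(GL_BLEND);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);   /* write alpha only */
    glBlendFunc(GL_ONE, GL_ONE);
    /* ... draw the cut-out shape of mask 1, then of mask 2 ... */

    /* Pass 2: lay the black cover over the background, keeping the background
     * wherever destination alpha says the masks are open. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA);
    /* ... draw a full-screen black quad ... */
    glDisable(GL_BLEND);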
Alternatively, actually learning how to set up the stencil buffer would solve that specific issue, since the stencil buffer is made specifically with this use in mind. There are plenty of tutorials online, but basically it amounts to a few calls of glStencil(Op|Func|Mask), optionally with disabling the writes to the color buffer with glColorMask.
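A minimal stencil sketch, assuming the context was created with a stencil buffer and the background has already been rendered; the shapes and values are illustrative:

    /* Pass 1: mark the union of the cut-out shapes in the stencil buffer. */
    glEnable(GL_STENCIL_TEST);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  /* stencil writes only */
    glStencilMask(0xFF);
    glClear(GL_STENCIL_BUFFER_BIT);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);            /* write 1 where drawn */
    /* ... draw cut-out shape 1, then cut-out shape 2 ... */

    /* Pass 2: draw the black cover only where the stencil is still 0,
     * i.e. outside every cut-out. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilMask(0x00);                                  /* don't modify stencil */
    glStencilFunc(GL_EQUAL, 0, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    /* ... draw the full-screen black quad ... */
    glDisable(GL_STENCIL_TEST);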

Partially render a 3D scene

I want to partially render a 3D scene; by this I mean I want to render some pixels and skip others. There are many non-realtime renderers that allow selecting a section that you want to render.
Example, fully rendered image (all pixels rendered) vs partially rendered:
I want to make the renderer not render part of a scene; in this case the renderer would just skip rendering those areas and save resources (memory/CPU).
If it's not possible to do in OpenGL, can someone suggest another open-source renderer? It could even be a software renderer.
If you're talking about rendering rectangular subportions of a display, you'd use glViewport and adjust your projection appropriately.
If you want to decide whether to render or not per pixel, especially with the purely fixed pipeline, you'd likely use a stencil buffer. That does pretty much exactly what the name says: you paint as though spraying through a stencil. It's a per-pixel mask, reliably at least 8 bits per pixel, and has been supported in hardware for at least the last fifteen years. Amongst other uses, it used to be how you could render a stipple without paying for the 'professional' cards that officially supported glPolygonStipple.
With GLSL there is also the discard statement, which immediately ends processing of a fragment and produces no output. The main caveat is that on some GPUs, especially embedded GPUs, the advice is to prefer returning a colour with an alpha of 0 (assuming that will have no effect according to your blend mode) if you can avoid a conditional by doing so. Otherwise, conditionals and discards can have a strong negative effect on parallelism, since fragment shaders are usually implemented by SIMD units working on multiple pixels simultaneously, so any time a shader program looks like it might diverge there can be a [potentially unnecessary] splitting of tasks. It's very GPU-dependent stuff though, so be sure to profile in real life.
EDIT: as pointed out in the comments, using a scissor rectangle would be smarter than adjusting the viewport. That both means you don't have to adjust your projection and, equally, that rounding errors in any adjustment can't possibly create seams.
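A tiny sketch of the scissor approach (the rectangle coordinates are whatever region you want; nothing outside it is touched, including by clears):

    /* Restrict all rasterization (and clears) to a sub-rectangle of the window. */
    glEnable(GL_SCISSOR_TEST);
    glScissor(x, y, regionWidth, regionHeight);   /* lower-left origin, in pixels */
    /* ... clear and draw the scene as usual ... */
    glDisable(GL_SCISSOR_TEST);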
It's also struck me that an alternative to using the stencil for a strict binary test is to pre-populate the z-buffer with the closest possible value on pixels you don't want redrawn; use the colour mask to draw to the depth buffer only.
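A sketch of that depth trick, assuming a standard depth setup; the "skip" geometry is whatever covers the pixels you don't want redrawn:

    /* Pass 1: stamp the nearest possible depth over the pixels to skip,
     * without touching the colour buffer. */
    glEnable(GL_DEPTH_TEST);
    glClear(GL_DEPTH_BUFFER_BIT);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthFunc(GL_ALWAYS);
    glDepthRange(0.0, 0.0);          /* everything drawn now lands at depth 0 */
    /* ... draw the "don't render here" shapes ... */
    glDepthRange(0.0, 1.0);

    /* Pass 2: render the scene normally; fragments in the stamped areas fail
     * the depth test and are discarded. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthFunc(GL_LESS);
    /* ... draw the scene ... */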
You can split the scene and render it in parts; this way you will render with less memory consumption and you can simply skip unnecessary parts or regions.

Multiple render targets in one FBO with different size textures?

Can I have textures of different sizes attached to a single FBO, and then use those for multiple render targets? Will I need to do anything special with glViewport to make this happen? Suppose I have a 1024x1024 texture for COLOR_ATTACHMENT0 and a 512x512 texture for COLOR_ATTACHMENT1, and I call glDrawBuffers(2, {COLOR_ATTACHMENT0, COLOR_ATTACHMENT1}) (I realize that syntax is incorrect, but you get the idea...), will it render the full scene in both attachments? I'm chiefly thinking the utility of this would be the ability to render a scene at full quality and a down-sampled version at one go, perhaps with certain masks or whatever so it could be used in an effects compositor/post-processing. Many thanks!
Since GL 3.0 you can actually attach textures of different sizes, but you must be aware that the rendered area will be that of the smallest texture. Read here:
http://www.opengl.org/wiki/Framebuffer_Object
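For illustration, a sketch of such an FBO with the sizes from the question (texture creation elided; tex1024 and tex512 are assumed to already exist):

    /* One FBO, two colour attachments of different sizes. From GL 3.0 this is
     * framebuffer-complete, but rendering is limited to the intersection of
     * the attachment sizes, i.e. 512x512 here. */
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex1024, 0);   /* 1024x1024 */
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                           GL_TEXTURE_2D, tex512, 0);    /*  512x512  */

    const GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, bufs);
    glViewport(0, 0, 512, 512);   /* no point in a larger viewport */
    /* ... render once; the fragment shader writes one output per attachment ... */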

How can I create a buffer in (video) memory to draw to using OpenGL?

OpenGL uses two buffers: one is used to display on the screen, and the other is used to do rendering. They are swapped to avoid flickering. (Double buffering.)
Is it possible to create another 'buffer' (in video memory, I assume), so that drawing can be done elsewhere? The reason I ask is that I have several SFML windows, and I want to be able to instruct OpenGL to draw to an independent buffer for each of them. Currently I have no control over the rendering buffer. There is one for ALL (not each) of the windows. Once you call window.Display(), the contents of this buffer are copied to another buffer which appears inside a window. (I think that's how it works.)
The term you're looking for is "off-screen rendering". There are two methods to do this with OpenGL.
One is to use a dedicated off-screen drawable provided by the underlying graphics layer of the operating system. This is called a PBuffer. A PBuffer can be used very much like a window that's not mapped to the screen. PBuffers were the first robust method to implement off-screen rendering using OpenGL; they were introduced in 1998. Since PBuffers are fully featured drawables, an OpenGL context can be attached to them.
The other method is to use an off-screen render target provided by OpenGL itself rather than by the operating system. This is called a framebuffer object (FBO). FBOs require a fully functional OpenGL context to work, but an FBO cannot provide the drawable that an OpenGL context needs to be attached to in order to be functional. So the main use for FBOs is to render intermediate pictures to them, which are later used when rendering the pictures that become visible on screen. Luckily, the drawable the OpenGL context is bound to may be hidden for an FBO to work, so a regular window that's hidden from the user can be used.
If your desire is pure off-screen rendering, a PBuffer still is a very viable option, especially on GLX/X11 (Linux) where they're immediately available without having to tinker with extensions.
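For the FBO route, a minimal creation sketch (GL 3.0 core; with the older EXT_framebuffer_object extension the calls carry an EXT suffix); the 800x600 size and the variable names are placeholders:

    /* Create an off-screen render target: a colour texture plus a depth
     * renderbuffer, bundled into a framebuffer object. */
    GLuint fbo, colorTex, depthRb;

    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 800, 600, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 800, 600);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        /* handle the error */
    }

    /* Everything drawn now lands in colorTex instead of the window. */
    glViewport(0, 0, 800, 600);
    /* ... render ... */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);   /* back to the window's framebuffer */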
Look into Frame Buffer Objects (FBOs).
If you have a third buffer you lose the value of double buffering. Double buffering works because you are changing the pointer to the pixel array sent to the display device. If you include a third buffer, you'll have to copy into each buffer.
I haven't worked with OpenGL in a while, but wouldn't it serve better to render into a texture (bitmap)? That lets each implementation of OpenGL choose how it wants to get that bitmap from memory into the video buffer for the appropriate region of the screen.

How to apply Image Processing to OpenGL?

Sorry if the question is too general, but what I mean is this: in OpenGL, before you perform a buffer swap to make the buffer visible on the screen, there should be some function calls to perform some image processing. I mean, like blurring the screen, twisting a portion of the screen, etc., or performing some interesting "touch up" like bloom, etc.
What are the keywords and sets of functions of OpenGL I should be looking for if I want to do what I have said above?
Since you can't, in general, read from and write to the framebuffer in the same operation (other than simple blending), you need to render to textures using FBOs (framebuffer objects), then do various processing on those, then do the final pass onto the real framebuffer.
That's the main part you need to understand. Given that, you can sketch your "render tree" on paper, i.e. which parts of the scene go where, what your effects are, and their input and output data.
From there on, you just render one or more big quads covering the entire screen with a specific fragment shader that performs your effect, using textures as input and one or more framebuffer objects as output.
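A sketch of what one such pass can look like per frame; sceneFbo, sceneTex, blurProgram and fullscreenQuadVao are assumed to have been created beforehand, and the shader itself (blur, bloom, twist, ...) is where the actual effect lives:

    /* Pass 1: render the scene into an off-screen texture via an FBO. */
    glBindFramebuffer(GL_FRAMEBUFFER, sceneFbo);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ... draw the scene ... */

    /* Pass 2: draw a full-screen quad to the real framebuffer, running an
     * effect shader over the scene texture. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glClear(GL_COLOR_BUFFER_BIT);
    glUseProgram(blurProgram);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, sceneTex);
    glUniform1i(glGetUniformLocation(blurProgram, "u_scene"), 0);
    glBindVertexArray(fullscreenQuadVao);
    glDrawArrays(GL_TRIANGLES, 0, 6);

    /* ... then swap buffers as usual ... */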