How to apply image processing in OpenGL?

Sorry if the question is too general, but what I mean is this: in OpenGL, before you swap buffers to make the back buffer visible on screen, there should be some function calls that perform image processing on it, such as blurring the screen, twisting a portion of it, or applying "touch-ups" like bloom.
What keywords and sets of OpenGL functions should I be looking for if I want to do what I described above?

Since you can't, in general, read from and write to the framebuffer in the same operation (other than simple blending), you need to render to textures using FBOs (framebuffer objects), then do various processing on those, then do the final pass onto the real framebuffer.
That's the main part you need to understand. Given that, you can sketch your "render tree" on paper, i.e. which parts of the scene go where, what your effects are, and their input and output data.
From there on, you just render one or more big quads covering the entire screen with a specific fragment shader that performs your effect, using textures as input and one or more framebuffer objects as output.
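As a minimal sketch of that flow, assuming a GL 3.3+ context is already set up (e.g. with GLFW and GLAD) and that the shader programs and the full-screen quad VAO are created elsewhere; all names here are only illustrative:

```cpp
// Sketch: render the scene into a texture via an FBO, then run a
// post-processing fragment shader over a full-screen quad.
#include <glad/glad.h>

GLuint colorTex = 0, fbo = 0;

void createRenderTarget(int width, int height) {
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);
    // A depth renderbuffer would also be attached here for 3D scenes.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void renderFrame(GLuint sceneProgram, GLuint postProgram, GLuint quadVao) {
    // Pass 1: draw the scene into the texture.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glUseProgram(sceneProgram);
    // ... draw scene geometry ...

    // Pass 2: draw a full-screen quad to the default framebuffer,
    // sampling the scene texture in a fragment shader that applies
    // the effect (blur, bloom, distortion, ...).
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glUseProgram(postProgram);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glUniform1i(glGetUniformLocation(postProgram, "uScene"), 0);
    glBindVertexArray(quadVao);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
```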


Can you apply transformation matrices after running your pixel shaders?

I'm working with images, and I was tasked with extending the set of post-processing effects we can apply to them. Certain required effects need pixel data for calculations, so I created a few pixel shaders to do the job, and they work fine.
The problem is that the images need to be transformable, i.e. they need to be able to rotate, zoom in and out, pan, etc. The creation of all these textures and the post-processing algorithms are slowing the program down. I need a way to do these transformations without completely redoing every effect. Some of the images the program works on are multi-gigabyte, so I can't really do the obvious thing of caching the transformed images for later use.
I'm looking for some sort of reasonable solution here. I'm not a graphics guy, but I can't imagine that similar programs with post-processing redo the post processing every time you pan. My best guess is saving off the last texture and applying the transformations on that, but I don't really know how to do that.
By saying "images" I assume you mean 2D textures you load and apply some post-pro effects. If that's the case just create a render target and render to that with all the post-effects.
Then rotate/pan a quad with that texture attached (a simple texture-fetching fragment shader is all that's required). Re-render the texture only when the post-processing parameters change.
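A rough sketch of that idea, assuming the expensive post-processing result already lives in a texture; processedTex, quadProgram, quadVao and the uniform names are illustrative, and GLM handles the transform:

```cpp
// Sketch: draw the cached, post-processed texture on a quad whose
// transform handles pan/zoom/rotate. Only re-run the expensive
// post-processing when its parameters change.
#include <glad/glad.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

void drawTransformedImage(GLuint quadProgram, GLuint quadVao,
                          GLuint processedTex,
                          float panX, float panY, float zoom, float angle) {
    glm::mat4 model(1.0f);
    model = glm::translate(model, glm::vec3(panX, panY, 0.0f));
    model = glm::rotate(model, angle, glm::vec3(0.0f, 0.0f, 1.0f));
    model = glm::scale(model, glm::vec3(zoom, zoom, 1.0f));

    glUseProgram(quadProgram);
    glUniformMatrix4fv(glGetUniformLocation(quadProgram, "uModel"),
                       1, GL_FALSE, glm::value_ptr(model));
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, processedTex);
    glUniform1i(glGetUniformLocation(quadProgram, "uImage"), 0);
    glBindVertexArray(quadVao);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
```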
If, on the other hand, you have a 3D scene, then there is no way around it; you have to render it each frame.
If my assumptions are wrong, it would be best if you provided more details on your case.

Create "Union" of two masking images in OpenGL

For a current 2D project I am rendering different objects in a scene.
On top of this I render images that have a cut-out part, for example a transparent circle on a black image. When the cut-out circle moves, this creates the effect that the background objects are, of course, only visible within the transparent part.
Now I want to add a second masking layer with a different transparent shape on it and create a union of these two, showing the background images underneath each of the transparent parts.
The following images illustrate an example: the background objects, masking image 1, masking image 2, and the desired result.
For rendering, I am using libgdx with OpenGL 2.0 and scene2d as the scene graph. Basically, the background objects are added as actors onto a stage, and another Group object renders the masks.
I've tried setting the blending function while rendering the masks, but I can't figure out whether it's even possible to "unionize" the alpha values of the masks.
I've thought about using stencil buffers but can't get that to work yet. I would be thankful if anybody could suggest an approach to achieve this effect. Also, using stencil buffers would result in a pretty hard, jagged edge since the mask is either 0 or 1, correct?
A potential approach is to use render-to-texture and composite manually. I'm saying "potential" because there's hardly one best way here. Using the built-in blending modes can certainly bring some performance gains, but it limits you to the provided blend-function parameters. While the effect is certainly doable with something like rendering the mask into the framebuffer's alpha channel and then blending with GL_DST_ALPHA/GL_ONE_MINUS_DST_ALPHA, it gets tricky once your layout grows more complex.
Render-to-texture, OTOH, has no such drawback. You take control of the entire compositing function and have the freedom to do whatever processing you wish. To elaborate a bit, the rendering would work like this:
Set up a texture for the objects, and render your objects to it.
Set up a texture for the mask - this could be e.g. one-channel 8-bit. Retarget the rendering to it, and render the mask with a shader that outputs the mask value.
If you want to add another mask, you can either render more stuff to the same mask texture, or create yet another one.
Crucially, it doesn't matter in which order the above operations are done, because they're completely separate and don't affect each other; in fact, if the mask doesn't change, you don't even need to re-render it.
Render a full-screen quad with your compositing shader, taking those two textures as inputs (uniforms).
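A possible compositing fragment shader for that last step, shown here as a GLSL source string (names are illustrative; the "union" is simply the max of the two mask values, which is one reasonable choice):

```cpp
// Sketch of the compositing pass: combine the scene texture with the
// union of two masks.
static const char* compositeFrag = R"(
#version 330 core
in vec2 vUv;
out vec4 fragColor;
uniform sampler2D uScene;   // rendered background objects
uniform sampler2D uMaskA;   // first mask, single channel
uniform sampler2D uMaskB;   // second mask, single channel

void main() {
    float maskA = texture(uMaskA, vUv).r;
    float maskB = texture(uMaskB, vUv).r;
    // Union of the two masks: visible wherever either mask is open.
    float visibility = max(maskA, maskB);
    vec3 scene = texture(uScene, vUv).rgb;
    // Outside the union, fall back to the cover colour (black here).
    fragColor = vec4(mix(vec3(0.0), scene, visibility), 1.0);
}
)";
```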
So, to sum up, render-to-texture is a bit more flexible in terms of the compositing operation, gives you a way to do other post-effects like motion blur, and gives you more leeway in the order of operations. OTOH, it imposes a certain limit on the number of textures or passes, uses more memory (since you'll be keeping the intermediate textures around, as opposed to just working on one framebuffer), and might have a performance penalty.
If you decide to stick to the built-in blending, it gets a bit trickier. Typically you'll want to have alpha 0 as "no image", and 1 as "all image", but in this case it might be better to think about it as a mask, where 0 is "no mask" and 1 is "full mask". Then, the blend func for the mask could simply be GL_ONE/GL_ONE, and for the final image GL_ZERO/GL_ONE_MINUS_DST_ALPHA. That certainly restricts your ability to actually do blending and masking at the same time.
There is also a function called glBlendFuncSeparate that might make it a bit more flexible, but that's still not going to give you as many possibilities as the render-to-texture method above.
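For illustration, one way such a blend-based setup could look, following the GL_DST_ALPHA idea mentioned earlier. Here alpha 1 means "visible through the cut-out", the framebuffer must have an alpha channel, and the draw helpers are placeholders:

```cpp
// Sketch: build the mask union in destination alpha with blending,
// then draw the background objects weighted by that alpha.
#include <glad/glad.h>

void drawMasks();        // each mask must output alpha = 1 inside its
                         // cut-out and 0 elsewhere (e.g. 1 - image alpha)
void drawBackground();   // draws the background objects

void renderMaskedScene() {
    // 1. Clear colour to the cover colour (black) and alpha to 0.
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    glEnable(GL_BLEND);

    // 2. Accumulate each mask into destination alpha only:
    //    RGB keeps the cover colour, alpha is added up (the union).
    glBlendFuncSeparate(GL_ZERO, GL_ONE, GL_ONE, GL_ONE);
    drawMasks();

    // 3. Draw the background objects weighted by the accumulated mask:
    //    visible where alpha is 1, covered where alpha is 0.
    glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
    drawBackground();

    glDisable(GL_BLEND);
}
```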
Alternatively, actually learning how to set up the stencil buffer would solve that specific issue, since the stencil buffer was made with exactly this use in mind. There are a lot of tutorials online, but it basically amounts to a few calls to glStencilOp/glStencilFunc/glStencilMask, optionally combined with disabling writes to the color buffer via glColorMask.
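A sketch of that stencil variant, assuming a stencil attachment exists and the draw helpers are placeholders (as you noted, the resulting edge will be binary, i.e. hard):

```cpp
// Sketch: the same union done with the stencil buffer.
#include <glad/glad.h>

void drawMasks();        // draws both cut-out shapes
void drawBackground();   // draws the background objects
void drawCover();        // draws the full-screen cover image

void renderWithStencil() {
    glStencilMask(0xFF);                 // make sure the clear can write
    glClear(GL_COLOR_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

    // Pass 1: write 1 into the stencil wherever a cut-out shape is drawn,
    // without touching the colour buffer.
    glEnable(GL_STENCIL_TEST);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    drawMasks();

    // Pass 2: draw the cover only where the stencil is still 0,
    // and the background objects only where it is 1.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilMask(0x00);                 // stop writing to the stencil
    glStencilFunc(GL_NOTEQUAL, 1, 0xFF);
    drawCover();
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    drawBackground();

    glDisable(GL_STENCIL_TEST);
}
```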

Displaying a framebuffer in OpenGL

I've been learning a bit of OpenGL lately, and I just got to the Framebuffers.
So, by my current understanding, if you have a framebuffer of your own and you want to draw its color buffer onto the window, you first need to draw a quad and then wrap the texture over it? Is that right? Or is there a glDrawArrays()/glDrawElements()-style call for framebuffers?
It seems a bit odd (clunky? hackish?) to me that you have to wrap a texture over a quad in order to draw the framebuffer. This doesn't have to be done with the default framebuffer; or is that done behind your back?
Well, the main point of framebuffer objects is to render scenes to buffers that will not get displayed but rather reused somewhere, as a source of data for some other operation (shadow maps, high-dynamic-range processing, reflections, portals...).
If you want to display it, why do you use a custom framebuffer in the first place?
Now, as @CoffeeandCode comments, there is indeed a glBlitFramebuffer call that allows transferring pixels from one framebuffer to another. But before you go ahead and use that call, ask yourself why you need that extra step. It's not a free operation...
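For reference, a minimal sketch of such a blit, assuming a GL 3.0+ context and that fbo is a complete framebuffer of the same size as the window:

```cpp
// Sketch: copy the colour buffer of an FBO to the default framebuffer.
#include <glad/glad.h>

void blitToScreen(GLuint fbo, int width, int height) {
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo); // source: your FBO
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);   // destination: the window
    glBlitFramebuffer(0, 0, width, height,       // source rectangle
                      0, 0, width, height,       // destination rectangle
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
```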

Partially render a 3D scene

I want to partially render a 3D scene; by this I mean I want to render some pixels and skip others. Many non-realtime renderers allow selecting a region of the image to render.
Example: a fully rendered image (all pixels rendered) vs. a partially rendered one.
I want to make the renderer skip part of the scene; in that case the renderer would simply not render those areas and save resources (memory/CPU).
If this isn't possible in OpenGL, can someone suggest another open-source renderer? It could even be a software renderer.
If you're talking about rendering rectangular subportions of a display, you'd use glViewport and adjust your projection appropriately.
If you want to decide whether to render or not per pixel, especially with the purely fixed pipeline, you'd likely use a stencil buffer. That does pretty much what the name says: you paint as though spraying through a stencil. It's a per-pixel mask, reliably at least 8 bits per pixel, and it has been supported in hardware for at least the last fifteen years. Amongst other uses, it used to be how you could render a stipple without paying for the 'professional' cards that officially supported glPolygonStipple.
With GLSL there is also the discard statement, which immediately ends processing of a fragment and produces no output. The main caveat is that on some GPUs, especially embedded GPUs, the advice is to prefer returning a colour with an alpha of 0 (assuming that will have no effect under your blend mode) if you can avoid a conditional by doing so. Conditionals and discards can otherwise have a strong negative effect on parallelism, as fragment shaders are usually implemented by SIMD units processing multiple pixels simultaneously, so any time a shader program looks like it might diverge there can be a (potentially unnecessary) splitting of tasks. This is very GPU-dependent stuff though, so be sure to profile in real life.
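A fragment-shader sketch of the discard approach, shown as a GLSL source string (the mask texture and uniform names are illustrative):

```cpp
// Sketch: a fragment shader that skips pixels based on a mask texture.
static const char* maskedFrag = R"(
#version 330 core
in vec2 vUv;
out vec4 fragColor;
uniform sampler2D uScene;
uniform sampler2D uMask;   // 0 = skip this pixel, 1 = render it

void main() {
    if (texture(uMask, vUv).r < 0.5)
        discard;                       // produce no output for this pixel
    fragColor = texture(uScene, vUv);
}
)";
```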
EDIT: as pointed out in the comments, using a scissor rectangle would be smarter than adjusting the viewport. That means you don't have to adjust your projection and, equally, that rounding errors in any adjustment can't possibly create seams.
It's also struck me that an alternative to using the stencil for a strict binary test is to pre-populate the z-buffer with the closest possible value on pixels you don't want redrawn; use the colour mask to draw to the depth buffer only.
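For rectangular regions, the scissor test is only a couple of calls (a sketch; the draw call is a placeholder):

```cpp
// Sketch: restrict rendering (including clears) to a rectangular
// sub-region with the scissor test.
#include <glad/glad.h>

void renderRegion(int x, int y, int w, int h) {
    glEnable(GL_SCISSOR_TEST);
    glScissor(x, y, w, h);   // pixels outside this rectangle are untouched
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... draw the scene; fragments outside the rectangle are discarded ...
    glDisable(GL_SCISSOR_TEST);
}
```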
You can split the scene and render it in parts - this way you will render with less memory consumption and you can simply skip unnecessary parts or regions. Also read this

Mouse-picking using off-screen rendering?

I have a 3D scene with a lot of simple objects (possibly a huge number of them), so I think it's not a very good idea to use ray-tracing for picking objects with the mouse.
I'd like to do something like this:
render all these objects into some OpenGL off-screen buffer, using a pointer to the current object instead of its color
render the same scene onto the screen, using real colors
when the user picks a point at (x, y) screen coordinates, I take the value from the off-screen buffer (at the corresponding position) and get a pointer to the object
Is this possible? If yes, what type of buffer should I choose for "drawing with pointers"?
I suppose you can render in two passes: first to a buffer or texture holding the data you need for picking, and then, in the second pass, the data to be displayed. I am not really familiar with OpenGL, but in DirectX you can do it like this: http://www.two-kings.de/tutorials/dxgraphics/dxgraphics16.html. You could then find a way to analyse the texture. Keep in mind that you are rendering the data twice, which will not necessarily double your render time (as you do not need to apply all your shaders and effects), but it will increase it quite a lot. Also, each frame you are essentially sending at least 2 MB of data (if you go for 1 byte per pixel on a 2K monitor) from GPU to CPU, and that might change if you have more than 256 objects on screen.
Edit: Here is how to do the same with OpenGL, although I cannot verify that the tutorial is correct: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-14-render-to-texture/ (there are also many more if you look around on Google).
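For completeness, a rough OpenGL sketch of the read-back step. An integer object ID is a safer payload than a raw pointer; this assumes a GL 3.0+ context and that pickingFbo has an R32UI colour attachment that a picking shader fills with per-object IDs (all names illustrative):

```cpp
// Sketch: read the object ID under the mouse from an off-screen
// picking framebuffer.
#include <glad/glad.h>

unsigned int pickObject(GLuint pickingFbo, int mouseX, int mouseY,
                        int windowHeight) {
    glBindFramebuffer(GL_READ_FRAMEBUFFER, pickingFbo);
    glReadBuffer(GL_COLOR_ATTACHMENT0);

    unsigned int id = 0;
    // GL's origin is the lower-left corner, so flip the mouse Y.
    glReadPixels(mouseX, windowHeight - mouseY - 1, 1, 1,
                 GL_RED_INTEGER, GL_UNSIGNED_INT, &id);

    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
    return id;   // 0 could be reserved for "no object"
}
```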