How to apply a post-processing shader to part of a scene - OpenGL

I have the following rendering pipeline:
[Input texture that changes every frame] -> [3D content rendered on top of changing texture] ->
[Post processing shader] -> [Composited texture with post processing applied to input texture + 3D content].
I want to achieve this pipeline:
[Input texture that changes every frame] -> [3D content rendered on top of changing texture]+[Post processing shader applied to 3D content] -> [Composited texture with post processing only applied to 3D content].
What's the best way of achieving this?
I was thinking about rendering the 3D content to an offscreen texture (with transparent non-rendered parts), then applying the post processing shader to this texture, and then rendering the post processed texture with transparency on top of the background texture.
Is there any better way of doing this? If there isn't, how would one go about rendering on a texture with alpha and then compositing it over a previous texture?

If your post-processing is something simple like white-balancing or HDR, then you can first render the 3D content, apply the post-processing, and then render your background texture with depth-test enabled and at an appropriate maximum depth.
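For example (just a sketch of that ordering; drawScene3D, drawFullScreenQuad and backgroundTex are placeholder names standing in for your own code):

    // Draw the 3D content first; it writes depth as usual. Simple per-pixel
    // post-processing (white balance, tone mapping) can live in its shaders.
    glEnable(GL_DEPTH_TEST);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene3D();

    // Background quad placed at the far plane: it only passes the depth test
    // where no 3D fragment was written (depth is still at the clear value 1.0).
    glDepthFunc(GL_LEQUAL);
    glDepthMask(GL_FALSE);
    drawFullScreenQuad(backgroundTex, /*depth=*/1.0f);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);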
If your post-processing is more complicated, like a blur or something similar, then I'm afraid that rendering to a second texture is the only solution. Rendering into an RGBA texture is straightforward, just like into an RGB one. Just make sure that you use pre-multiplied alpha throughout your pipeline, as otherwise you won't be able to composite the textures correctly. Before rendering, clear your texture with a (0,0,0,0) color.
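A minimal sketch of that second approach, assuming premultiplied alpha everywhere; contentFbo, postTex, backgroundTex, drawScene3D and drawFullScreenQuad are placeholder names:

    // Pass 1: render the 3D content into an RGBA texture attached to an FBO,
    // cleared to (0,0,0,0) so untouched pixels stay fully transparent.
    glBindFramebuffer(GL_FRAMEBUFFER, contentFbo);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene3D();   // fragment shader outputs premultiplied colors (rgb *= a)

    // ... run the post-processing pass on the FBO's color texture here,
    // writing the result into postTex ...

    // Pass 2: composite the post-processed content over the background using
    // the premultiplied-alpha "over" operator.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    drawFullScreenQuad(backgroundTex);   // background first, no blending
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    drawFullScreenQuad(postTex);
    glDisable(GL_BLEND);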

Related

Rendering scene on texture necessary in order to do post processing?

Is it necessary to render a scene to a texture which is then used on a quad covering the whole frame in order to be able to do post-processing? Is it because otherwise you would not have access to the rendered image as a whole, since the shader program would write the image straight to the screen without any chance to edit it in between?
Is it necessary to render a scene to a texture which is then being used on a quad
Yes and no. Yes, you need to render the scene to a texture. But with Compute Shaders, you don't have to render the texture to a quad.
The reason why you need to render to a texture is that you usually need the fully rendered image for the post-processing effect. That is not possible in the first render pass, since a fragment shader has no access to neighboring fragments and also cannot see fragments that are written after the current one.
As @Spektre noted in a comment, the second major reason render to texture is needed is that the OpenGL pipeline cannot read its current render target, so the processing has to be separated into passes in order to read back what was rendered.
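As a rough illustration of the compute-shader variant (OpenGL 4.3+ only; sceneTex, postProcessComputeProgram, width and height are placeholder names), the rendered image can be processed in place without ever drawing a quad:

    // Post-process the already-rendered image in place with a compute shader
    // that uses imageLoad/imageStore and 16x16 work groups.
    glUseProgram(postProcessComputeProgram);
    glBindImageTexture(0, sceneTex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA8);
    glDispatchCompute((width + 15) / 16, (height + 15) / 16, 1);
    glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);   // make the writes visible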

Direct3D11 Pixel Shader Interpolation

I'm trying to create a simple transition between two colors. So a render of a quad with the top edge color being #2c2b35 and the bottom color being #1a191f. When I put it through a simple pixel shader that just applies the colors to the vertices and interpolates them, I get visible color banding, and I'm wondering if there is a way to eliminate that effect without some advanced pixel-shader dithering technique.
I'm attaching a picture of a render where the left part is being rendered in my app and part on the right is rendered in Photoshop with "Dithered" option checked.
Thanks in advance.
Banding is a problem that every rendering engine has to deal with at some point.
Before you start implementing dithering, you should first confirm you are following a proper sRGB pipeline. (https://en.wikipedia.org/wiki/SRGB)
To be sRGB correct:
Swapchain and render surfaces in RGBX8 have to contain sRGB values. The simplest way is to create a render target view with a XXXX_UNORM_SRGB format (a sketch follows this list).
Textures have to use a UNORM_SRGB format view when appropriate (albedo maps) and a plain UNORM view when not (normal maps).
Shader computations like lighting have to be done in linear space.
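For the first point, a hedged Direct3D 11 sketch (device and backBuffer are placeholders, and it assumes the swap chain was created so that an sRGB view of the back buffer is allowed):

    // Create an sRGB view of the back buffer so that linear shader output is
    // converted to sRGB automatically on write (requires <d3d11.h>).
    D3D11_RENDER_TARGET_VIEW_DESC rtvDesc = {};
    rtvDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;
    rtvDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
    rtvDesc.Texture2D.MipSlice = 0;

    ID3D11RenderTargetView* rtv = nullptr;
    HRESULT hr = device->CreateRenderTargetView(backBuffer, &rtvDesc, &rtv);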
Textures and render targets with the right format perform the conversion for you on read and write, but for constants you have to do it manually. For example, in #2c2b35 the channel 0x2c is 44 in sRGB (0.1725 normalized), which is roughly 0.025 as a linear value.
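That is just the standard sRGB transfer function; a small C++ helper for checking constants on the CPU could look like this:

    #include <cmath>

    // Standard sRGB -> linear conversion for one channel in [0, 1].
    float srgbToLinear(float c)
    {
        return (c <= 0.04045f) ? c / 12.92f
                               : std::pow((c + 0.055f) / 1.055f, 2.4f);
    }

    // 0x2c = 44, and 44 / 255 = 0.1725 in sRGB -> roughly 0.025 linear.
    float r = srgbToLinear(44.0f / 255.0f);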
To do a gradient, you have to do it in linear space in the shader and let the render target convert it back to sRGB on write.

Get pixel behind the current pixel

I'm coding a program in C++ with GLUT, rendering a 3D model in a window.
I'm using glReadPixels to get the image of the scene displayed in the window.
And I would like to know how I can get, for a specific pixel (x, y), not directly its color but the color of the next object behind.
If I render a blue triangle, and a red triangle in front of it, glReadPixels gives me red colors from the red triangle.
I would like to know how I can get the colors from the blue triangle, the one I would get from glReadPixels if the red triangle wasn't here.
The default framebuffer only retains the topmost color. To get what you're suggesting would require a specific rendering pipeline.
For instance you could:
Create an offscreen framebuffer of the same dimensions as your target viewport
Render a depth-only pass to the offscreen framebuffer, storing the depth values in an attached texture
Re-render the scene with a special shader that only draws fragments whose post-transformation Z value is GREATER than the value stored in the previously recorded depth texture
The final result of the last render should be the original scene with the top layer stripped off.
Edit:
It would require only a small amount of new code to create the offscreen framebuffer and render a depth-only version of the scene to it, and you could use your existing rendering pipeline in combination with that to execute steps 1 and 2.
However, I can't think of any way you could then re-render the scene to get the information you want in step 3 without a shader, because it needs both the standard depth test and a test against the provided depth texture. That doesn't mean there isn't one, just that I'm not well enough versed in GL tricks to think of it.
I can think of other ways of trying to accomplish the same task for specific points on the screen by fiddling with the rendering system, but they're all far more convoluted than just writing a shader.
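For completeness, a rough sketch of what the step 3 fragment shader could look like (embedded here as a C++ string; firstDepthTex, viewportSize and vColor are made-up names, and the depth texture from step 2 must be bound with depth comparison disabled):

    const char* peelFragmentSrc = R"(
        #version 330 core
        uniform sampler2D firstDepthTex;   // depth recorded by the depth-only pass
        uniform vec2      viewportSize;
        in  vec3 vColor;
        out vec4 fragColor;
        void main()
        {
            // Depth of the frontmost surface at this pixel.
            float frontDepth = texture(firstDepthTex, gl_FragCoord.xy / viewportSize).r;
            // Reject anything at or in front of that depth; the regular depth
            // test then keeps the nearest of the remaining (hidden) fragments.
            if (gl_FragCoord.z <= frontDepth + 1e-5)
                discard;
            fragColor = vec4(vColor, 1.0);
        }
    )";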

'Render to Texture' and multipass rendering

I'm implementing a pencil rendering algorithm. First, I should render the model using Phong shading to determine the intensity. Then I should map the texture to the rendered result.
I'm going to do multipass rendering with OpenGL and Cg shaders. Someone told me that I should try 'render to texture'. But I don't know how to use this method to get the effect I want. In my opinion, we should first use this method to render the mesh, which gives us a 2D texture of the whole scene. Now that we have drawn content to the framebuffer, the next step is to render to the screen, right? But how do I use the rendered texture and do some post-processing on it? Can anybody show me some code or links about it?
I made this tutorial, it might help you : http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-14-render-to-texture/
However, using RTT is overkill for what you're trying to do, I think. If you need the fragment's intensity in the texture, well, you already have it in your shader, so there is no need to render it twice...
Maybe this could be useful ? http://www.ozone3d.net/demos_projects/toon-snow.php
render to a texture with Phong shading
Draw that texture to the screen again in a full screen textured quad, applying a shader that does your desired operation.
I'll assume you need clarification on RTT and using it.
Essentially, your screen is a framebuffer (very similar to a texture); it's a 2D image at the end of the day. The idea of RTT is to capture that 2D image. To do this, the best way is to use a framebuffer object (FBO) (Google "framebuffer object" and click on the first link). From here, you have a 2D picture of your scene (you should check that it actually is what you want by saving it to an image file).
Once you have the image, you'll set up a 2D view and draw that image back onto the screen with an 800x600 quadrilateral or what-have-you. When drawing, you use a fragment program (shader), which transforms the brightness of the image into a greyscale value. You can output this, or you can use it as an offset to another, "pencil" texture.
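A possible fragment shader for that second pass, again only a sketch (sceneTex, pencilTex and uv are made-up names):

    const char* pencilFragmentSrc = R"(
        #version 330 core
        uniform sampler2D sceneTex;    // Phong-shaded scene from the RTT pass
        uniform sampler2D pencilTex;   // strip of pencil strokes, darkest at u = 0
        in  vec2 uv;
        out vec4 fragColor;
        void main()
        {
            vec3  shaded    = texture(sceneTex, uv).rgb;
            float intensity = dot(shaded, vec3(0.299, 0.587, 0.114));   // greyscale
            // Use the intensity to pick a pencil stroke of matching darkness.
            fragColor = texture(pencilTex, vec2(intensity, uv.y));
        }
    )";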

Is it possible to save the current viewport and then redraw the saved viewport in OpenGL and C++ during the next draw cycle?

I want to know if I can save a bitmap of the current viewport in memory and then on the next draw cycle simply draw that memory to the viewport?
I'm plotting a lot of data points as a 2D scatter plot in a 256x256 area of the screen. I could in theory re-render the entire plot each frame, but that would require me to store a lot of data points (50K-100K), most of which would be redundant, since a 256x256 box only has ~65K pixels.
So instead of redrawing and rendering the entire scene at time t I want to take a snapshot of the scene at t-1 and draw that first, then I can draw updates on top of that.
Is this possible? If so how can I do it, I've looked around quite a bit for clues as to how to do this but I haven't been able to find anything that makes sense.
What you can do is render the scene into a texture and then first draw this texture (using a textured full-screen quad) before drawing the additional points. Using FBOs you can directly render into a texture without any data copies. If these are not supported, you can copy the current framebuffer (after drawing, of course) into a texture using glCopyTex(Sub)Image2D.
If you don't clear the framebuffer when rendering into the texture, it already contains the data of the previous frame and you just need to render the additional points. Then all you need to do to display it is drawing the texture. So you would do something like:
render additional points for time t into texture (that already contains the data of time t-1) using an FBO
display texture by rendering textured full-screen quad into display framebuffer
t = t+1 -> step 1.
You might even use the framebuffer_blit extension (which is core since OpenGL 3.0, I think) to copy the FBO data onto the screen framebuffer, which might even be faster than drawing the textured quad.
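A per-frame sketch of the FBO version (accumFbo and drawNewPoints are placeholder names; the color attachment of accumFbo is simply never cleared, so it keeps every point drawn so far):

    // Accumulate: draw only the points added since the last frame.
    glBindFramebuffer(GL_FRAMEBUFFER, accumFbo);
    glViewport(0, 0, 256, 256);
    drawNewPoints();

    // Display: copy the accumulated image to the window framebuffer.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, accumFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, 256, 256,   // source rectangle
                      0, 0, 256, 256,   // destination rectangle
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);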
Without FBOs it would be something like this (requiring a data copy):
render texture containing data of time t-1 into display framebuffer
render additional points for time t on top of the texture
capture framebuffer into texture (using glCopyTexSubImage2D) for next loop
t = t+1 -> step 1
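And a sketch of that copy-based fallback (prevFrameTex is a placeholder for a 256x256 texture holding the previous frame; drawFullScreenQuad and drawNewPoints stand in for your own drawing code):

    // 1. Redraw last frame's result.
    drawFullScreenQuad(prevFrameTex);
    // 2. Draw only the new points on top of it.
    drawNewPoints();
    // 3. Capture the 256x256 region back into the texture for the next frame.
    glBindTexture(GL_TEXTURE_2D, prevFrameTex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 256, 256);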
You can render the heavy part to a texture. Then, when rendering the scene, draw that texture first and render the changing things on top.