Rendering scene on texture necessary in order to do post processing? - opengl

Is it necessary to render a scene to a texture that is then used on a quad covering the whole frame in order to do post-processing? Is it because otherwise you would not have access to the rendered image as a whole, since the shader program would output the image directly to the screen without any possibility of editing it in between?

Is it necessary to render a scene to a texture that is then used on a quad
Yes and no. Yes, you need to render the scene to a texture. But with Compute Shaders, you don't have to render the texture to a quad.
The reason you need to render to a texture is that a post-processing effect usually needs the fully rendered image. That is not available during the first render pass, since you don't have access to neighboring fragments and you also wouldn't see fragments that are written after the current one.
As #Spektre noted in a comment, the second major reason render-to-texture is needed is that the OpenGL pipeline cannot read the buffer it is currently rendering to, so the processing has to be separated into passes: a later pass reads what an earlier pass rendered.
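To make the two-pass structure concrete, here is an untested C++/OpenGL sketch. It assumes a GL context already exists; renderScene(), quadVAO and postProcessProgram are placeholders for things you would already have, and only the FBO setup and the order of the two passes matter here.

    #include <GL/glew.h>   // or your favourite GL loader
    #include <cassert>

    // Placeholders assumed to exist elsewhere in your program:
    //   void renderScene();        // draws the 3D scene with its usual shaders
    //   GLuint quadVAO;            // VAO of a full-screen quad
    //   GLuint postProcessProgram; // fragment shader that samples the scene texture
    GLuint sceneFBO = 0, sceneColor = 0, sceneDepth = 0;

    void createSceneTarget(int width, int height)
    {
        // Color texture the first pass renders into.
        glGenTextures(1, &sceneColor);
        glBindTexture(GL_TEXTURE_2D, sceneColor);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        // Depth renderbuffer so depth testing still works in the first pass.
        glGenRenderbuffers(1, &sceneDepth);
        glBindRenderbuffer(GL_RENDERBUFFER, sceneDepth);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

        glGenFramebuffers(1, &sceneFBO);
        glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, sceneColor, 0);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, sceneDepth);
        assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }

    void drawFrame()
    {
        // Pass 1: render the scene into the texture instead of the screen.
        glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        renderScene();

        // Pass 2: the whole image now exists, so a full-screen quad can sample it.
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glClear(GL_COLOR_BUFFER_BIT);
        glUseProgram(postProcessProgram);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, sceneColor);
        glBindVertexArray(quadVAO);
        glDrawArrays(GL_TRIANGLES, 0, 6); // two triangles covering the screen
    }

With compute shaders you would skip the quad in pass 2: bind sceneColor as an image via glBindImageTexture and let a compute shader read it and write the processed result into a second image.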

Related

OpenGL rendering to FBO

Is it possible to render to an FBO with render calls that use FBOs themselves?
For instance, here is a bit of pseudo-code:
Bind (top level FBO)
render water    <-- (generates and uses its own sub-FBOs)
render shadows  <-- (generates and uses sub-FBOs)
render regular scene
etc..
unbind (top level FBO)
Blur the top-level FBO, apply bloom,
then render the final scene to a quad using the texture generated from the top-level FBO. I'm interested in applying post-processing like bloom to my final game scene.
If I get your question right, you want to compose a final scene from different rendering results, right? First of all, this is completely possible. You can reserve an FBO per effect if you want. But your pseudo-code lacks efficiency and would hurt performance: there is no need to create sub-FBOs at runtime all the time, as that is an expensive operation. For a pipeline with a post-processing stage you usually need no more than two offscreen FBOs. Also remember that you always have the default framebuffer (with its front, back, left and right buffers), which is created by the context. So you can render your 3D stuff into FBO-1, then use its texture as the source for FBO-2 to apply the post-processing effects, and finally blit the result into the default screen framebuffer.
I don't see a reason to create an FBO per effect. The execution is still serial: you render effect after effect, so you can reuse the same FBO again and again. Also, instead of multiple FBOs, you may consider using multiple renderbuffers or texture attachments of a single FBO and deciding which of those you render into.
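Something along these lines (an untested sketch; fboScene/texScene, fboPost/texPost, quadVAO, bloomProgram and the draw* calls are placeholders for things created once at startup or already present in your engine):

    // Created once at startup, reused every frame:
    //   GLuint fboScene, texScene;   // offscreen FBO 1 + its color texture
    //   GLuint fboPost,  texPost;    // offscreen FBO 2 + its color texture
    //   GLuint quadVAO, bloomProgram;
    //   int width, height;

    void renderFrame()
    {
        // 1. Render the whole 3D scene into offscreen FBO 1.
        glBindFramebuffer(GL_FRAMEBUFFER, fboScene);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawWater();    // these may bind their own, pre-created FBOs internally,
        drawShadows();  // as long as they rebind fboScene when they are done
        drawScene();

        // 2. Post-process: read texScene, write into offscreen FBO 2.
        glBindFramebuffer(GL_FRAMEBUFFER, fboPost);
        glClear(GL_COLOR_BUFFER_BIT);
        glUseProgram(bloomProgram);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, texScene);
        glBindVertexArray(quadVAO);
        glDrawArrays(GL_TRIANGLES, 0, 6);   // full-screen quad

        // 3. Blit the finished image into the default (window) framebuffer.
        glBindFramebuffer(GL_READ_FRAMEBUFFER, fboPost);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
        glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);
    }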

How to apply a vertex shader to all vertices in a scene in OpenGL?

I'm working on a small engine in OpenTK right now, and I've got shaders working so far. I wonder, though, how it is possible to apply a shader to an entire scene. I've seen this done in Minecraft, for example, where someone created a shader that warped the entire scene. But since every object is rendered with its own shader active, how would I achieve this?
You seem to be referring to a technique called post processing. The way it works is that you first render the entire scene to a texture using the shaders you already have. You can then render this texture to the screen using a fragment shader to apply various effects like motion blur, warping or depth of field.
"But since every object is rendered with its own shader active"
That's not how OpenGL works. In fact there's no such thing as "models" (what you probably mean by "object") in OpenGL. OpenGL draws primitives (points, lines and triangles) one at a time. Furthermore there's no hard association between a set of primitives and the shaders being used.
It's trivial to just bind a single shader program at the beginning of a batch and every primitive of that batch is subjected to this shader. If the batch consists of the whole scene, then the whole scene uses that shader.
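For example (sceneProgram, Mesh and meshes are placeholders for whatever your engine uses):

    // One program bound once; every draw call in the batch uses it.
    glUseProgram(sceneProgram);
    for (const Mesh& m : meshes)
    {
        glBindVertexArray(m.vao);
        glDrawElements(GL_TRIANGLES, m.indexCount, GL_UNSIGNED_INT, nullptr);
    }
    glUseProgram(0);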
AFAIK, you can only bind one vertex shader at a time.
What you may want to try is to render to a texture first, then re-render that texture onto the screen while applying some changes to it (warping it, for example). You can also extract the depth buffer and use it if you have a more complex change that you want to apply.
If you bind the shader program you want before the render loop, it affects all draw calls until you unbind it (i.e. bind program id 0).

Is it possible to save the current viewport and then re draw the saved viewport in OpenGL and C++ during the next draw cycle?

I want to know if I can save a bitmap of the current viewport in memory and then, on the next draw cycle, simply draw that memory back to the viewport.
I'm plotting a lot of data points as a 2D scatter plot in a 256x256 area of the screen. I could in theory re-render the entire plot each frame, but that would require me to store a lot of data points (50K-100K), most of which would be redundant since a 256x256 box only has ~65K pixels.
So instead of redrawing and rendering the entire scene at time t I want to take a snapshot of the scene at t-1 and draw that first, then I can draw updates on top of that.
Is this possible? If so, how can I do it? I've looked around quite a bit for clues, but I haven't been able to find anything that makes sense.
What you can do is render the scene into a texture and then first draw this texture (using a textured full-screen quad) before drawing the additional points. Using FBOs you can directly render into a texture without any data copies. If these are not supported, you can copy the current framebuffer (after drawing, of course) into a texture using glCopyTex(Sub)Image2D.
If you don't clear the framebuffer when rendering into the texture, it already contains the data of the previous frame and you only need to render the additional points. Then all you need to do to display it is draw the texture. So you would do something like:
1. Render the additional points for time t into the texture (which already contains the data of time t-1) using an FBO.
2. Display the texture by rendering a textured full-screen quad into the display framebuffer.
3. t = t+1, go to step 1.
You might even use the framebuffer_blit extension (which is core since OpenGL 3.0, I think) to copy the FBO data onto the screen framebuffer, which might even be faster than drawing the textured quad.
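A rough sketch of that FBO/blit variant (plotFBO and plotTex are assumed to be created once at 256x256, and drawNewPoints() is a placeholder for drawing only the points added since the last frame):

    // plotFBO has plotTex (a 256x256 color texture) attached as
    // GL_COLOR_ATTACHMENT0; both are created once, not per frame.
    void drawPlotFrame()
    {
        // Step 1: add only the new points on top of the existing content.
        // No glClear here, so the texture keeps everything drawn so far.
        glBindFramebuffer(GL_FRAMEBUFFER, plotFBO);
        glViewport(0, 0, 256, 256);
        drawNewPoints();

        // Step 2: show the accumulated image by blitting it to the window.
        glBindFramebuffer(GL_READ_FRAMEBUFFER, plotFBO);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
        glBlitFramebuffer(0, 0, 256, 256,   // source rectangle (the texture)
                          0, 0, 256, 256,   // destination rectangle on screen
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);
    }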
Without FBOs it would be something like this (requiring a data copy):
1. Render the texture containing the data of time t-1 into the display framebuffer.
2. Render the additional points for time t on top of it.
3. Capture the framebuffer into the texture (using glCopyTexSubImage2D) for the next loop iteration.
4. t = t+1, go to step 1.
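And a sketch of that copy-based fallback (again, plotTex, drawTexturedQuad() and drawNewPoints() are placeholders for things you set up yourself):

    void drawPlotFrameWithoutFBO()
    {
        // Step 1: draw the texture holding the t-1 image into the window.
        glBindTexture(GL_TEXTURE_2D, plotTex);
        drawTexturedQuad();   // quad covering the 256x256 plot area

        // Step 2: draw only the points added for time t on top of it.
        drawNewPoints();

        // Step 3: copy the 256x256 region of the framebuffer back into the
        // texture so the next frame starts from the accumulated image.
        glBindTexture(GL_TEXTURE_2D, plotTex);
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 256, 256);
    }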
You can render the heavy part to a texture. Then, when rendering the scene, draw that texture first and render the changing things on top of it.

Applying a shader to framebuffer object to get fisheye affect

Let's say I have an application (the details of the application should be irrelevant to the problem). Instead of rendering to the screen, I am somehow able to force the application to render to a framebuffer object instead (by messing with GLEW or intercepting a call in a DLL).
Once the application has rendered its content to the FBO, is it possible to apply a shader to the contents of that framebuffer? My knowledge is limited here, but from what I understand, at this stage all information about the vertices is no longer available and all the necessary tests have been applied, so what's left in the buffer is just pixel data. Is this correct?
If it is possible to apply a shader to the FBO, is it possible to get a fisheye effect? (Like this, for example: http://idea.hosting.lv/a/gfx/quakeshots.html)
The technique used in the link above is to create six different viewports, render each viewport to a cubemap face, and then apply the texture to a mesh.
Thanks
A framebuffer object encapsulates several other buffers, specifically those that are implicitly indexed by fragment location. So a single framebuffer object may bundle together a colour buffer, a depth buffer, a stencil buffer and a bunch of others. The individual buffers are known as renderbuffers.
You're right: there's no geometry in there. For the purposes of reading back the scene you get only the final fragment values, which, if you're hijacking an existing app, will probably be a 2D pixel image of the frame plus some other things you don't care about.
If your GPU has render-to-texture support (originally an extension circa OpenGL 1.3, but you'd be hard-pressed to find a GPU without it nowadays, even in mobile phones), then you can attach a texture as a render target within a framebuffer object. The rendering code is then exactly as it would normally be, but it ends up writing the results to a texture that you can use as a source for drawing.
Fragment shaders can programmatically decide which location of a texture map to sample in order to create their output. So you can write a fragment shader that applies a fisheye lens, though you're restricted to the field of view rendered in the original texture, obviously. Which would probably be what you'd get in your Quake example if you had just one of the sides of the cube available rather than six.
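For illustration only, a fisheye-style post-processing fragment shader could look something like this; the distortion formula is just one simple radial remapping, not the mapping used in the Quake shots, and vUV/sceneTex/strength are names you would have to wire up yourself:

    // Post-processing fragment shader applied to the full-screen quad.
    const char* fisheyeFragSrc = R"(
        #version 330 core
        in vec2 vUV;                    // quad texture coordinate in [0,1]
        out vec4 fragColor;
        uniform sampler2D sceneTex;     // texture the scene was rendered into
        uniform float strength;         // distortion amount, e.g. 0.3

        void main()
        {
            vec2 p = vUV * 2.0 - 1.0;   // center coordinates around (0,0)
            float r = length(p);
            vec2 q = p * (1.0 - strength * r * r);  // pull samples toward the center
            fragColor = texture(sceneTex, q * 0.5 + 0.5);
        }
    )";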
In summary: the answer is 'yes' to all of your questions. There's a brief introduction to framebuffer objects here.
Look here for some relevant info:
http://www.opengl.org/wiki/Framebuffer_Object
The short, simple explanation is that an FBO is the 3D equivalent of a software frame buffer. You have direct access to individual pixels, instead of having to modify a texture and upload it. You can get shaders to render into an FBO. The link above gives an overview of the procedure.

Getting normal information from OpenGL render output

I'll try to keep this simple.
I want a way to access the normal information of the scene from the framebuffer output (or similar), the same way one can access the depth buffer using glGetTexImage with GL_DEPTH_COMPONENT.
I know I could set up a fragment shader that outputs the normal information in RGB color space, which could in turn be read from the rendered image. I'm wondering, however, if there is a way to do this within the OpenGL API itself.
I'll clarify anything upon request as best as I can,
Thank you
You already know the solution: render the normal as RGB. There's no built-in normal buffer you could use. If you don't want to render your scene twice, use a framebuffer object (FBO) with multiple render targets (MRT). Then your fragment shader can write color and normal into separate textures.
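A minimal sketch of the MRT setup (untested; gBufferFBO, gColorTex and gNormalTex are placeholder names for an FBO and two textures you create yourself):

    // FBO setup (done once): two textures attached as color attachments.
    glBindFramebuffer(GL_FRAMEBUFFER, gBufferFBO);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, gColorTex, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                           GL_TEXTURE_2D, gNormalTex, 0);
    const GLenum drawBuffers[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, drawBuffers);   // the shader writes to both attachments

    // Fragment shader: one output per attachment.
    const char* mrtFragSrc = R"(
        #version 330 core
        in vec3 vNormal;                         // interpolated normal from the vertex shader
        layout(location = 0) out vec4 outColor;  // goes to GL_COLOR_ATTACHMENT0
        layout(location = 1) out vec4 outNormal; // goes to GL_COLOR_ATTACHMENT1

        void main()
        {
            outColor  = vec4(1.0);                                 // your usual shading here
            outNormal = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0); // packed into [0,1]
        }
    )";

In practice you would probably give the normal attachment a higher-precision format such as GL_RGBA16F so the unpacked normals keep enough precision.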