Reading current framebuffer - OpenGL

Is there a way to read a fragment from the framebuffer that is currently being rendered to?
I'm looking for a way to read the color of the fragment that sits at the position the current fragment will probably overwrite, i.e. the exact position of the fragment that was rendered there previously.
I found that gl_FragData and gl_LastFragData are added to shaders by certain EXT_ extensions, but if they are what I need, could somebody explain how to use them?
I am looking for either an OpenGL or an OpenGL ES 2.0 solution.
EDIT:
All along I was searching for a solution that would give me some kind of read/write "uniform" accessible from shaders. For anyone searching for something similar: OpenGL 4.3+ supports image and shader storage buffer types. Both allow reading and writing simultaneously, and in combination with compute shaders they have proved to be a very powerful tool.
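For anyone who lands here, a minimal sketch of the shader storage buffer approach, assuming a GL 4.3 context; the buffer layout, binding point 0, the work-group size of 64, and the helper run_compute are illustrative choices, not code from this thread.

```c
/* Sketch only: a GLSL 4.30 compute shader that reads and writes the same
 * shader storage buffer - the kind of read/write "uniform" described above. */
static const char *cs_src =
    "#version 430\n"
    "layout(local_size_x = 64) in;\n"
    "layout(std430, binding = 0) buffer Data { float values[]; };\n"
    "void main() {\n"
    "    uint i = gl_GlobalInvocationID.x;\n"
    "    values[i] = values[i] * 2.0;   /* read, modify, write in place */\n"
    "}\n";

void run_compute(GLuint program, GLuint ssbo, GLsizei count)
{
    /* Bind the buffer to binding point 0 declared in the shader. */
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);
    glUseProgram(program);
    glDispatchCompute((count + 63) / 64, 1, 1);
    /* Make the writes visible to whatever reads the buffer next. */
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
}
```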

Your question seems rather confused.
Part of your question (the first sentence) asks if you can read from the framebuffer in the fragment shader. The answer is, generally no. There is an OpenGL ES 2.0 extension that lets you do so (EXT_shader_framebuffer_fetch, mentioned further down), but it's only supported on some hardware. In desktop GL 4.2+, you can use arbitrary image load/store to get the same effect. But you can't render to that image anymore; you have to write your data using image storing functions.
gl_LastFragData is pretty simple: it's the color from the sample in the framebuffer that will be overwritten by this fragment shader. You can do with it what you wish, if it is available.
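For illustration, a minimal sketch of gl_LastFragData on hardware that exposes EXT_shader_framebuffer_fetch (OpenGL ES 2.0); the uniform name and the mix factor are placeholders, not anything prescribed by the extension.

```c
/* Sketch only: an ES 2.0 fragment shader doing a programmable "blend"
 * against the color already in the framebuffer. */
static const char *fs_src =
    "#extension GL_EXT_shader_framebuffer_fetch : require\n"
    "precision mediump float;\n"
    "uniform vec4 u_color;\n"
    "void main() {\n"
    "    vec4 dst = gl_LastFragData[0];         /* color currently in the framebuffer */\n"
    "    gl_FragColor = mix(dst, u_color, 0.5); /* combine it however you like */\n"
    "}\n";
```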
The second part of your question (the second paragraph) is a completely different question. There, you're asking about fragments that were potentially never written to the framebuffer. You can't read fragments; you can only read images. And if a fragment fails the depth test, then its data was never rendered to an image, so you can't read it.

With most nVidia hardware you can use the GL_NV_texture_barrier extension to read from a texture that's currently bound to a framebuffer. But bear in mind that you won't be able to read data any more recent than what was produced by the previous draw call.
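A rough sketch of how the barrier is used, assuming a texture that is simultaneously attached to the bound FBO and bound as a sampler for the second pass; the two draw helpers are hypothetical stand-ins for your own draw calls.

```c
/* Sketch only: split rendering into two draws with a texture barrier
 * between them so the second draw can sample what the first one wrote. */
void draw_with_feedback_texture(void)
{
    draw_first_pass();     /* writes into the texture through the FBO (hypothetical) */
    glTextureBarrierNV();  /* provided by GL_NV_texture_barrier: flush texture caches */
    draw_second_pass();    /* samples the texture; sees only the first pass's results */
}
```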

Related

Writing to depth buffer from opengl compute shader

Generally, on modern desktop OpenGL hardware, what is the best way to fill a depth buffer from a compute shader and then use that depth buffer for graphics-pipeline rendering with triangles, etc.?
Specifically, I am wondering about concerns regarding HiZ (hierarchical Z). I also wonder whether it's better to do the compute-shader modifications to the depth buffer before or after the graphics rendering.
If the compute shader is run after the graphics rendering, I assume the depth buffer will typically be decompressed behind the scenes. But I worry that, done the other way around, the depth buffer may be left in a decompressed/non-optimal state for the graphics pipeline.
As far as I know, you cannot bind textures with any of the depth formats as images, and thus cannot write to depth-format textures in compute shaders. See the glBindImageTexture documentation: it lists the formats that your texture format must be compatible with. Depth formats are not among them, and the specification says the depth formats are not compatible with the normal formats.
Texture copying functions have the same compatibility restrictions, so you can't even, e.g., write to a normal texture in the compute shader and then copy it to a depth texture. glCopyImageSubData does not explicitly have that restriction, but I haven't tried it and it's not yet part of the core profile.
What might work is writing to a normal texture, then rendering a fullscreen triangle and setting gl_FragDepth to values read from that texture, but that's an additional fullscreen pass.
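To make that workaround concrete, here is a rough sketch of the fullscreen-pass fragment shader; the R32F texture, the uniform name, and the use of texelFetch are assumptions for illustration, not tested code.

```c
/* Sketch only: copy depth values produced by a compute shader (stored in a
 * regular color-format texture) into the real depth buffer via gl_FragDepth. */
static const char *depth_copy_fs =
    "#version 430\n"
    "uniform sampler2D u_depth_tex;   /* e.g. an R32F texture filled by the compute shader */\n"
    "void main() {\n"
    "    float d = texelFetch(u_depth_tex, ivec2(gl_FragCoord.xy), 0).r;\n"
    "    gl_FragDepth = d;            /* written to the currently bound depth attachment */\n"
    "}\n";
```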
I don't quite understand your second question - if your compute shader stuff modifies the depth buffer, the result will most likely be different depending on whether you do it before or after regular rendering because different parts will be visible or occluded.
But maybe that question is moot, since it seems you cannot manually write into depth buffers at all - which might also answer your third question: by not writing into depth buffers, you can't mess with their compression :)
Please note that I'm no expert in this; I had a similar problem and looked at the docs/spec myself, so this might all be wrong :) Please let me know if you manage to write to depth buffers with compute shaders!

Line drawing with GLSL shader, how to do it in fragment shader?

I've read some papers that say they can detect silhouettes, edges, and ridges and draw lines for them using a GLSL shader. But in the implementation they say that they 'accessed' neighbouring pixels and did something with them. How is that even possible?
This is the paper in question
http://www.google.co.th/url?sa=t&rct=j&q=&esrc=s&source=web&cd=3&ved=0CDEQFjAC&url=http%3A%2F%2Fcg.postech.ac.kr%2Fresearch%2Fline_drawings_via_abstracted_shading%2Fline-drawing-s07.pdf&ei=I5GeUO-pMcKzrAf8_4DICw&usg=AFQjCNE7D9nMVKWvYwvNUaHo5S1ZfrG10A&sig2=CmnD6hbD6-0EYkvv-Bj3LQ
It's in Section 3, Rendering Lines. They initially talk about a GLSL shader, but then they suddenly talk about sampling groups of pixels.
I'm studying non-photorealistic rendering without post-render image processing, so GPU usage can be optimal if it is done in GLSL shaders.
Without reading the paper, I assume they probably mean gather reads from a texture (which have always been possible with shaders), not scatter writes to the framebuffer. Since OpenGL 4 it's even possible to do scatter writes from a shader, called image writing, but it's rather slow. Anyway, for line detection you only need gather reads, so this is not a problem.
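To make "accessing neighbouring pixels" concrete, here is a rough sketch of such a gather-based pass: render the scene (or its normals/depths) into a texture first, then sample the texel neighbourhood in a second pass over a fullscreen quad that passes v_uv from its vertex shader. The uniform names and the crude four-neighbour difference are illustrative, not the paper's actual operator.

```c
/* Sketch only: a fragment shader that gathers neighbouring texels and
 * darkens pixels where the local difference is large (a naive edge pass). */
static const char *edge_fs =
    "#version 120\n"
    "uniform sampler2D u_scene;\n"
    "uniform vec2 u_texel;            /* 1.0 / texture resolution */\n"
    "varying vec2 v_uv;\n"
    "void main() {\n"
    "    vec3 l = texture2D(u_scene, v_uv - vec2(u_texel.x, 0.0)).rgb;\n"
    "    vec3 r = texture2D(u_scene, v_uv + vec2(u_texel.x, 0.0)).rgb;\n"
    "    vec3 b = texture2D(u_scene, v_uv - vec2(0.0, u_texel.y)).rgb;\n"
    "    vec3 t = texture2D(u_scene, v_uv + vec2(0.0, u_texel.y)).rgb;\n"
    "    float edge = length(l - r) + length(t - b);\n"
    "    gl_FragColor = vec4(vec3(1.0 - step(0.3, edge)), 1.0); /* black lines on white */\n"
    "}\n";
```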

GLSL How to retrieve vertex position after a shader process it?

I wrote a program that simulates soft bodies using springs. It looks nice, but the problem is that it consumes a lot of CPU time, so I can not run it on my laptop or on any PC that isn't high end.
I thought it would be a good idea to write a vertex shader and move the logic to the GPU. I've read some tutorials and made a toon shader, so I thought (wrongly) I was ready to go.
The big problem I have is that I need to know the old position of a vertex to calculate the new one. How could I retrieve a vertex position so I can send it back to the shader each frame?
I'm not really sure this is even possible; maybe I'm trying to do something shaders were never meant to do. I'm still researching, but I thought I could ask and see if someone can help.
You can use the transform feedback mechanism if your hardware supports OpenGL 3.0 or above. There are also other techniques for getting the vertex position back, like carefully arranging your rendering so that you're writing a triangle (or point primitive) to each separate pixel on the screen. This is fairly difficult, and you need to render to a floating-point buffer, which requires FBO support.
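A minimal transform-feedback sketch, assuming GL 3.0+, a program prog whose shaders are attached and whose vertex shader writes an output named out_position, and num_verts vertices; names and the point-primitive choice are placeholders, and error checking is omitted.

```c
/* Sketch only: capture the positions the vertex shader writes into a
 * buffer so they can be read back or reused as next frame's input. */
GLuint tfb;
glGenBuffers(1, &tfb);
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, tfb);
glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, num_verts * 4 * sizeof(float),
             NULL, GL_DYNAMIC_COPY);

/* The captured varyings must be declared before linking the program. */
const char *varyings[] = { "out_position" };
glTransformFeedbackVaryings(prog, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(prog);

/* Capture pass: record transformed vertices, optionally without rasterizing. */
glUseProgram(prog);
glEnable(GL_RASTERIZER_DISCARD);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfb);
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, num_verts);
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);

/* Read the results back on the CPU, or bind 'tfb' as a vertex buffer next frame. */
float *pos = (float *)glMapBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, GL_READ_ONLY);
/* ... use pos ... */
glUnmapBuffer(GL_TRANSFORM_FEEDBACK_BUFFER);
```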

How does multisample really work?

I am very interested in understanding how multisampling works. I have found a large literature on how to enable or use it, but very little information concerning what it really does in order to achieve an antialiased rendering. What I have found, in many places, is conflicting information that only confused me more.
Please note that I know how to enable and use multisampling (I actually already use it), what I don't know is what kind of data really gets into the multisampled renderbuffers/textures, and how this data is used in the rendering pipeline.
I can understand very well how supersampling works, but multisampling still has some obscure areas that I would like to understand.
Here is what the spec says (OpenGL 4.2):
Pixel sample values, including color, depth, and stencil values, are stored in this
buffer (the multisample buffer). Samples contain separate color values for each fragment color.
...
During multisample rendering the contents of a pixel fragment are changed
in two ways. First, each fragment includes a coverage value with SAMPLES bits.
...
Second, each fragment includes SAMPLES depth values and sets of associated
data, instead of the single depth value and set of associated data that is maintained
in single-sample rendering mode.
So, each sample contains a distinct color, coverage bit, and depth. What's the difference from normal supersampling? It seems like "weighted" supersampling to me, where each final pixel value is determined by the coverage of its samples instead of a simple average, but I am very unsure about this. And what about texture coordinates at the sample level?
If I store, say, normals in a RGBF multisampled texture, will I read them back "antialiased" (that is, approaching 0) on the edges of a polygon?
A fragment shader is called once per fragment, unless it uses gl_SampleID, gl_SamplePosition, or has a 'sample' storage qualifier. How can a fragment shader be invoked once per fragment and still produce an antialiased rendering?
OpenGL on Silicon Graphics Systems:
http://www-f9.ijs.si/~matevz/docs/007-2392-003/sgi_html/ch09.html#LE68984-PARENT
mentions: "When you use multisampling and read back color, you get the resolved color value (that is, the average of the samples). When you read back stencil or depth, you typically get back a single sample value rather than the average. This sample value is typically the one closest to the center of the pixel."
And there's this technical spec (1994) from the OpenGL site. It explains in full detail what is done if MULTISAMPLE_SGIS is enabled: http://opengl.org/registry/specs/SGIS/multisample.txt
See also this related question: How are depth values resolved in OpenGL textures when multisampling?
And the answers to this question, where GL_MULTISAMPLE_ARB is recommended: "Where is GL_MULTISAMPLE defined?". The specs for GL_MULTISAMPLE_ARB (2002) are here: http://www.opengl.org/registry/specs/ARB/multisample.txt
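As an aside, the "resolved" (averaged) color mentioned above is what you get in modern GL by blitting a multisample FBO into a single-sample one; a rough sketch, with msaa_fbo, resolve_fbo, width and height as placeholders:

```c
/* Sketch only: the blit performs the per-pixel resolve of all samples. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaa_fbo);    /* SAMPLES > 1 */
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolve_fbo); /* single-sample target */
glBlitFramebuffer(0, 0, width, height,
                  0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);  /* averages the color samples */
```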

Shader framebuffer readback

I was wondering if there is support in the newer shader models for reading back a pixel value from the target framebuffer. I assume this is already done in later (non-programmable) stages of the drawing pipeline, which made me hope that this capability might have been added to the programmable pipeline.
I am aware that it is possible to draw to a texture bound framebuffer and then send this texture to the shader, I was just hoping for a more elegant way to achieve the same functionality.
As Andrew notes, framebuffer access is logically a separate stage from the fragment shader, so reading the framebuffer in the fragment shader is impossible. The reason for this (to answer Andrew's question) is a combination of performance and the ordering requirements of the graphics pipeline. The way the rendering pipeline is defined, framebuffer blending operations MUST occur in the same order as the triangles/primitives that went into the beginning of the pipeline. The fragment shaders, on the other hand, can run in any order. So by having them be separate stages, the GPU is free to run fragment shaders as fast as it can, as their inputs become available, without having to synchronize between them. As long as it maintains enough buffer space to hold on to the outputs of the fragment shaders, so that they can be accumulated and the framebuffer blends and writes can occur in order, all is well, as the results of any given fragment shader are not visible until after the blending stage.
If there was a way for the fragment shader to read the framebuffer, it would require some sort of synchronization to ensure that those reads happen in order, thus greatly slowing things down.
No. As you mention, rendering to a texture is the way to achieve that functionality.
If you take a look at a block diagram of a GPU pipeline, you'll see that the blending stage - which is what combines fragment shader output with the framebuffer - is separate from the fragment shader and is fixed-function.
I'm not a GPU designer - so I can only speculate the reason for this. Presumably it is to keep framebuffer access fast and insulate the fragment shader stage from the frame buffer so that it can be better parallelised. There are probably also issues regarding multi-sampling, and so on.
(Not to mention that fixed-function blending is "good enough" in most cases.)
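For reference, a rough sketch of that render-to-texture path in OpenGL terms (an FBO with a color texture); the names, the size, and the two draw helpers are placeholders, not code from this thread.

```c
/* Sketch only: pass 1 draws into 'tex' through an FBO, pass 2 binds 'tex'
 * as a sampler so its shader can read what pass 1 produced. */
GLuint fbo, tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

/* Pass 1: render the scene into 'tex'. */
draw_scene();                     /* hypothetical helper */

/* Pass 2: back to the default framebuffer, sampling 'tex' in the shader. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, tex);
draw_fullscreen_pass();           /* hypothetical helper */
```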
Actually I think this is now doable with Direct3D 11 SM 5.0 (I didn't test it though).
You can bind a UAV to a pixel shader (PS 5.0), allowing read and write operations on it, using the method OMSetRenderTargetsAndUnorderedAccessViews.
In that case the backbuffer of the swap chain you render into has to be created with the flag DXGI_USAGE_UNORDERED_ACCESS (I guess).
This is used in the DXSDK OIT11 sample.
It is possible to read back the contents of the framebuffer in the fragment shader with the EXT_shader_framebuffer_fetch extension. The support can be added to a GPU with some performance loss. In fact, these days I'm working on adding support for this extension to the OpenGL ES 2.0 driver of a well-known GPU brand in the consumer electronics market.
You can draw to a texture TEX (using a render target view) and then bind that as an input to another shader (using a shader resource view). TEX is then a pseudo-framebuffer.