Access a different fragment in the fragment shader (OpenGL)

Can I access and change the output values of another fragment at a certain location from within the fragment shader?
For example, in the main() function I process everything as usual and output the color with some value. But in addition to that, I also want the fragment at position vec3(5,3,6) (in world coordinates) to have the same colour.
I already did some research on the web about this. The OpenGL site says the fragment shader has one fragment as input and one fragment as output, which doesn't sound very promising.
I also know that all fragments are processed in parallel. But maybe it is possible to say: if the fragment at this position has not been processed yet, write this color to it and mark it as already processed.
Maybe someone can explain whether this is possible somehow and, if not, why it is not a good idea. My best guess is that building this logic into the shader would have a very bad effect on overall performance.

Maybe someone can explain whether this is possible somehow and, if not, why it is not a good idea.
It's not a question of bad idea vs. good idea. It's simply not possible.
The closest you can get to this functionality is ARB_fragment_shader_interlock. Through its interlock and ordering guarantees, it allows limited interoperation. And that limitation is... it only allows interoperation for fragments that cover the same pixel/sample.
So even this functionality does not allow you to write to some other pixel.
The absolute best you can do is use SSBOs and atomic counters to have fragment shaders record which color values and "world coordinates" they would like to write, then have a second pass execute that buffer as either a rendering command or a compute shader to actually write that data.
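A rough sketch of what the shader side of that could look like, assuming a simple append buffer (the block, struct, and variable names here are made up for illustration):

```glsl
#version 430 core

// Illustrative "write request" record; names are assumptions, not an API.
struct WriteRequest {
    vec4 worldPos;   // xyz = requested world-space position
    vec4 color;      // color the fragment would like written there
};

layout(std430, binding = 0) buffer Requests {
    WriteRequest requests[];
};

layout(binding = 0, offset = 0) uniform atomic_uint uRequestCount;

in vec3 vWorldPos;
out vec4 fragColor;

void main()
{
    vec4 color = vec4(1.0);          // whatever color this fragment computes
    fragColor = color;

    // Also request that world position (5,3,6) receive the same color.
    uint i = atomicCounterIncrement(uRequestCount);
    requests[i].worldPos = vec4(5.0, 3.0, 6.0, 1.0);
    requests[i].color    = color;
}
```

A later compute or draw pass would then walk the recorded requests and perform the actual writes; the host has to clear the counter each frame and size the buffer generously.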

As already pointed out in Nicol's answer, you can't write to additional fragments of a framebuffer surface in the fragment shader.
The description of your use case is not clear enough to tell what might work best. In the interest of brainstorming, the most direct approach that comes to mind is that you don't use a framebuffer draw surface at all, but output to an image instead.
If you bind a texture as an image, you can write to it in the fragment shader using the imageStore() built-in function. This function takes coordinates as one of its arguments, so you can write to any pixel you want, as well as write multiple pixels from the same shader invocation.
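As a hedged sketch (the binding point and image name are assumptions), a fragment shader using imageStore() might look like this:

```glsl
#version 420 core

// Image bound via glBindImageTexture on the host; name/binding are assumed.
layout(rgba8, binding = 0) uniform writeonly image2D uOutput;

in vec4 vColor;

void main()
{
    // Write the fragment's own pixel...
    imageStore(uOutput, ivec2(gl_FragCoord.xy), vColor);

    // ...and any other pixel we choose, from the same invocation.
    imageStore(uOutput, ivec2(128, 64), vColor);
}
```

Keep in mind that image writes bypass depth testing and blending, concurrent writes to the same texel are unordered, and the host needs an appropriate glMemoryBarrier() before consuming the result.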
Depending on what exactly you want to achieve, I could also imagine a hybrid approach, where your primary rendering still goes to a framebuffer, but you write additional pixel values to an image at the desired positions. Then, in a second rendering pass, you can combine the content of the image with the primary rendering. The combination could be done with blending if the math/logic is simple enough. If you need a more complex combination, you can use a texture as the framebuffer attachment of the initial pass, and then use the result of the rendering and the extra image as two inputs for the fragment shader of the combination pass.
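For the hybrid idea, the combination pass could be as simple as a full-screen shader sampling both results; this is only a sketch with assumed sampler names:

```glsl
#version 330 core

// uPrimary/uExtra are assumed names for the two inputs of the combine pass.
uniform sampler2D uPrimary;   // color attachment from the initial pass
uniform sampler2D uExtra;     // image that received the extra pixel writes

in vec2 vTexCoords;
out vec4 fragColor;

void main()
{
    vec4 base  = texture(uPrimary, vTexCoords);
    vec4 extra = texture(uExtra,  vTexCoords);

    // Simple combination: the extra image wins wherever it was written.
    fragColor = mix(base, extra, extra.a);
}
```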

Related

Is it possible to reprocess a fragment shader before it is drawn to the screen?

Is there a way to make the fragment shader pass through another fragment shader before it is drawn? As in the following example:
Consider that I want to draw a scene, but only inside a shape; I can check in the shader whether the TexCoords of the fragment are inside the shape I want.
Pass 1: Bind post processing shader
Pass 2: Draw the scene
Pass 3: Bind default or disable post processing shader
[Screenshot: drawing without the post-processing shader]
[Screenshot: drawing with the post-processing shader]
I'm aware of framebuffers, and that works, but it goes through a pass that renders the whole screen again, and that can cost me performance in the future, especially considering that this post-processing shader will be turned on, off, and reset several times during the rendering of a single frame.
OpenGL does not recognize the idea of chaining shader stages in the manner you suggest. During any particular rendering operation, there is exactly one fragment shader active. Period.
Of course, OpenGL also does not care where your shader strings come from. It doesn't care if there's a single file on a disk with that text in it or not. All it cares about is that you pass text corresponding to valid GLSL to glShaderSource.
So if you like, you can manufacture a single shader from multiple conceptual "shaders". This can be as simple as just concatenating a bunch of file strings together (which glShaderSource can do for you, since it takes multiple strings), or it can be a complex operation where you recognize certain variables as interface variables and carefully synthesize a main function from these disparate pieces.
How you go about doing that is ultimately up to you.
Alternatively, you can take an "ubershader" approach. That is, put all of the possible post-processing stuff in one shader, and use uniform variables to tell whether or not a particular post-processing step is currently active.
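For instance, an ubershader for the post-processing pass might look roughly like this; the individual effects and uniform names are just placeholders:

```glsl
#version 330 core

// Every post-processing step lives in one shader; uniforms toggle each step.
uniform sampler2D uScene;
uniform bool uEnableGrayscale;
uniform bool uEnableVignette;

in vec2 vTexCoords;
out vec4 fragColor;

void main()
{
    vec4 color = texture(uScene, vTexCoords);

    if (uEnableGrayscale) {
        float g = dot(color.rgb, vec3(0.299, 0.587, 0.114));
        color.rgb = vec3(g);
    }
    if (uEnableVignette) {
        float d = distance(vTexCoords, vec2(0.5));
        color.rgb *= 1.0 - smoothstep(0.3, 0.7, d);
    }
    fragColor = color;
}
```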
Write a shader that does both things: calculates the colour, and discards the fragment if it's outside a certain shape. Render the scene with that shader.
Perhaps if you want to avoid wasting time processing pixels outside the shape, you can set the "scissor rectangle" to the bounding box of your shape, so OpenGL won't even run the shader for pixels outside that box.
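A minimal sketch of such a single shader, assuming the shape is a circle in texture-coordinate space (the shape test, sampler, and uniform names are illustrative):

```glsl
#version 330 core

// Example circular shape in texcoord space; names are assumptions.
uniform vec2  uCenter;
uniform float uRadius;
uniform sampler2D uDiffuse;   // whatever the material normally samples

in vec2 vTexCoords;
out vec4 fragColor;

void main()
{
    // Discard anything outside the shape before doing the normal work.
    if (distance(vTexCoords, uCenter) > uRadius)
        discard;

    fragColor = texture(uDiffuse, vTexCoords);   // usual color computation
}
```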

Data from shader to main program

I'm working on a small program in OpenGL and realized I needed to retrieve some data from the geometry shader to the main program so I could handle mouse events.
Not much, just some specific square coordinates that are calculated in the geometry shader.
How should I do this? Should I use a small FBO or should I make all the calculations in the main program and then send them to the geometry shader?
Generally speaking, you should do as much computation as possible in the host program.
If you want to read back data from a shader, Google is your friend. Outputting to an FBO is possible, although you'll also need a nontrivial fragment shader. The best option is often to use an SSBO, although image load-store or transform feedback may be more appropriate depending on what you're trying to do.
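For example, a geometry shader could record the square coordinates it computes into an SSBO, which the host maps or reads back after the draw call. The buffer name, layout, and sizing (four corners per input point) are assumptions made for this sketch:

```glsl
#version 430 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

// SSBO that receives the computed corners, indexed by input primitive.
layout(std430, binding = 2) buffer SquareCorners {
    vec4 corners[];
};

void main()
{
    vec4 p = gl_in[0].gl_Position;
    vec2 offsets[4] = vec2[](vec2(-0.1, -0.1), vec2(0.1, -0.1),
                             vec2(-0.1,  0.1), vec2(0.1,  0.1));

    for (int i = 0; i < 4; ++i) {
        vec4 corner = p + vec4(offsets[i], 0.0, 0.0);
        corners[gl_PrimitiveIDIn * 4 + i] = corner;   // record for readback
        gl_Position = corner;
        EmitVertex();
    }
    EndPrimitive();
}
```

After the draw call, the host would issue an appropriate glMemoryBarrier() and then read the buffer back with glGetBufferSubData() or glMapBufferRange().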
The easiest way to do this is to color-code the values you need to send to the host and read them back with glReadPixels.
You need to render to a separate framebuffer to hide the calculation from the screen.
If you want to implement hit testing on objects of your scene and are not GPU-bound, this is the way to go.

Relation between depth-only FBO and fragment shader

I’ve been wondering what happens to the fragment shader stage when binding a depth-only FBO (only the GL_DEPTH_ATTACHMENT gets attached and glDrawBuffer(GL_NONE) is called). Because any color output is discarded:
does OpenGL simply process vertices the regular way, call the rasterizer, and run the fragment shader for the rasterized fragments, but discard any result,
or does it do smarter things, like processing vertices up to the optional geometry shader, then cutting out the fragment shader stage and using a dummy fragment shader in order to skip useless color computations?
Because of vendor-specific implementation details, I guess it might vary, but I’d like to have a better idea about this topic.
In my experience, the fragment shader will still run even if it has no outputs. This can be used for example to draw shadow maps with punch-through alpha textures, using discard.
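A depth-only shader for that punch-through case can be as small as this sketch (the texture name is assumed):

```glsl
#version 330 core

// No color output: only depth is written in this pass.
uniform sampler2D uAlbedo;

in vec2 vTexCoords;

void main()
{
    // Fully transparent texels leave no depth behind, so they cast no shadow.
    if (texture(uAlbedo, vTexCoords).a < 0.5)
        discard;
}
```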
If it does have outputs (or more outputs than are bound), then they should just be ignored. I'd imagine that a smart driver could easily skip the fragment shader entirely if it doesn't contain any discard statements.
Also perhaps look into Separate Shader Objects (https://www.opengl.org/registry/specs/ARB/separate_shader_objects.txt). It allows you to disable the stages manually.
I've read (though never personally tested) that a complete lack of a color buffer causes strange undefined behavior, as OpenGL implementations each had to answer the question "What /should/ it do when there's no color buffer?" on their own, with no official answer used consistently across all implementations.
The official documentation carefully avoids mentioning this situation generally.
As such, it is just recommended that you simply... not do that, and instead always have a color buffer, even if you don't use it.

Modern OpenGL - how to render VBO part in different color

I have a big VBO (100k+ triangles) with assigned colors (x,y,z,r,g,b), and I would like to render a few selected triangles in a different color (for example, render triangles 10000-10007 in white). Rendering only the mentioned part of the VBO isn't a problem, but it would get rendered in its original color.
A few solutions come to mind, but they all sound VERY stupid:
Change the VBO part. Too much work and certainly not efficient (I would have to read and re-interpret bytes, store them, overwrite, render, and restore).
Add a new uniform selectedColor and check in each fragment whether it is not black (a useless condition executed 100k times).
As above, but add a whole new shader just for this simple task, avoiding the condition (with so many shaders, one will get lost soon).
Any idea how to achieve such a "simple" task?
Why not just adjust your current shader? Just add a uniform variable for the color, and upload a new value when rendering the highlighted part of the VBO.
If your not-highlighted part of the VBO uses some computation to get the color, you might want to think of a way to incorporate the uniform such that it has an "identity" value. Meaning if you put in said identity value, you'd get the original color. An example would be to multiply the computed color by the alpha component of the uniform, then add the rgb part of the uniform: computedColor * uniform.a + uniform.rgb. If you put in the value (0,0,0,1) you'd get the computedColor. If you put in the value (r,g,b,0), you'd get the uniform color, and you can even 'blend' the two.
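A minimal sketch of that idea, assuming the per-vertex color comes straight from the VBO (uniform and attribute names are placeholders):

```glsl
#version 330 core

// uHighlight = (0,0,0,1) leaves the computed color unchanged (identity value);
// uHighlight = (r,g,b,0) replaces it with a flat highlight color.
uniform vec4 uHighlight;

in vec3 vColor;           // per-vertex color interpolated from the VBO
out vec4 fragColor;

void main()
{
    vec3 computedColor = vColor;  // or whatever the shader normally computes
    vec3 finalColor = computedColor * uHighlight.a + uHighlight.rgb;
    fragColor = vec4(finalColor, 1.0);
}
```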
About the other suggested solutions. Editing the VBO indeed sounds stupid, I'd only ever do that if your data dynamically changes, for things like soft meshes or maybe particles, but even then there's probably a better way using transform feedbacks. Similar to my suggestion, but not quite, adding a new shader sounds unnecessary. The context switch of making the new program active is rather expensive, probably more so than the simple uniform example I suggested above (very very probably).
My point being that another solution would be to find some mathematical formula and exploit it so it can behave as both the computed color and your replacement color (or replacement formula). This would eliminate the need to put expensive branches or similar into your shaders. Nor would you need to do expensive buffer editing or program switching.
Ideally I would draw another polygon over the selected ones; in this case you can even add some blending to it to give a better effect.
You could also generate a small 1D texture and send in one texture coordinate per vertex. Then you can change the texture data to modify the colors.
Depends on how much control you need, I guess.

GLSL Shaders: blending, primitive-specific behavior, and discarding a vertex

Criteria: I’m using OpenGL with shaders (GLSL) and trying to stay with modern techniques (e.g., trying to stay away from deprecated concepts).
My questions, in a very general sense (see below for more detail), are as follows:
Do shaders allow you to do custom blending that help eliminate z-order transparency issues found when using GL_BLEND?
Is there a way for a shader to know what type of primitive is being drawn without “manually” passing it some sort of flag?
Is there a way for a shader to “ignore” or “discard” a vertex (especially when drawing points)?
Background: My application draws points connected with lines in an ortho projection (vertices have varying depth in the projection). I’ve only recently started using shaders in the project (trying to get away from deprecated concepts). I understand that standard blending has ordering issues with alpha testing and depth testing: basically, if a “translucent” pixel at a higher z level is drawn first (thus blending with whatever colors were already drawn to that pixel at a lower z level), and an opaque object is then drawn at that pixel but at a lower z level, depth testing prevents changing the pixel that was already drawn for the “higher” z level, thus causing blending issues. To overcome this, you need to draw opaque items first, then translucent items in ascending z order. My gut feeling is that shaders wouldn’t provide an (efficient) way to change this behavior; am I wrong?
Further, for speed and convenience, I pass information for each vertex (along with a couple of uniform variables) to the shaders, and they use the information to find a subset of the vertices that need special attention. Without doing a similar set of logic in the app itself (and slowing things down), I can’t know a priori what subset of vertices that is. Thus I send all vertices to the shader. However, when I draw “points” I’d like the shader to ignore all the vertices that aren’t in the subset it determines. I think I can get the effect by setting alpha to zero and using an alpha function in the GL context that will prevent drawing anything with alpha less than, say, 0.01. However, is there a better or more “correct” GLSL way for a shader to say “just ignore this vertex”?
Do shaders allow you to do custom blending that help eliminate z-order transparency issues found when using GL_BLEND?
Sort of. If you have access to GL 4.x-class hardware (Radeon HD 5xxx or better, or GeForce 4xx or better), then you can perform order-independent transparency. Earlier versions have techniques like depth peeling, but they're quite expensive.
The GL 4.x-class version uses essentially a series of "linked lists" of transparent samples, which you do a full-screen pass to resolve into the final sample color. It's not free of course, but it isn't as expensive as other OIT methods. How expensive it would be for your case is uncertain; it is proportional to how many overlapping pixels you have.
You still have to draw opaque stuff first, and you have to draw transparent stuff using special shader code.
Is there a way for a shader to know what type of primitive is being drawn without “manually” passing it some sort of flag?
No.
Is there a way for a shader to “ignore” or “discard” a vertex (especially when drawing points)?
No in general, but yes for points. A Geometry shader can conditionally emit vertices, thus allowing you to discard any vertex for arbitrary reasons.
Discarding a vertex in non-point primitives is possible, but it will also affect the interpretation of that primitive. The reason it's simple for points is because a vertex is a primitive, while a vertex in a triangle isn't a whole primitive. You can discard lines, but discarding a vertex within a line is... of dubious value.
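For points, a pass-through geometry shader that conditionally emits could look like this sketch (the vKeep flag is an assumed per-vertex attribute forwarded from the vertex shader):

```glsl
#version 330 core
layout(points) in;
layout(points, max_vertices = 1) out;

// Per-vertex "keep" flag passed through from the vertex shader (assumption).
in float vKeep[];

void main()
{
    if (vKeep[0] > 0.5) {       // only emit points that pass the test
        gl_Position = gl_in[0].gl_Position;
        EmitVertex();
        EndPrimitive();
    }
    // otherwise emit nothing: the point is effectively discarded
}
```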
That being said, your explanation of why you want to do this is of dubious merit. You want to update your vertex data with what is essentially a boolean value that says "do stuff with me" or not. That means that, every frame, you have to modify your data to say which points should be rendered and which shouldn't.
The simplest and most efficient way to do this is to simply not render with them. That is, arrange your data so that the only thing on the GPU are the points you want to render. Thus, there's no need to do anything special at all. If you're going to be constantly updating your vertex data, then you're already condemned to dealing with streaming vertex data. So you may as well stream it in a way that makes rendering efficient.