How do I write a dithering transparency shader with a grade value? - glsl

I need to write a GLSL shader that uses dithering for transparency.
I also need to be able to give it a grade of transparency,
so 0.1 is very transparent and 0.9 almost opaque.
But I don't know how I would write it.
I know how to write a 0.5-transparency dithering shader, but I don't know how to write one that accepts an argument telling it how transparent it has to be.
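For reference, one common way to do this is ordered (Bayer) dithering: derive a per-pixel threshold from a Bayer matrix and discard the fragment whenever the desired opacity falls below that threshold. With a 4x4 matrix, a grade of 0.5 keeps half the pixels, 0.1 keeps roughly 10%, and so on. A minimal fragment shader sketch (the uniform names uAlpha and uColor are illustrative):

    #version 330 core

    uniform float uAlpha;  // transparency grade: 0.1 = very transparent, 0.9 = almost opaque
    uniform vec3  uColor;

    out vec4 fragColor;

    // 4x4 Bayer matrix; entries 0..15 yield 16 evenly spaced thresholds.
    const float bayer[16] = float[16](
         0.0,  8.0,  2.0, 10.0,
        12.0,  4.0, 14.0,  6.0,
         3.0, 11.0,  1.0,  9.0,
        15.0,  7.0, 13.0,  5.0);

    void main()
    {
        ivec2 p = ivec2(gl_FragCoord.xy) % 4;
        float threshold = (bayer[p.y * 4 + p.x] + 0.5) / 16.0;
        if (uAlpha < threshold)
            discard;                    // this pixel stays fully transparent
        fragColor = vec4(uColor, 1.0);  // this pixel is drawn fully opaque
    }

A larger (e.g. 8x8) Bayer matrix gives finer gradations of the grade, at the cost of a larger visible pattern.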

Related

Any way to obtain percentage coverage of fragment (pixel) by primitive in hlsl/glsl fragment shader?

When the rasterizer processes a primitive, it splits it into a collection of fragments (pixels). Then the fragment shader is called for every resulting fragment. Is there any way for me to have an additional float parameter in my fragment shader that stores information about how much of the exact pixel is covered by the source primitive? This should have a non-trivial value from 0-1 on triangle border pixels. Obviously it will be 1 on every pixel "inside" the triangle.
I want the rasterizer to calculate and pass this value for me.
I thought "conservative rasterization" could help with that, but as I understand it is used for slightly different tasks (mostly for collision detection).
Also, as I understand it, there is no built-in method to do that. Maybe I can change the rasterizer's behavior to do this? Is it possible?
When rendering to a multisampled framebuffer, you can look at the gl_SampleMaskIn[] bitmask array in the fragment shader to detect how many samples will be covered by the current fragment. This is about as close as you're going to get, and it's not great for what you want.
Obviously, it has the limitation of having the same granularity as the sample locations within a pixel. But the full mask may also cover fewer than the number of samples in the framebuffer. If the renderer decides to generate multiple fragments per pixel during multisample rasterization, the sample mask that any such fragment gets will only contain the samples that this particular fragment will write.
So if you have a 16-sample multisample framebuffer, the implementation may generate 4 fragments per-pixel, each covering a distinct set of 4 samples. So the sample bitmask for a fragment will never have more than 4 bits, even though you asked for 16x multisample rendering. And there's basically nothing you can do to detect if this is happening (outside of doing tests on specific hardware). All of this is implementation-defined.
Basically, what you want isn't really available; gl_SampleMask is the closest you can get, and how useful it is will be very implementation-dependent.
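For completeness, a minimal sketch of reading such a coverage estimate (GLSL 4.00+, and subject to all of the caveats above):

    #version 400 core

    out vec4 fragColor;

    void main()
    {
        // Count this fragment's covered samples (assuming at most 32
        // samples, so a single mask word is enough). As noted above, this
        // may only see this particular fragment's share of the samples.
        int covered = bitCount(gl_SampleMaskIn[0]);
        float coverage = float(covered) / float(gl_NumSamples);

        fragColor = vec4(vec3(coverage), 1.0); // visualize the estimate
    }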
Maybe one could use GL_POLYGON_SMOOTH somehow for this, since as far as I understand it does exactly this: it calculates the coverage of the current fragment and then modulates the fragment's alpha based on it.

Multisampling, how to read back "unique" texels

I am looking at how I am going to implement antialiasing in a deferred lighting renderer. So three passes: a geometry pass, a lighting accumulation pass, and then a 2nd geometry pass for shading.
With normal multisampling, MSAA, the goal is to only multisample pixels on polygon edges, and for each triangle only write the result of the fragment shader to the subpixels it covers. But of course it is a known problem that this is a little problematic with deferred lighting.
The goal is to avoid evaluating all the subpixels in the 2nd and 3rd passes, since that would basically be supersampling. If anybody knows another (better/possible) way of achieving that, I would very much like to hear it. But here is my idea:
If you can make the fragment shader in the first pass write only to the first subpixel the triangle covers, you can ignore unwritten texels in the lighting pass. And then finally, in the 2nd geometry pass, somehow read back only the first subpixel that the triangle covers, which is the one we wrote to originally and then did lighting for (and now write to all of the covered texels as normal so the result can be resolved). This way only the "unique" texels will be evaluated in the 2nd and 3rd passes.
Can somebody say how this can be done in GLSL (or confirm it is not possible)? I do not really see a reason why this would theoretically be impossible, but I also do not see any way to do it in GLSL.
For a moment, I'm going to ignore the goal of your question and instead focus on the specific request:
Can you write to the "first" sample only from a fragment shader?
Yes. What you have to do is have your fragment shader declare an input integer array using the decoration SampleMask (or, in GLSL parlance, use gl_SampleMaskIn, an array of signed integers). You would then iterate through this array bit-by-bit, to find the first bit that is set.
This bit is the "first sample". So you then declare an output integer array using the decoration SampleMask (in GLSL parlance, gl_SampleMask, an array of signed integers). You set the "first sample" bit to 1 and all others to zero.
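A minimal GLSL sketch of that bit manipulation (assuming at most 32 samples, so only the first mask word matters; the output color is a placeholder):

    #version 400 core

    out vec4 fragColor;

    void main()
    {
        // findLSB returns the index of the lowest set coverage bit, i.e.
        // the "first" sample covered by this fragment (-1 if none).
        int first = findLSB(gl_SampleMaskIn[0]);

        // Restrict the written coverage to that single sample.
        gl_SampleMask[0] = (first >= 0) ? (1 << first) : 0;

        fragColor = vec4(1.0); // whatever this pass normally outputs
    }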
Can you know what the "first sample" that was written is for a particular pixel in a multisample image?
Not unless you write that data to some other piece of memory, like an SSBO or something. The multisample image does not know which samples have been written to, so it has no way to know which is first.
And even if you could:
Your whole idea will not work.
Multisampling is just supersampling based on a single simplifying assumption. Namely, that it is OK to give all of the samples generated by a triangle the same per-fragment values (except for depth). In all other respects, it is just supersampling: adding more samples per-pixel.
If two triangles overlap, then your "first sample" approach is meaningless. Why? Because there are two "first samples": the first sample from triangle 1 and the first sample from triangle 2. And triangle 2 may have overwritten the "first sample" from triangle 1.
Even if there was no overwriting of a first sample, you still don't know how many samples each triangle contributed. If one triangle contributed the right 50% of the pixel's samples, and an overlapping triangle contributed the bottom 50% of the pixel's samples, then you should only get 25% of the first triangle's contribution. How do you know to do that with your method?

OpenGL: how to draw using unsigned int vertex data

I'm doing a simple OpenGL program that involves rendering to a depth texture offscreen. However, I'm dealing with large depths that exceed the precision a float can represent. As a result I need to use unsigned int for drawing my points. I run into two issues when I try to implement this.
1) Whenever I attempt to draw a VBO that uses unsigned int (screen coordinates), the values don't fall within the -1 to 1 range, so nothing draws to the screen. The only fix I can find is to use an orthographic projection matrix to map them into screen coordinates.
Is this understanding correct, or is there an easier way to do it?
If it is correct, how do I properly implement it for what I want?
2) Secondly, when drawing this way, is there any way to preserve the initial values (not converting them to floats when drawing) so they are unchanged when read back again? This is necessary because my objective is to create a depth buffer of random points with random depths up to 2^32. If they get converted to floats, precision is lost, so the data read out is not the same as what was put in.
This is the wrong solution to the problem. To answer your question itself, gl_Position is a vec4. And therefore, the depth that OpenGL sees is a float. There's nothing you can do to change that, short of ignoring the depth buffer entirely and doing "depth tests" yourself in the fragment shader.
The preferred solution to the problem is to use a floating-point depth buffer. Using GL_DEPTH_COMPONENT_32F or something of the kind. But that alone is insufficient, due to an unfortunate legacy issue with how OpenGL defines its coordinate transforms. See, floats put a lot of precision into the range [0, 1], but it's biased closer to zero. But because of the way OpenGL defines its transforms, that precision gets lost along the way; effectively, the exponent part of the float never gets used. It makes a 32-bit float seem like a 24-bit fixed-point value.
OpenGL has fixed that problem with ARB_clip_control, which restores the ability to use full 32-bit floats effectively. You should attempt to employ that if possible.
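If you do go the manual route instead, a sketch of a fragment shader doing its own integer "depth test" could look like the following, assuming points carrying a flat 32-bit depth and an r32ui image that the application clears to 0xFFFFFFFF before the pass (names are illustrative):

    #version 430 core

    layout(binding = 0, r32ui) uniform coherent uimage2D uDepthImage;

    flat in uint vDepth; // full 32-bit depth passed through from the vertex shader

    void main()
    {
        // Manual "less" depth test: keep the smallest depth written to
        // this pixel; the atomic handles overlapping fragments.
        imageAtomicMin(uDepthImage, ivec2(gl_FragCoord.xy), vDepth);
    }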

Access a different fragment in the fragment shader (OpenGL)

Can I access and change the output values of another fragment at a certain location in the fragment shader?
For example, in main() I process everything just like usual and output the color with some value. But in addition to that, I also want the fragment at position vec3(5,3,6) (in world coordinates) to have the same color.
I already did some research on the web about this. The OpenGL site says the fragment shader has one fragment as input and one fragment as output, which doesn't sound very promising.
I also know that all fragments are processed in parallel. But maybe it is possible to say: if the fragment at this position has not been processed yet, write this color to it and mark that fragment as already processed.
Maybe someone can explain whether this is possible somehow, and if not, why it is not a good idea. My best guess is that building this logic into the shader would have a very bad effect on general performance.
Maybe someone can explain whether this is possible somehow, and if not, why it is not a good idea.
It's not a question of bad idea vs. good idea. It's simply not possible.
The closest you can get to this functionality is ARB_fragment_shader_interlock. Through its interlock and ordering guarantees, it allows limited interoperation. And that limitation is... it only allows interoperation for fragments that cover the same pixel/sample.
So even this functionality does not allow you to write to some other pixel.
The absolute best you can do is use SSBOs and atomic counters to have fragment shaders write what color values and "world coordinates" they would like to write to, then have a second process execute that buffer as either a rendering command or a compute shader to actually write that data.
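A minimal sketch of the fragment-shader side of such a scheme (the buffer layout is illustrative; the application must allocate the buffer large enough and reset count each frame, and a later pass consumes the queue):

    #version 430 core

    struct PendingWrite {
        vec4 color;
        vec4 target; // desired world-space position, w unused
    };

    layout(std430, binding = 0) buffer WriteQueue {
        uint count;
        PendingWrite writes[];
    };

    out vec4 fragColor;

    void main()
    {
        // Reserve a slot and record the write this fragment wants done.
        uint slot = atomicAdd(count, 1u);
        writes[slot].color  = vec4(1.0, 0.0, 0.0, 1.0);
        writes[slot].target = vec4(5.0, 3.0, 6.0, 0.0);

        fragColor = vec4(1.0); // normal output for this fragment
    }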
As already pointed out in Nicol's answer, you can't write to additional fragments of a framebuffer surface in the fragment shader.
The description of your use case is not clear enough to tell what might work best. In the interest of brainstorming, the most direct approach that comes to mind is that you don't use a framebuffer draw surface at all, but output to an image instead.
If you bind a texture as an image, you can write to it in the fragment shader using the imageStore() built-in function. This function takes coordinates as one of the argument, so you can write to any pixel you want, as well as write multiple pixels from the same shader invocation.
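A minimal sketch, assuming an rgba8 image bound to image unit 0 (a world-space position like vec3(5,3,6) would first have to be projected to pixel coordinates; the fixed target here is purely illustrative):

    #version 420 core

    layout(binding = 0, rgba8) uniform writeonly image2D uDestImage;

    void main()
    {
        vec4 color = vec4(1.0, 0.5, 0.0, 1.0);

        // Write this fragment's own pixel to the image...
        imageStore(uDestImage, ivec2(gl_FragCoord.xy), color);

        // ...and any other pixel as well. Note that concurrent writes to
        // the same texel from different invocations are unordered.
        imageStore(uDestImage, ivec2(120, 80), color);
    }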
Depending on what exactly you want to achieve, I could also imagine a hybrid approach, where your primary rendering still goes to a framebuffer, but you write additional pixel values to an image at the desired positions. Then, in a second rendering pass, you can combine the content of the image with the primary rendering. The combination could be done with blending if the math/logic is simple enough. If you need a more complex combination, you can use a texture as the framebuffer attachment of the initial pass, and then use the result of the rendering and the extra image as two inputs for the fragment shader of the combination pass.

Defining a custom Blend Function (OpenGL)

To implement physically accurate motion blur by actually rendering the object at intermediate locations, it seems I need a special blending function. Additive blending would only work on a black background, and the standard "transparency" function (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) may look okay for small numbers of samples, but it is physically inaccurate because samples rendered later contribute more to the resulting color.
The function I need has to produce a color which is the weighted average of the original and destination colors, depending on the number of samples covering a fragment. I can generalize this to better account for rendering differences between samples: suppose I am to render a blurred object n times. Treating color as a 3-vector, let D be the color DEST - SRC. I want each render to add D/n to the source color.
Can this be done using the fixed-function pipeline? The glBlendFunc reference is rather cryptic, at least to me; it seems like this is either trivial or impossible. It seems like I would want to set alpha to 1/n. For the behavior I just described, am I in need of a GL_DEST_MINUS_SRC_COLOR option?
I also have a related question: at which stage does this blending operation occur, before or after the fragment shader program? Would I be able to access the source and destination colors in a fragment shader?
I know that one way to accomplish what I want is by using an accumulation buffer. I do not want to do this because it is a waste of memory and fillrate.
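For what it's worth, blending is fixed-function and runs after the fragment shader, which is also why the shader cannot read the destination color (barring extensions). And the asker's own guess already gives a weighted-average update without any special blend factor: with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) and alpha = 1/n, the hardware computes src/n + dst * (1 - 1/n), i.e. it moves the buffer 1/n of the way toward the new sample. A minimal sketch (uniform names are illustrative):

    #version 330 core

    uniform vec3  uColor;  // this motion-blur sample's color
    uniform float uWeight; // 1.0/n, or 1.0/k on the k-th pass (see below)

    out vec4 fragColor;

    // Assumed application-side blend state:
    //   glEnable(GL_BLEND);
    //   glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    // The fixed-function blend then computes:
    //   result = src * a + dst * (1 - a) = dst + (src - dst) * a

    void main()
    {
        fragColor = vec4(uColor, uWeight);
    }

With a fixed weight of 1/n the passes still do not contribute equally; setting the weight to 1/k on the k-th of n passes instead keeps the buffer an exact running average of the samples rendered so far, which avoids the late-samples-contribute-more problem described above.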
The solution I ended up using to implement my effect is a combination of additive blending and a render target that I access as a texture from the fragment shader.