How can I simulate "Glow Dodge" blending by using OpenGL Shader? - opengl

I wanted to simulate the 'glow dodge' effect from Clip Studio using OpenGL and its shaders.
I found out that the following equation is how 'glow dodge' works:
final.rgb = dest.rgb / (1 - source.rgb)
Then I came up with two ways to actually do it, but neither one seems to work.
The first one was to calculate 1 / (1 - source.rgb) in the shader, and do multiply blending by using glBlendFunc(GL_ZERO, GL_SRC_COLOR) or glBlendFunc(GL_DST_COLOR, GL_ZERO).
But as the Khronos page says, all scale factors have range 0 to 1, which means I can't multiply by numbers over 1. So I can't use this method, because in most cases the factor goes above 1.
The second one was to read back the background texels using glReadPixels(), calculate everything in the shader, and then do additive blending by using glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
Regardless of the outcome I could get, glReadPixels() itself takes way too much time, even with a 30 x 30 texel area, so I can't use this method either.
I wonder if there's any other way to get the result that the 'glow dodge' blending mode should produce.

With the extension EXT_shader_framebuffer_fetch a value from the framebuffer can be read:
This extension provides a mechanism whereby a fragment shader may read existing framebuffer data as input. This can be used to implement compositing operations that would have been inconvenient or impossible with fixed-function blending. It can also be used to apply a function to the framebuffer color, by writing a shader which uses the existing framebuffer color as its only input.
The extension is available for desktop OpenGL and OpenGL ES. An OpenGL extension has to be enabled (see Core Language (GLSL) - Extensions). The fragment color can be retrieved through the built-in variable gl_LastFragData, e.g.:
#version 400
#extension GL_EXT_shader_framebuffer_fetch : require

out vec4 fragColor;
// [...]

void main()
{
    // [...]
    fragColor = vec4(gl_LastFragData[0].rgb / (1.0 - source.rgb), 1.0);
}
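For reference, the blend equation can be checked on the CPU. A minimal Python sketch (the function name and the clamping are assumptions, mirroring what a write to a normalized framebuffer would do; it also shows why fixed-function multiply blending can't express this, since the effective factor 1 / (1 - s) exceeds 1):

```python
def glow_dodge(dest, source, eps=1e-6):
    """Simulate final = dest / (1 - source) per channel, clamped to [0, 1]."""
    return tuple(
        min(1.0, d / max(1.0 - s, eps))  # clamp like a normalized framebuffer write
        for d, s in zip(dest, source)
    )

# A mid-grey destination with a mid-grey source already saturates:
print(glow_dodge((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)))  # -> (1.0, 1.0, 1.0)
```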

Related

Does dual-source blending require larger color buffer?

In OpenGL, we can turn on dual-source blending through the following code in the fragment shader:
layout(location = 0, index = 0) out vec4 color1;
layout(location = 0, index = 1) out vec4 color2;
and get color2 in the blend functions through the XX_SRC1_XX tokens. I have these questions:
If I want to do off-screen rendering, do I need to double the texture storage's size, since there are two color outputs?
Is it that once I turn on dual-source blending I can then only output two colors to one buffer? That would mean I cannot bind more than one color buffer by attaching them to GL_COLOR_ATTACHMENTi tokens.
Is the qualifier 'index' here only used for dual-source blending purposes?
Dual-source blending means blending involving two color outputs from the FS. That has nothing to do with the nature of the images being written to (only the number). Location 0 still refers to the 0th index in the glDrawBuffers array, and the attachment works just as it did previously.
You are outputting two colors. But only one value (the result of the blend operation) gets written.
And it means that I cannot bind more than one color buffers through attaching them to GL_COLOR_ATTACHMENTi tokens.
Well, that depends on the number of attached images allowed when doing dual-source blending, as specified by your implementation. Granted, all known implementations set this to 1. Even Vulkan implementations all use 1 (or 0 if they don't support dual-source blending); the only ones that don't seem to be either early, broken implementations or software renderers.
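The "two outputs in, one value written" behaviour can be illustrated on the CPU. A minimal Python sketch (hypothetical names, assuming glBlendFunc(GL_SRC1_COLOR, GL_ONE_MINUS_SRC1_COLOR); the real blend runs in fixed-function hardware):

```python
def dual_source_blend(src0, src1, dst):
    """Simulate glBlendFunc(GL_SRC1_COLOR, GL_ONE_MINUS_SRC1_COLOR):
    both shader outputs feed the blend equation, but only the single
    blended result is written to the one attached color buffer."""
    return tuple(c0 * c1 + d * (1.0 - c1) for c0, c1, d in zip(src0, src1, dst))

# Per-channel coverage (e.g. subpixel text rendering): src1 acts as a
# separate blend weight for each channel.
print(dual_source_blend((1.0, 1.0, 1.0), (1.0, 0.5, 0.0), (0.0, 0.0, 0.0)))
# -> (1.0, 0.5, 0.0)
```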

Using 'discard' in GLSL 4.1 fragment shader with multisampling

I'm attempting depth peeling with multisampling enabled, and having some issues with incorrect data ending up in my transparent layers. I use the following to check if a sample (originally a fragment) is valid for this pass:
float depth = texelFetch(depthMinima, ivec2(gl_FragCoord.xy), gl_SampleID).r;
if (gl_FragCoord.z <= depth)
{
discard;
}
Where depthMinima is defined as
uniform sampler2DMS depthMinima;
I have enabled GL_SAMPLE_SHADING which, if I understand correctly, should result in the fragment shader being called on a per-sample basis. If this isn't the case, is there a way I can get this to happen?
The result is that the first layer or two look right, but beneath that (and I'm doing 8 layers) I start getting junk values - mostly plain blue, sometimes values from previous layers.
This works fine for single-sampled buffers, but not for multi-sampled buffers. Does the discard keyword still discard the entire fragment?
I have enabled GL_SAMPLE_SHADING which, if I understand correctly, should result in the fragment shader being called on a per-sample basis.
It's not enough to only enable GL_SAMPLE_SHADING. You also need to set:
glMinSampleShading(1.0f);
A value of 1.0 indicates that each sample in the framebuffer should be independently shaded. A value of 0.0 effectively allows the GL to ignore sample rate shading. Any value between 0.0 and 1.0 allows the GL to shade only a subset of the total samples within each covered fragment. Which samples are shaded and the algorithm used to select that subset of the fragment's samples is implementation dependent.
– glMinSampleShading
In other words 1.0 tells it to shade all samples. 0.5 tells it to shade at least half the samples.
// Check the current value
GLfloat value;
glGetFloatv(GL_MIN_SAMPLE_SHADING_VALUE, &value);
If either GL_MULTISAMPLE or GL_SAMPLE_SHADING is disabled then sample shading has no effect.
There'll be multiple fragment shader invocations for each fragment, with each invocation covering a subset of the fragment's samples. In other words, sample shading specifies the minimum number of samples to process for each fragment.
If GL_MIN_SAMPLE_SHADING_VALUE is set to 1.0 then there'll be issued a fragment shader invocation for each sample (within the primitive).
If it's set to 0.5 then there'll be a shader invocation for every second sample.
max(ceil(MIN_SAMPLE_SHADING_VALUE * SAMPLES), 1)
Each being evaluated at their sample location (gl_SamplePosition).
With gl_SampleID being the index of the sample that is currently being processed.
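The invocation count follows the formula above. A small Python illustration (hypothetical helper name):

```python
import math

def min_shaded_samples(min_sample_shading_value, samples):
    """Minimum fragment shader invocations per covered fragment,
    per the max(ceil(MIN_SAMPLE_SHADING_VALUE * SAMPLES), 1) rule."""
    return max(math.ceil(min_sample_shading_value * samples), 1)

print(min_shaded_samples(1.0, 8))  # every sample shaded -> 8
print(min_shaded_samples(0.5, 8))  # at least half -> 4
print(min_shaded_samples(0.0, 8))  # sample shading effectively off -> 1
```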
Should discard work on a per-sample basis, or does it still only work per-fragment?
With or without sample shading, discard still only terminates a single invocation of the shader.
Resources:
ARB_sample_shading
Fragment Shader
Per-Sample Processing
I faced a similar problem when using depth peeling on a multi-sample buffer.
Some artifacts appear due to depth test errors when comparing a multi-sample depth texture from the previous peel with the current fragment depth:
vec4 previous_peel_depth_tex = texelFetch(previous_peel_depth, coord, 0);
The third argument is the sample you want to use for the comparison, which will give a different value than the one at the fragment center. Like the author said, you can use gl_SampleID:
vec4 previous_peel_depth_tex = texelFetch(previous_peel_depth, ivec2(gl_FragCoord.xy), gl_SampleID);
This solved my problem, but with a huge performance drop: if you have 4 samples you will run your fragment shader 4 times, and with 4 peels that means 4x4 calls. You don't need to set the OpenGL flags as long as at least glEnable(GL_MULTISAMPLE); is on:
Any static use of [gl_SampleID] in a fragment shader causes the entire shader to be evaluated per-sample.
I decided to use a different approach and to add a bias when doing the depth comparison
float previous_linearized = linearize_depth(previous_peel_depth_tex.r, near, far);
float current_linearized = linearize_depth(gl_FragCoord.z, near, far);
float delta_depth = abs(current_linearized - previous_linearized);
float bias_meter = 0.05;
bool belong_to_previous_peel = delta_depth < bias_meter;
This solves my problem, but some artifacts might still appear, and you need to adjust your bias to your eye-space units (meters, cm, ...).
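The comparison depends on linearizing the depth values first. A Python sketch of a typical linearize_depth for a standard perspective projection with a [0,1] depth buffer (an assumption; the GLSL helper isn't shown in the answer):

```python
def linearize_depth(depth, near, far):
    """Convert a [0,1] perspective depth-buffer value to eye-space distance."""
    z_ndc = 2.0 * depth - 1.0  # back to NDC [-1, 1]
    return 2.0 * near * far / (far + near - z_ndc * (far - near))

def belongs_to_previous_peel(prev_depth, cur_depth, near, far, bias=0.05):
    """Compare the two depths in eye-space units with a small bias."""
    delta = abs(linearize_depth(cur_depth, near, far)
                - linearize_depth(prev_depth, near, far))
    return delta < bias

print(linearize_depth(0.0, 0.1, 100.0))  # ~0.1 (near plane)
print(linearize_depth(1.0, 0.1, 100.0))  # ~100.0 (far plane)
```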

Get current fragment color

Hey, I currently have a system in OpenGL that uses glBlendFunc for blending different shaders, but I would like to do something like this:
fragColor = currentColor * lightAmount;
I tried to use gl_Color but it's deprecated and my engine will not let me use it.
According to this document there is no built-in access for the fragment color in the fragment shader.
What you could do is render your previous passes in another textures, send those textures to the GPU (as uniforms) and do the blending in your last pass.
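To sketch the idea: pass N samples the texture holding pass N-1's output and scales it in the shader, instead of relying on glBlendFunc. A hypothetical per-pixel version of that last pass in Python (the texture is modeled as a nested list):

```python
def light_pass(previous_pass, light_amount):
    """fragColor = currentColor * lightAmount, with currentColor sampled
    from the texture the previous pass rendered into."""
    return [[tuple(c * light_amount for c in texel) for texel in row]
            for row in previous_pass]

prev = [[(0.8, 0.4, 0.2)]]    # 1x1 'texture' from the previous pass
print(light_pass(prev, 0.5))  # -> [[(0.4, 0.2, 0.1)]]
```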

Draw a geometric object and texture in different coordinates using same shader in Opengl (GLSL)

I wonder if there is a nice (or at least any) way to draw some geometric shape and a texture using the same shader program in OpenGL 2 (or maybe higher).
I saw this example in a book for a fragment shader (as an example of how the glTexEnvi function from OpenGL 1 can be replaced in OpenGL >= 2):
precision mediump float;
uniform sampler2D s_tex0;
varying vec2 v_texCoord;
varying vec4 v_primaryColor;
void main()
{
gl_FragColor = texture2D(s_tex0, v_texCoord) * v_primaryColor;
}
Though it is very hard for me to guess the vertex shader, if I want to draw the texture and some geometry at different coordinates (possibly intersecting in some place).
Does anybody have an idea?
There has to be a way. It will just make some things (for example different blendings) so much easier to do.
P.S. I had an idea of using a "switcher" in the vertex shader to pass different coordinates depending on whether it is in a "1" or "0" state, but for some reason it didn't work out. Hope you know a better solution.
I'll just leave it here.
Though I still don't know the possible vertex shader for the question above, I was lucky enough to solve my subgoal a harder way, using blending.
It turned out that blending with the constants GL_ONE_MINUS_DST_ALPHA and GL_DST_ALPHA didn't work as expected (when the destination is the rendered geometry), because the alpha channel for pixels was "turned off" by default (you could still use the alpha channel from an image), so you have to "turn it on" to make blending with these constants work properly.
In Android Studio (and Java overall) it is possible to do this using the setEGLConfigChooser function.

Texture lookup into rendered FBO is off by half a pixel

I have a scene that is rendered to texture via FBO and I am sampling it from a fragment shader, drawing regions of it using primitives rather than drawing a full-screen quad: I'm conserving resources by only generating the fragments I'll need.
To test this, I am issuing the exact same geometry as my texture-render, which means that the rasterization pattern produced should be exactly the same: When my fragment shader looks up its texture with the varying coordinate it was given it should match up perfectly with the other values it was given.
Here's how I'm giving my fragment shader the coordinates to auto-texture the geometry with my fullscreen texture:
// Vertex shader
uniform mat4 proj_modelview_mat;
in vec2 in_pos;
out vec2 f_sceneCoord;
void main(void) {
gl_Position = proj_modelview_mat * vec4(in_pos,0.0,1.0);
f_sceneCoord = (gl_Position.xy + vec2(1,1)) * 0.5;
}
I'm working in 2D so I didn't concern myself with the perspective divide here. I just set the sceneCoord value using the clip-space position scaled back from [-1,1] to [0,1].
uniform sampler2D scene;
in vec2 f_sceneCoord;
//in vec4 gl_FragCoord;
in float f_alpha;
out vec4 out_fragColor;
void main (void) {
//vec4 color = texelFetch(scene,ivec2(gl_FragCoord.xy - vec2(0.5,0.5)),0);
vec4 color = texture(scene,f_sceneCoord);
if (color.a == f_alpha) {
out_fragColor = vec4(color.rgb,1);
} else
out_fragColor = vec4(1,0,0,1);
}
Notice I spit out a red fragment if my alphas don't match up. The texture render sets the alpha for each rendered object to a specific index so I know what matches up with what.
Sorry I don't have a picture to show but it's very clear that my pixels are off by (0.5,0.5): I get a thin, one pixel red border around my objects, on their bottom and left sides, that pops in and out. It's quite "transient" looking. The giveaway is that it only shows up on the bottom and left sides of objects.
Notice I have a line commented out which uses texelFetch: This method works, and I no longer get my red fragments showing up. However I'd like to get this working right with texture and normalized texture coordinates because I think more hardware will support that. Perhaps the real question is, is it possible to get this right without sending in my viewport resolution via a uniform? There's gotta be a way to avoid that!
Update: I tried shifting the texture access by half a pixel, a quarter of a pixel, one hundredth of a pixel; it all made it worse and produced a solid border of wrong values all around the edges. It seems like my (gl_Position.xy + vec2(1,1)) * 0.5 trick sets the right values, but sampling is just off by a little somehow. This is quite strange... See the red fragments? When objects are in motion they shimmer in and out ever so slightly. It means the alpha values I set aren't matching up perfectly on those pixels.
It's not critical for me to get pixel perfect accuracy for that alpha-index-check for my actual application but this behavior is just not what I expected.
Well, first consider dropping that f_sceneCoord varying and just using gl_FragCoord / screenSize as texture coordinate (you already have this in your example, but the -0.5 is rubbish), with screenSize being a uniform (maybe pre-divided). This should work almost exact, because by default gl_FragCoord is at the pixel center (meaning i+0.5) and OpenGL returns exact texel values when sampling the texture at the texel center ((i+0.5)/textureSize).
This may still introduce very, very slight deviations from exact texel values (if any) due to finite precision and such. But then again, you will likely want to use a filtering mode of GL_NEAREST for such one-to-one texture-to-screen mappings anyway. Actually your existing f_sceneCoord approach may already work well, and it's just those small rounding issues (prevented by GL_NEAREST) that create your artefacts. But then again, you still don't need that f_sceneCoord thing.
EDIT: Regarding the portability of texelFetch. That function was introduced with GLSL 1.30 (~SM4/GL3/DX10-hardware, ~GeForce 8), I think. But this version is already required by the new in/out syntax you're using (in contrast to the old varying/attribute syntax). So if you're not gonna change these, assuming texelFetch as given is absolutely no problem and might also be slightly faster than texture (which also requires GLSL 1.30, in contrast to the old texture2D), by circumventing filtering completely.
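The texel-center arithmetic can be verified numerically. A Python sketch of a GL_NEAREST-style lookup (hypothetical helper names, assuming the (i + 0.5) / size center convention described above):

```python
import math

def texel_center(i, size):
    """Normalized coordinate of texel i's center: (i + 0.5) / size."""
    return (i + 0.5) / size

def nearest_lookup(u, size):
    """Texel index a GL_NEAREST sample at normalized coordinate u resolves to."""
    return min(int(math.floor(u * size)), size - 1)  # clamp the u == 1.0 edge

# Sampling exactly at a texel center always returns that texel:
print(all(nearest_lookup(texel_center(i, 64), 64) == i for i in range(64)))  # True
```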
If you are working in perfect X,Y [0,1] with no rounding errors, that's great... But sometimes, especially if working with polar coords, you might consider aligning your calculated coords to the texture "grid"...
I use:
// align it to the nearest centered texel
curPt -= mod(curPt, (0.5 / vec2(imgW, imgH)));
works like a charm and I no longer get random rounding errors at the screen edges...