GLSL: correct gamma correction for premultiplied output (OpenGL)

I am performing gamma correction on linear input in a fragment shader, but in my case the output pixel is also premultiplied (for the sake of correct alpha blending). I am not sure what the correct order is in this case:
vec4 pixelOut = vec4(pow(color.xyz, vec3(1.0/2.2)) * alpha, alpha);
Or:
vec4 pixelOut = vec4(pow(color.xyz * alpha, vec3(1.0/2.2)), alpha);
Should I gamma correct the pixel color after it's premultiplied, or before?
Visually, I can't currently detect any difference.

Gamma correction should be the very last step in your shading pipeline; blending in sRGB space leads to incorrect results.
For a good explanation and discussion of this, have a look at Naty Hoffman's blog article Adventures with Gamma-Correct Rendering.
The correct thing to do is to perform blending like any other shading calculation – in linear space.
Therefore you should first premultiply with alpha and then take care of gamma correction.
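In code, that ordering is the second of the two candidates above; a minimal sketch, assuming color and alpha are both linear values:
// Premultiply while everything is still linear...
vec3 premultiplied = color.xyz * alpha;
// ...then encode to gamma space as the very last step.
vec4 pixelOut = vec4(pow(premultiplied, vec3(1.0/2.2)), alpha);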


OpenGL gamma correction with GL_FRAMEBUFFER_SRGB

I am writing Blinn-Phong lighting (following the LearnOpenGL tutorial) and trying to get gamma-corrected colors as the final result.
Before I started digging into gamma correction, my lighting looked like this:
Then I spent all day reading about the sRGB colorspace and what it means.
I came to the conclusion that gamma-corrected colors result in better lighting quality.
I don't want to bother with manually adjusting gamma in the fragment shader. I can let OpenGL do it by enabling GL_FRAMEBUFFER_SRGB and requesting an sRGB-capable framebuffer.(?)
This is with GL_FRAMEBUFFER_SRGB enabled and textures loaded in GL_RGB format:
This is the expected result: colors are brighter, because the sampled textures are gamma-corrected twice.(?)
This is with GL_FRAMEBUFFER_SRGB enabled and textures loaded in GL_SRGB format:
This is an unexpected result: everything is too dark, and no diffuse color is visible, only specular.
My question is:
To let OpenGL do gamma correction on its own (according to the LearnOpenGL tutorial), I need to:
• have a framebuffer with sRGB support;
• load textures with the sRGB flag.
What am I doing wrong, or am I missing something important?
Update: the diffuse texture is loaded in GL_SRGB format, while the specular map is in GL_RGB. In the last image it seems like the diffuse color is totally attenuated(?), while the specular is sampled as expected.
My attenuation function is:
float attenuation = max(1.0 - distance / light.radius, 0.0);
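For reference, here is a minimal sketch of how the fragment shader is expected to behave under this setup (the varying and uniform names are illustrative, not taken from the question):
#version 330 core
in vec2 vTexCoords;
out vec4 fragColor;
uniform sampler2D uDiffuse;  // internal format GL_SRGB: sampling returns linear values
uniform sampler2D uSpecular; // specular masks are usually authored linear, so plain GL_RGB

void main() {
    vec3 albedo = texture(uDiffuse, vTexCoords).rgb;  // already linearized by the hardware
    vec3 spec   = texture(uSpecular, vTexCoords).rgb; // linear as loaded
    vec3 lit    = albedo + spec;                      // lighting math omitted for brevity
    fragColor = vec4(lit, 1.0); // output linear; GL_FRAMEBUFFER_SRGB encodes on write
}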

OpenGL shader gamma on solid color

I want to implement gamma correction in my OpenGL 3D renderer. I understand that it's absolutely relevant for textures loaded in sRGB, so I do this:
vec4 texColor = texture(src_tex_unit0, texCoordVarying);
texColor = vec4(pow(texColor.rgb, vec3(2.2)), 1.0); // decode sRGB texture to linear
vec4 colorPreGamma = texColor * (vec4(ambient, 1.0) + vec4(diffuse, 1.0)); // light in linear space
fragColor = vec4(pow(colorPreGamma.rgb, vec3(1.0/gamma)), 1.0); // encode back to gamma space
But my question is about solid colors: what about when the surface of the 3D object I want lit is not textured, but just colored by a per-vertex RGB value? In this case, do I have to transform my color into linear space and, after the lighting operation, transform it back to gamma space like I do for a texture?
Does this also apply when my lights are colored?
In this case, do I have to transform my color into linear space, and after the lighting operation, transform back to gamma space like I do for a texture?
That depends: what colorspace are your colors in?
You're not doing this correction because of where they come from; you're doing it because of what the colors actually are. If the value is not linear, then you must linearize it before using it, regardless of where it comes from.
You are ultimately responsible for putting that color there, so you have to know whether that color is in the linear RGB or sRGB colorspace. And if the color is not linear, then you have to linearize it before you can get meaningful numbers from it.
In OpenGL there isn't a huge distinction between color data and other kinds of data: if you have a vec3 you can access components as .xyz or .rgb. It's all just data.
So ask yourself this: "Do I have to gamma correct my vertex positions in the vertex shader?"
Of course not, because your vertex positions are already in linear space. So if you are similarly setting your vertex colors in linear space, again no gamma correction is needed.
In other words, do you imagine vec3(0.5, 0.5, 0.5) as being a gray that is visually halfway between black and white? Then you need gamma correction.
Do you imagine it as being mathematically halfway between black and white (in terms of measurable light intensity)? Then it's already linear.
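If your colors were authored in sRGB (picked from an image editor's swatch, say), a minimal sketch of the fix, using a hypothetical srgbToLinear helper, looks like this:
// Approximate sRGB-to-linear conversion using the common gamma-2.2 shortcut.
vec3 srgbToLinear(vec3 c) {
    return pow(c, vec3(2.2));
}

// Before any lighting math:
vec3 linearColor = srgbToLinear(vertexColor); // vertexColor is an illustrative name
// ...light in linear space, then encode the final result once at the end:
// fragColor = vec4(pow(litColor, vec3(1.0/gamma)), 1.0);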

Why is my GLSL shader rendering a cleavage?

I'm working on a deferred lighting technique in 2D, using a frame buffer to accumulate light sources using the GL_MAX blend equation.
Here's what I get when rendering one light source (the geometry is a quad without a texture, I'm only using a fragment shader for colouring) to my buffer:
Which is exactly what I want: attenuation from the light source. However, when two light sources are near each other and overlap, they seem to produce a lower RGB value where they meet, like so:
Why is there a darker line between the two? I was expecting that with the GL_MAX blend equation they would smoothly blend into each other, using the maximal value of the fragments at each location.
Here's the setup for the FBO (using LibGDX):
Gdx.gl.glClearColor(0.14f, 0.14f, 0.19f, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
Gdx.gl.glBlendEquation(GLMAX_BLEND_EQUATION);
Gdx.gl.glBlendFunc(GL20.GL_SRC_COLOR, GL20.GL_DST_COLOR);
Gdx.gl.glEnable(GL20.GL_BLEND);
I don't think the call to glBlendFunc is actually necessary when using this equation. GLMAX_BLEND_EQUATION is set to 0x8008 (GL_MAX).
varying vec2 v_texCoords;
varying vec2 v_positionRelativeToLight;

uniform sampler2D u_texture;
uniform vec3 u_lightPosition;
uniform vec3 u_lightColor;

void main() {
    float distanceToLight = length(v_positionRelativeToLight);
    float falloffVarA = 0.1;
    float falloffVarB = 1.0;
    float attenuation = 1.0 / (1.0 + (falloffVarA * distanceToLight) + (falloffVarB * distanceToLight * distanceToLight));
    float minDistanceOrAttenuation = min(attenuation, 1.0 - distanceToLight);
    float combined = minDistanceOrAttenuation * attenuation;
    gl_FragColor = vec4(combined, combined, combined, 1.0);
}
There are extra variables passed in there as this fragment shader is usually more complicated, but I've cut it down to just show how the attenuation and blending is behaving.
This happens wherever two rendered light sources meet: rather than the colour I'm expecting, the equidistant point between the two quads is a darker colour than I expect. Any idea why, and how to fix it?
This is the result of subtracting the first image from the second:
The background of the first isn't quite black, hence the yellowing on the right; but otherwise you can clearly see the black region on the left where the original values were preserved, the darker arc where values from both lights were evaluated but the right one was greater, and then all the area on the right that the original light didn't touch.
I therefore think you're getting max-pick blending. But what you want is additive blending:
Gdx.gl.glBlendFunc(GL20.GL_ONE, GL20.GL_ONE);
... and leave the blend equation on the default of GL_FUNC_ADD.
Your result is the expected appearance for maximum blending (which is just like the Lighten blend mode in Photoshop). The dark seam looks out of place perhaps because of the non-linear falloff of each light, but it's mathematically correct. If you introduce a light with a bright non-white color, the seam will look much more objectionable.
You can get around this by rendering your lights to a frame buffer with inverted colors and multiplicative blending, and then drawing that frame buffer to the screen with its colors inverted again. The math then works out to have no seams, but it won't look unnaturally bright the way additive blending does.
Use a pure white clear color on your frame buffer, then render the lights with the standard GL_FUNC_ADD blend equation and the blend function glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_COLOR), which multiplies the destination by the inverse of each light. Then render your FBO texture to the screen, inverting the colors again.
Two lights drawn using your method
Two lights drawn additively
Two lights, drawn sequentially with glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_COLOR) and GL_FUNC_ADD
The above result, inverted
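To see why the inverted approach has no seam, here is the accumulation written out as GLSL-style math (illustrative only; the blend hardware does this per pixel, not your shader):
// The FBO starts at 1.0: white means "no light yet" in inverted space.
float inverted = 1.0;
inverted *= (1.0 - lightA); // each light multiplies in its own inverse
inverted *= (1.0 - lightB);
// Inverting back at the end gives "screen" blending:
float total = 1.0 - inverted; // = lightA + lightB - lightA * lightB
// This is smooth, never darker than max(lightA, lightB), and saturates
// at 1.0 instead of blowing out the way plain addition can.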

Need some clarification with "blending" in direct3d 11

I have searched through Google and found results that explain how blending works in Direct3D 11. So I'm making this post just to validate whether or not I completely understand these concepts.
For the most part, I somewhat understand the concept of blending. We blend colors by combining two colors to produce a final color. This is mostly done with two equations that Direct3D applies in the output-merger stage.
FinalColor = (Source Color * Source Blend Factor) + (Destination Color * Destination Blend Factor)
and
FinalAlpha = (Source Alpha * Source Alpha Blend Factor) + (Destination Alpha * Destination Alpha Blend Factor)
The sources Source Color and Source Alpha are defined by whatever the pixel shader outputs, and the destinations Destination Color and Destination Alpha are defined by whatever color is already in the render target (backbuffer).
Now, I have a little difficulty understanding the blend factors Source Blend Factor, Destination Blend Factor, Source Alpha Blend Factor, and Destination Alpha Blend Factor.
Since these blend factors are defined by D3D11_RENDER_TARGET_BLEND_DESC, I can use the member SrcBlend and assign it the flag D3D11_BLEND_SRC_COLOR; this would mean that the blend factor Source Blend Factor will be the color that the pixel shader outputs.
So, do you think I understand the concept of blending, or is there something I am missing? (Feel free to correct me.)
The equations for blending are even more flexible than the ones you wrote above. The generic blending equation is:
DestColor = Color1 <ColorOp> Color2
DestAlpha = Alpha1 <AlphaOp> Alpha2
While Color1 is often the output from the pixel shader, it (along with Color2, Alpha1, and Alpha2) can be any of the values described by the D3D11_BLEND enum. Likewise, while the blend operator is usually +, it can actually be any of the values in the D3D11_BLEND_OP enum. Together these provide the complete set of all possible normalized linear blending operations. For an example of what values to use for "traditional alpha blending", see this answer.
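As a concrete instance, here is the arithmetic of "traditional alpha blending" (SrcBlend = D3D11_BLEND_SRC_ALPHA, DestBlend = D3D11_BLEND_INV_SRC_ALPHA, BlendOp = D3D11_BLEND_OP_ADD) written out as a GLSL-style sketch. The output merger performs this for you; it is not shader code:
// src is the pixel shader output, dst is the current render target color.
vec4 blendTraditional(vec4 src, vec4 dst) {
    vec3  color = src.rgb * src.a + dst.rgb * (1.0 - src.a); // color equation
    float alpha = src.a * 1.0 + dst.a * (1.0 - src.a);       // alpha equation
    return vec4(color, alpha);
}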

GL_FRAMEBUFFER_SRGB_EXT banding problems (gamma correction)

Consider the following code. imageDataf is a float*. In fact, as the code shows, it consists of float4 values created by a ray tracer. The color values are of course in linear space, and I need them gamma-corrected for output on screen.
So what I can do is a simple for loop with a gamma correction of 2.2 (see the for loop below). Alternatively, I can use GL_FRAMEBUFFER_SRGB_EXT, which works almost correctly but has "banding" problems.
Left is using GL_FRAMEBUFFER_SRGB_EXT, right is manual gamma correction. The right picture looks perfect. It may be hard to spot the difference on some monitors. Does anyone have a clue how to fix this problem? I would like to do gamma correction for "free", as the CPU version makes the GUI a bit laggy. Note that the actual ray tracing is done in another thread on the GPU (OptiX), so rendering performance is in fact about the same.
GLboolean sRGB = GL_FALSE;
glGetBooleanv(GL_FRAMEBUFFER_SRGB_CAPABLE_EXT, &sRGB);
if (sRGB) {
    //glEnable(GL_FRAMEBUFFER_SRGB_EXT);
}

for (int i = 0; i < 768 * 768 * 4; i++) {
    imageDataf[i] = powf(imageDataf[i], 1.0f / 2.2f);
}

glPixelStorei(GL_UNPACK_ALIGNMENT, 8);
glDrawPixels(static_cast<GLsizei>(buffer_width), static_cast<GLsizei>(buffer_height),
             GL_RGBA, GL_FLOAT, (GLvoid*)imageDataf);
//glDisable(GL_FRAMEBUFFER_SRGB_EXT);
When GL_FRAMEBUFFER_SRGB is enabled, this means that OpenGL will assume that the colors for a fragment are in a linear colorspace. Therefore, when it writes them to an sRGB-format image, it will convert them internally from linear to sRGB. Except... your pixels are not linear. You already converted them to a non-linear colorspace.
However, I'll assume that you simply forgot an if statement in there. I'll assume that if the framebuffer is sRGB capable, you skip the loop and upload the data directly. So instead, I'll explain why you're getting banding.
You're getting banding because the OpenGL operation you asked for does the following. For each color you specify:
1. Clamp the floats to the [0, 1] range.
2. Convert the floats to unsigned, normalized, 8-bit integers.
3. Generate a fragment with that unsigned, normalized, 8-bit color.
4. Convert the unsigned, normalized, 8-bit fragment color from linear RGB space to sRGB space and store it.
Steps 1-3 all come from the use of glDrawPixels. Your problem is step 2. You want to keep your floating-point values as floats, yet you insist on using the fixed-function pipeline (i.e., glDrawPixels), which forces a conversion from float to unsigned, normalized integers.
If you uploaded your data to a float texture and used a proper fragment shader to render this texture (even just a simple gl_FragColor = texture(tex, texCoord); shader), you'd be fine. The shader pipeline uses floating-point math, not integer math. So no such conversion would occur.
In short: stop using glDrawPixels.
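A minimal sketch of that replacement (the names here are illustrative): upload imageDataf into a GL_RGBA32F texture, enable GL_FRAMEBUFFER_SRGB, and draw a fullscreen quad with a trivial shader:
#version 330 core
in vec2 vTexCoord;
out vec4 fragColor;
uniform sampler2D uImage; // float texture holding the linear ray-traced image

void main() {
    // Pass the linear float color through untouched; with an sRGB-capable
    // framebuffer and GL_FRAMEBUFFER_SRGB enabled, the linear-to-sRGB
    // encoding happens at floating-point precision on write, with no
    // intermediate 8-bit quantization to cause banding.
    fragColor = texture(uImage, vTexCoord);
}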