OpenGL gamma correction with GL_FRAMEBUFFER_SRGB

I am writing Blinn-Phong lighting (following the LearnOpenGL tutorial) and trying to get gamma-corrected colors as the final result.
Before I started digging into gamma correction, my lighting looked like this:
I then spent all day reading about the sRGB color space and what it means.
I came to the conclusion that gamma-corrected colors result in better lighting quality.
I don't want to bother with manually adjusting gamma in the fragment shader; I can let OpenGL do it by enabling GL_FRAMEBUFFER_SRGB and requesting an sRGB-capable framebuffer. (?)
This is with GL_FRAMEBUFFER_SRGB enabled and textures loaded in GL_RGB format:
These are the expected results: the colors are brighter, because the sampled textures are gamma corrected twice. (?)
This is with GL_FRAMEBUFFER_SRGB enabled and textures loaded in GL_SRGB format:
These are unexpected results: everything is too dark, and no diffuse color is visible, only specular.
My question is:
To let OpenGL do gamma correction on its own (according to the LearnOpenGL tutorial), I need to:
• have a framebuffer with SRGB support;
• load textures with the SRGB flag;
What am I doing wrong, or maybe I am missing something important?
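For reference, a minimal C++ sketch of those two steps (assuming a context whose default framebuffer is sRGB-capable; how you request that depends on your windowing library, and width/height/data stand in for the loaded image):

// 1. Let OpenGL convert linear shader output to sRGB on write:
glEnable(GL_FRAMEBUFFER_SRGB);

// 2. Declare the diffuse texture as sRGB, so sampling returns
//    linearized values. Specular/normal maps usually hold linear
//    data and should stay GL_RGB.
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, data);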
Update: the diffuse texture is loaded in GL_SRGB format, while the specular map is in GL_RGB. In the last image the diffuse color seems to be totally attenuated (?), while the specular is sampled as expected.
My attenuation function is:
float attenuation = max(1.0 - distance / light.radius, 0.0);

Related

GLSL correct Gamma correction for premultiplied output

I am performing gamma correction of linear input in a fragment shader. But in my case the output pixel is also premultiplied (for the sake of correct alpha blending). I am not sure what the correct order is in this case:
vec4 pixelOut = vec4(pow(color.xyz,vec3(1.0/2.2)) * alpha, alpha);
Or:
vec4 pixelOut = vec4(pow(color.xyz * alpha,vec3(1.0/2.2)) , alpha);
Should I gamma correct the pixel color after it's premultiplied, or before?
Visually, I can't currently detect any difference.
Gamma correction should be the very last step in your shading pipeline; blending in sRGB space leads to incorrect results.
For a good explanation and discussion of this, have a look at Naty Hoffman's blog article Adventures with Gamma-Correct Rendering.
The correct thing to do is to perform blending like any other shading calculation: in linear space.
Therefore you should first premultiply with alpha and then take care of gamma correction.
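A quick numeric check in C++ (a sketch with made-up channel values) shows the two orders really do produce different numbers, even when the difference is hard to spot by eye:

#include <cmath>
#include <cstdio>

int main() {
    float c = 0.5f;             // one linear color channel
    float a = 0.5f;             // alpha
    float g = 1.0f / 2.2f;

    // Gamma-encode first, then premultiply:
    float encodedFirst = std::pow(c, g) * a;   // ~0.365
    // Premultiply in linear space, then gamma-encode last:
    float premultFirst = std::pow(c * a, g);   // ~0.532
    std::printf("%f vs %f\n", encodedFirst, premultFirst);
    return 0;
}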

OpenGL shader gamma on solid color

I want to implement gamma correction in my OpenGL 3D renderer. I understand that it's absolutely relevant for textures stored in sRGB, so I do this:
vec4 texColor = texture(src_tex_unit0, texCoordVarying);
// decode the sRGB texture into linear space
texColor = vec4(pow(texColor.rgb, vec3(2.2)), 1.0);
// do the lighting math in linear space
vec4 colorPreGamma = texColor * (vec4(ambient, 1.0) + vec4(diffuse, 1.0));
// encode the result back to gamma space for display
fragColor = vec4(pow(colorPreGamma.rgb, vec3(1.0 / gamma)), 1.0);
But my question is about solid color: when the surface of the 3D object I want lit is not textured, but just colored by a per-vertex RGB value. In this case, do I have to transform my color into linear space and, after the lighting operation, transform it back to gamma space like I do for a texture?
Does this also apply when my lights are colored?
In this case, do I have to transform my color into linear space and, after the lighting operation, transform it back to gamma space like I do for a texture?
That depends: what colorspace are your colors in?
You're not doing this correction because of where they come from; you're doing it because of what the colors actually are. If the value is not linear, then you must linearize it before using it, regardless of where it comes from.
You are ultimately responsible for putting that color there, so you have to know whether that color is in the linear RGB or sRGB colorspace. And if the color is not linear, then you have to linearize it before you can get meaningful numbers from it.
In OpenGL there isn't a huge distinction between color data and other kinds of data: if you have a vec3 you can access components as .xyz or .rgb. It's all just data.
So ask yourself this: "Do I have to gamma correct my vertex positions in the vertex shader?"
Of course not, because your vertex positions are already in linear space. So if you are similarly setting your vertex colors in linear space, again no gamma correction is needed.
In other words, do you imagine vec3(0.5, 0.5, 0.5) as being a gray that is visually halfway between black and white? Then you need gamma correction.
Do you imagine it as being mathematically halfway between black and white (in terms of measurable light intensity)? Then it's already linear.
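If your per-vertex colors were picked visually (i.e., they are sRGB values), the conversion mirrors what the question's shader already does for textures. A C++ sketch of the two helpers, using the same gamma-2.2 approximation as the question (the exact sRGB transfer function additionally has a small linear segment near zero, ignored here):

#include <cmath>

// sRGB -> linear: apply before lighting.
float srgbToLinear(float c) { return std::pow(c, 2.2f); }
// linear -> sRGB: apply after lighting, as the final step.
float linearToSrgb(float c) { return std::pow(c, 1.0f / 2.2f); }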

Why is my GLSL shader rendering a cleavage?

I'm working on a deferred lighting technique in 2D, using a frame buffer to accumulate light sources with the GL_MAX blend equation.
Here's what I get when rendering one light source (the geometry is a quad without a texture, I'm only using a fragment shader for colouring) to my buffer:
Which is exactly what I want - attenuation from the light source. However, when two light sources are near each other, when they overlap, they seem to produce a lower RGB value where they meet, like so:
Why is there a darker line between the two? I was expecting that with GL_MAX blend equation they would smoothly blend into each other, using the maximal value of the fragments in each location.
Here's the setup for the FBO (using LibGDX):
Gdx.gl.glClearColor(0.14f, 0.14f, 0.19f, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
Gdx.gl.glBlendEquation(GLMAX_BLEND_EQUATION);
Gdx.gl.glBlendFunc(GL20.GL_SRC_COLOR, GL20.GL_DST_COLOR);
Gdx.gl.glEnable(GL20.GL_BLEND);
I don't think the call to glBlendFunc is actually necessary when using this equation. GLMAX_BLEND_EQUATION is set to 0x8008 (GL_MAX).
varying vec2 v_texCoords;
varying vec2 v_positionRelativeToLight;

uniform sampler2D u_texture;
uniform vec3 u_lightPosition;
uniform vec3 u_lightColor;

void main() {
    float distanceToLight = length(v_positionRelativeToLight);
    float falloffVarA = 0.1;
    float falloffVarB = 1.0;
    float attenuation = 1.0 / (1.0 + (falloffVarA * distanceToLight) + (falloffVarB * distanceToLight * distanceToLight));
    float minDistanceOrAttenuation = min(attenuation, 1.0 - distanceToLight);
    float combined = minDistanceOrAttenuation * attenuation;
    gl_FragColor = vec4(combined, combined, combined, 1.0);
}
There are extra variables in there because this fragment shader is usually more complicated, but I've cut it down to just show how the attenuation and blending behave.
This happens wherever two of my light sources meet: rather than the colour I'm expecting, the meeting point of the two lights (the equidistant region between the two quads) is a darker colour than I expect. Any idea why, and how to fix it?
This is the result of subtracting the first image from the second:
The background of the first image isn't quite black, hence the yellowing on the right. Otherwise you can clearly see the black region on the left where the original values were preserved, the darker arc where values from both lights were evaluated but the right one was greater, and then all the area on the right that the original light didn't touch.
I therefore think you're getting max-pick blending. But what you want is additive blending:
Gdx.gl.glBlendFunc(GL20.GL_ONE, GL20.GL_ONE);
... and leave the blend equation on the default of GL_FUNC_ADD.
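Spelled out as raw GL calls (a sketch; the Gdx.gl wrappers used in the question map onto these one-to-one):

// Additive accumulation: overlapping lights sum instead of max-picking.
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);   // the default blend equation
glBlendFunc(GL_ONE, GL_ONE);    // result = src + dst
// ... draw the light quads ...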
Your result is the expected appearance for maximum blending (which is just like the Lighten blend mode in Photoshop). The dark seam looks out of place, perhaps because of the non-linear falloff of each light, but it's mathematically correct. If you introduce a light with a bright non-white color, it will look much more objectionable.
You can get around this if you render your lights to a frame buffer with inverted colors and multiplicative blending, and then render the frame buffer with inverted colors. Then the math works out to have no seams, and it won't look unnaturally bright the way additive blending can.
Use a pure white clear color on your frame buffer, then render the lights with the standard GL_FUNC_ADD blend equation and the blend function GL_ONE_MINUS_DST_COLOR, GL_ZERO. Then render your FBO texture to the screen, inverting the colors again.
Two lights drawn using your method
Two lights drawn additively
Two lights, drawn sequentially with glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO) and GL_FUNC_ADD
The above result, inverted
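For what it's worth, the same seam-free result (mathematically, 1 - (1 - a)(1 - b)) can also be reached in a single pass with "screen" blending, without the inversion passes. A sketch in raw GL calls, not the answerer's exact setup above:

// Screen blend: result = src + dst * (1 - src)
//             = 1 - (1 - src) * (1 - dst)
// Never exceeds 1.0 and produces no seams between overlapping lights.
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);  // clear the light FBO to black
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_COLOR);
// ... draw the light quads; then draw the FBO texture to the screen normally ...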

Directx9 Specular Mapping

How would I implement loading a texture to be used as a specular map for a piece of geometry and rendering it in Directx9 using C++?
Are there any tutorials or basic examples I can refer to?
Use D3DXCreateTextureFromFile to load the file from disk. You then need to set up a shader that multiplies the specular value by the value stored in the texture. This gives you the specular colour.
So your final pixel comes from
Final = ambient + (N.L * texture colour) + (N.H * texture specular)
You can do this easily in a shader.
It's also worth noting that it can be very useful to store the per-texel specular value in the alpha channel of the texture. That way you only need one texture around, though it does break per-pixel transparency.
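For the loading step, a minimal C++ sketch (assuming an initialized IDirect3DDevice9* device; the file name and sampler stage are hypothetical, and error handling is mostly omitted):

#include <d3dx9.h>

IDirect3DTexture9* specularMap = nullptr;
// Load the specular map from disk with D3DX.
HRESULT hr = D3DXCreateTextureFromFile(device, "specular_map.png", &specularMap);
if (SUCCEEDED(hr)) {
    device->SetTexture(1, specularMap);  // bind to sampler stage 1 for the shader
}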

OpenGL: texture and plain color respond differently to ambient light?

This is a rather old problem I've had with an OpenGL application.
I have a rather complex model; some polygons in it are untextured and colored with a plain color via glColor(), while others are textured. Some of the texture is the same color as the untextured polygons, and there should be no visible seam between the two.
The problem is that when I turn up the ambient component of the light source, a seam between the two kinds of polygons emerges.
see this image: http://www.shiny.co.il/shooshx/colorBug2.png
The left image is without any ambient light and the right image is with ambient light of (0.2,0.2,0.2).
The RGB value of the color in the texture is identical to the RGB value of the colored faces. The texture's alpha is set to 1.0 everywhere.
To shade the texture I use GL_MODULATE.
Can anyone think of a reason why that would happen and of a possible solution?
You mention that you set the color with glColor(), so I assume that GL_COLOR_MATERIAL is on? What setting do you use for glColorMaterial()? In this case it should be GL_AMBIENT_AND_DIFFUSE, so that the glColor() call affects the ambient color as well as the diffuse color. (This is the default.)
You could also try setting all material colours to white (with glMaterial()) before rendering the texture-mapped faces. With some settings (I don't remember which), the texture itself gets modulated by the current color.
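A sketch of the first suggestion in GL calls (the face parameter shown is a typical choice, not from the answer):

// Make glColor() drive both ambient and diffuse material colors, so
// textured and untextured faces respond to ambient light the same way.
glEnable(GL_COLOR_MATERIAL);
glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);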
Hope this helps or at least points you into a useful direction.