I'm writing a PBR shader and I'm trying to determine the correct way to generate the alpha value that is output for the image. The goal is to have a final image that is 'premultiplied alpha', so it can be used with an Over composite operation later on (say, compositing it over a background image). This all works great, except for the case of specular highlights on transparent surfaces.
Reading this article:
https://google.github.io/filament/Filament.md.html#lighting/transparencyandtranslucencylighting
In particular:
Observe a window and you will see that the diffuse reflectance is
transparent. On the other hand, the brighter the specular reflectance,
the less opaque the window appears. This effect can be seen in figure
63: the scene is properly reflected onto the glass surfaces but the
specular highlight of the sun is bright enough to appear opaque.
They also include the code snippet:
// baseColor has already been premultiplied
vec4 shadeSurface(vec4 baseColor) {
    float alpha = baseColor.a;
    vec3 diffuseColor = evaluateDiffuseLighting();
    vec3 specularColor = evaluateSpecularLighting();
    return vec4(diffuseColor + specularColor, alpha);
}
When rendering glass the metallic level would be 0, so the specular isn't pulling from the baseColor at all and is just using the specular level. This all makes sense and means specular highlights can still occur even on the transparent glass, as per the above quote. I don't want to just multiply by alpha at the end, since my baseColor texture would already be a premultiplied alpha image, such as something containing a decal. For example, let's say I'm drawing a pane of glass with a sticker on it that's the letter 'A'.
My question is: for a pixel that has a specular highlight on a transparent portion of the glass, what should the alpha value be for compositing to work downstream? The alpha would be 0 according to the above code, which would make an Over operation later on behave poorly. I say this since a property of pre-multiplied alpha is that max(R,G,B) <= A, but this specular highlight would have max(R,G,B) > A. This would result in the over operation being additive between the specular highlight and the background it's being composited over, blowing out the final image.
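For reference, the Over operation I'd be doing downstream on premultiplied colors is essentially this (illustrative only):

// Premultiplied-alpha Over: src composited over dst.
// With src.a == 0 this reduces to src.rgb + dst.rgb, i.e. pure addition,
// which is why a bright highlight with alpha 0 blows out over a bright background.
vec4 compositeOver(vec4 src, vec4 dst) {
    return vec4(src.rgb + dst.rgb * (1.0 - src.a),
                src.a   + dst.a   * (1.0 - src.a));
}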
Should I set the alpha to max(specular, alpha), to give a more useful alpha value for these pixels? Is there a more 'correct' way to handle this?
Thanks
The idea behind the pre-multiplied approach is that we assume the alpha/opacity only affects the diffuse (i.e. the light that goes inside the material/surface), whereas the specular is assumed to be unaffected (at least for a dielectric: the light is simply reflected off the surface, un-tinted).
In essence, you should have:
return vec4(diffuseColor * alpha + specularColor, alpha);
Where the alpha is 'pre-multiplied' with the diffuse colour, while the specular colour is left at full intensity (as it should be).
Your blend mode will also need to follow suit: since the colour output is already premultiplied it shouldn't be scaled again, so off the top of my head that's GL_ONE for the source factor and GL_ONE_MINUS_SRC_ALPHA for the destination - the colour is then effectively rendered in an additive fashion on top of the attenuated background.
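Putting it together, a minimal sketch reusing the placeholder lighting functions from the Filament snippet above (this assumes diffuseColor was evaluated from an un-premultiplied albedo; if your baseColor texture is already premultiplied, as with your decal, the alpha weighting is already baked into the diffuse and you'd skip the explicit multiply):

// Sketch only: the evaluate*Lighting() functions stand in for whatever
// lighting evaluation the renderer already does.
vec4 shadeSurface(vec4 baseColor) {
    float alpha = baseColor.a;
    // Diffuse (refracted) light is attenuated by the opacity...
    vec3 diffuseColor = evaluateDiffuseLighting();
    // ...but specular reflection happens at the surface regardless of it.
    vec3 specularColor = evaluateSpecularLighting();
    // Premultiplied output: composite later with source factor GL_ONE and
    // destination factor GL_ONE_MINUS_SRC_ALPHA, so where alpha is 0 the
    // specular simply adds over the background.
    return vec4(diffuseColor * alpha + specularColor, alpha);
}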
I want to implement gamma correction in my OpenGL 3D renderer. I understand that it's absolutely relevant for textures loaded in sRGB, so I do this:
// the texture is stored in sRGB, so linearize it before lighting
vec4 texColor = texture(src_tex_unit0, texCoordVarying);
texColor = vec4(pow(texColor.rgb, vec3(2.2)), 1.0);
// lighting happens in linear space
vec4 colorPreGamma = texColor * (vec4(ambient, 1.0) + vec4(diffuse, 1.0));
// encode back to gamma space for display
fragColor = vec4(pow(colorPreGamma.rgb, vec3(1.0/gamma)), 1.0);
But my question is about solid colors, when the surface of the 3D object I want lit is not textured but just colored by a per-vertex RGB value. In this case, do I have to transform my color into linear space, and after the lighting operation transform back to gamma space, like I do for a texture?
Does this also apply when my lights are colored?
In this case, do I have to transform my color into linear space, and after the lighting operation transform back to gamma space, like I do for a texture?
That depends: what colorspace are your colors in?
You're not doing this correction because of where they come from; you're doing it because of what the colors actually are. If the value is not linear, then you must linearize it before using it, regardless of where it comes from.
You are ultimately responsible for putting that color there. So you have to know whether that color is in linear RGB or sRGB colorspace. And if the color is not linear, then you have to linearize it before you can get meaningful numbers from it.
In OpenGL there isn't a huge distinction between color data and other kinds of data: if you have a vec3 you can access components as .xyz or .rgb. It's all just data.
So ask yourself this: "Do I have to gamma correct my vertex positions in the vertex shader?"
Of course not, because your vertex positions are already in linear space. So if you are similarly setting your vertex colors in linear space, again no gamma correction is needed.
In other words, do you imagine vec3(0.5, 0.5, 0.5) as being a gray that is visually halfway between black and white? Then you need gamma correction.
Do you imagine it as being mathematically halfway between black and white (in terms of measurable light intensity)? Then it's already linear.
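For the case where the per-vertex colors were authored as sRGB values (e.g. picked in an image editor), a minimal sketch of that path could look like this, reusing the ambient/diffuse/gamma names from your snippet (the vertexColor input is just illustrative):

#version 330 core

in vec3 vertexColor;        // interpolated per-vertex color, sRGB-encoded
uniform vec3 ambient;
uniform vec3 diffuse;
uniform float gamma;        // typically 2.2
out vec4 fragColor;

void main() {
    // Linearize before lighting (skip this if the colors were authored in linear space).
    vec3 linearColor = pow(vertexColor, vec3(gamma));

    // Lighting math happens in linear space.
    vec3 litColor = linearColor * (ambient + diffuse);

    // Encode back to gamma space for display.
    fragColor = vec4(pow(litColor, vec3(1.0 / gamma)), 1.0);
}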
I'm working on a deferred lighting technique in 2D, using a frame buffer to accumulate light sources using the GL_MAX blend equation.
Here's what I get when rendering one light source (the geometry is a quad without a texture, I'm only using a fragment shader for colouring) to my buffer:
This is exactly what I want - attenuation from the light source. However, when two light sources are near each other and overlap, they seem to produce a lower RGB value where they meet, like so:
Why is there a darker line between the two? I was expecting that with the GL_MAX blend equation they would smoothly blend into each other, using the maximal value of the fragments at each location.
Here's the setup for the FBO (using LibGDX):
Gdx.gl.glClearColor(0.14f, 0.14f, 0.19f, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
Gdx.gl.glBlendEquation(GLMAX_BLEND_EQUATION);
Gdx.gl.glBlendFunc(GL20.GL_SRC_COLOR, GL20.GL_DST_COLOR);
Gdx.gl.glEnable(GL20.GL_BLEND);
I don't think the call to glBlendFunc is actually necessary when using this equation. GLMAX_BLEND_EQUATION is set to 0x8008 (GL_MAX).
varying vec2 v_texCoords;
varying vec2 v_positionRelativeToLight;
uniform sampler2D u_texture;
uniform vec3 u_lightPosition;
uniform vec3 u_lightColor;
void main() {
    float distanceToLight = length(v_positionRelativeToLight);

    // Constant/linear/quadratic falloff: 1 / (1 + a*d + b*d^2)
    float falloffVarA = 0.1;
    float falloffVarB = 1.0;
    float attenuation = 1.0 / (1.0 + (falloffVarA * distanceToLight) + (falloffVarB * distanceToLight * distanceToLight));

    // Additionally force the light to fade out completely by distance 1.0
    float minDistanceOrAttenuation = min(attenuation, 1.0 - distanceToLight);
    float combined = minDistanceOrAttenuation * attenuation;

    gl_FragColor = vec4(combined, combined, combined, 1.0);
}
There are extra variables passed in there as this fragment shader is usually more complicated, but I've cut it down to just show how the attenuation and blending is behaving.
This happens wherever two light sources that I render meet: rather than the colour I'm expecting, the meeting point between the two - the equidistant line between the two quads - is darker than I'm expecting. Any idea why, and how to fix it?
This is the result of subtracting the first image from the second:
The background on the first isn't quite black, hence the yellowing on the right, but otherwise you can clearly see the black region on the left where original values were preserved, the darker arc where values from both lights were evaluated but the right was greater, then all the area on the right that the original light didn't touch.
I therefore think you're getting max-pick blending. But what you want is additive blending:
Gdx.gl.glBlendFunc(GL20.GL_ONE, GL20.GL_ONE);
... and leave the blend equation on the default of GL_FUNC_ADD.
Your result is the expected appearance for maximum blending (which is just like the lighten blend mode in Photoshop). The dark seam looks out of place perhaps because of the non-linear falloff of each light, but it's mathematically correct. If you introduce a light with a bright non-white color, it will look much more objectionable.
You can get around this if you render your lights to a frame buffer with inverted colors and multiplicative blending, and then render the frame buffer with inverted colors. Then the math works out to not have the seams, but it won't look unusually bright like what additive blending produces.
Use a pure white clear color on your frame buffer and then render the lights with the standard GL_FUNC_ADD blend equation and the blend function GL_ONE_MINUS_DST_COLOR. Then render your FBO texture to the screen, inverting the colors again.
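The "inverting the colors again" step is just a small fragment shader on the final fullscreen draw of the FBO texture; a minimal sketch (the uniform and varying names are assumptions):

varying vec2 v_texCoords;
uniform sampler2D u_lightBuffer;   // the FBO colour attachment holding inverted light

void main() {
    // The buffer holds inverted light, so flip it back when drawing to the screen.
    vec3 inverted = texture2D(u_lightBuffer, v_texCoords).rgb;
    gl_FragColor = vec4(1.0 - inverted, 1.0);
}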
Two lights drawn using your method
Two lights drawn additively
Two lights, drawn sequentially with GL_ONE_MINUS_DST_COLOR, GL_ZERO and GL_FUNC_ADD
The above result, inverted
I am writing a Phong lighting shader and I have a hard time deciding whether the value I pass to gl_FragColor should be normalized or not.
If I use normalized values, the lighting is a bit weird. For example, an object far away from a light source (unlit) would have its color determined by the sum of the emissive component, the ambient component and the global ambient light. Let us say that adds up to (0.3, 0.3, 0.3). Normalized, this becomes roughly (0.57, 0.57, 0.57), which is quite a bit more luminous than what I'm expecting.
However, if I use non-normalized values, for close objects the specular areas get really, really bright and I have to make sure I generally use low values for my material constants.
As a note, I am normalizing only the RGB components; the alpha component is always 1.
I am a bit swamped and I could not find anything related to this. Either that or my searches were completely wrong.
No. Normalizing the color creates an interesting effect, but I don't think you really want it most, if not all, of the time.
Normalization of the color output causes loss of information, even though it may seem to provide greater detail to a scene in some cases. If all your fragments have their color normalized, it means that all RGB vectors have their norm equal to 1. This means that there are colors that simply cannot exist in your output: white (norm = sqrt(3)), bright colors such as yellow (norm = sqrt(2)), dark colors such as dark red (norm(0.5, 0.0, 0.0) = 0.5), etc. Another issue you may face is normalizing zero vectors (i.e. black).
Another way to understand why color normalization is wrong is to think about the simpler case of rendering a grayscale image. As there is only one color component, normalization does not really make sense at all, since it would make all your colors 1.0.
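A few concrete values (a throwaway GLSL snippet, purely for illustration) show how much information collapses:

void main() {
    // Very different input colors end up on the unit sphere:
    vec3 white    = normalize(vec3(1.0, 1.0, 1.0));  // -> (0.577, 0.577, 0.577)
    vec3 darkGray = normalize(vec3(0.3, 0.3, 0.3));  // -> (0.577, 0.577, 0.577), identical to white
    vec3 yellow   = normalize(vec3(1.0, 1.0, 0.0));  // -> (0.707, 0.707, 0.0), no longer full-intensity yellow
    // normalize(vec3(0.0)) is undefined (division by zero) -- the black case mentioned above.
    gl_FragColor = vec4(white, 1.0);
}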
The problem with using the values without normalization arises from the fact that your output image has to have its color values clamped to a fixed interval: [0, 255] or [0.0, 1.0]. As the specular parts of your object reflect more light than those that only reflect diffuse light, quite possibly the computed color value may exceed even (1.0, 1.0, 1.0) and get clamped to white for most of the specular area, so these areas become, perhaps, too bright.
A simple solution would be to lower the material constant values, or the light intensity. You could go one step further and make sure that the values for the material constants and light intensity are chosen such that the computed color value cannot exceed (1.0, 1.0, 1.0). The same result could be achieved with a simple division of the computed color value if consistent values are used for all the materials and all the lights in the scene, but that is kind of overkill, as the scene would probably end up too dark.
The more complex, but better looking solution involves HDR rendering and exposure filters such as bloom to obtain more photo-realistic images. This basically means rendering the scene into a float buffer which can handle a greater range than the [0, 255] RGB buffer, then simulating the camera or human eye behavior of adapting to a certain light intensity and the image artefacts caused by this mechanism (i.e. bloom).
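To illustrate that last point, a minimal tone-mapping pass over a floating-point (HDR) colour buffer could look roughly like this; the exposure value and the names are assumptions, and a real implementation would add a separate bloom pass on top:

varying vec2 v_texCoords;
uniform sampler2D u_hdrBuffer;   // floating-point colour attachment (name assumed)
uniform float u_exposure;        // e.g. 1.0, adjusted per scene

void main() {
    vec3 hdrColor = texture2D(u_hdrBuffer, v_texCoords).rgb;

    // Simple exposure-based tone mapping: bright speculars roll off smoothly
    // instead of clamping hard to white.
    vec3 mapped = vec3(1.0) - exp(-hdrColor * u_exposure);

    // Gamma-encode for display.
    gl_FragColor = vec4(pow(mapped, vec3(1.0 / 2.2)), 1.0);
}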
From a graphics point of view, is a material just the images applied to a geometric object?
We can define a material as a set of data which describes how a surface reacts to light.
In the Phong-Blinn shading model, the material is defined by several sets of data:
(rgb) ambient - (see below)
(rgb) diffuse - how strongly it diffuses the incoming light of a given color
(rgb) specular - how well it reflects the incoming light of a given color
(number) shininess - how perfect (how small and focused) the reflection is. A bigger value means a smaller "shining spot".
The ambient value is just added to the final color - it is there to emulate "secondary light reflections". It's usually set to the same hue as diffuse, but usually of smaller intensity.
By balancing ambient/diffuse/specular/shininess parameters, you may make the surface resemble different real-world materials.
Also, those parameters may be defined either per-vertex or per-pixel (as a texture). It is common to take the values of ambient and diffuse from a colourful texture and specify constant specular and shininess, but you could also have 3 different textures for the ambient, diffuse and specular colours - in order to simulate a sophisticated material which reflects light in different ways depending on the position.
There might be more parameters involved depending on what effects you want to use, for example an additional value for glowing surfaces, etc.
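To make the roles of these parameters concrete, here is a minimal, simplified Blinn-Phong evaluation (illustrative names, a single light, no attenuation or textures):

varying vec3 v_normal;
varying vec3 v_toLight;          // direction from the surface point to the light
varying vec3 v_toEye;            // direction from the surface point to the camera

uniform vec3  u_ambientColor;    // (rgb) ambient
uniform vec3  u_diffuseColor;    // (rgb) diffuse
uniform vec3  u_specularColor;   // (rgb) specular
uniform float u_shininess;       // (number) shininess exponent
uniform vec3  u_lightColor;

void main() {
    vec3 N = normalize(v_normal);
    vec3 L = normalize(v_toLight);
    vec3 V = normalize(v_toEye);
    vec3 H = normalize(L + V);   // half-vector between light and view directions

    float diff = max(dot(N, L), 0.0);
    float spec = pow(max(dot(N, H), 0.0), u_shininess);

    vec3 color = u_ambientColor
               + u_diffuseColor  * diff * u_lightColor
               + u_specularColor * spec * u_lightColor;

    gl_FragColor = vec4(color, 1.0);
}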
Material usually refers to the colour of the geometric object, while the image is the texture. The material will specify how the object reacts to ambient and direct lighting, its reflectance, transparency etc.
They can be combined in various ways to produce different effects.
For example the texture might completely override the material so that the underlying colour has no effect on the final scene.
In other cases the texture might be blended with the material so that the same texture effect can be applied to different objects (red car, blue car etc.).
This is a rather old problem I've had with an OpenGL application.
I have a rather complex model; some polygons in it are untextured and colored using a plain color with glColor(), and others are textured. Some of the texture is the same color as the untextured polygons, and there should be no visible seam between the two.
The problem is that when I turn up the ambient component of the light source, a seam between the two kinds of polygons emerge.
see this image: http://www.shiny.co.il/shooshx/colorBug2.png
The left image is without any ambient light and the right image is with ambient light of (0.2,0.2,0.2).
The RGB value of the color on the texture is identical to the RGB value of the colored faces. The texture's alpha is set to 1.0 everywhere.
To shade the texture I use GL_MODULATE.
Can anyone think of a reason why that would happen and of a possible solution?
You mention that you set the color with glColor(), so I assume that GL_COLOR_MATERIAL is on? What setting do you use for glColorMaterial()? In this case it should be GL_AMBIENT_AND_DIFFUSE, so that the glColor() call affects the ambient color as well as the diffuse color. (This is the default.)
You could also try to set all material colours to white (with glMaterial()) before rendering the texture-mapped faces. With some settings (I don't remember which), the texture itself gets modulated by the current color.
Hope this helps or at least points you into a useful direction.