How can I blend a diffuse texture with specular lighting? Which GL blend function should I use, and when?
My specular lighting is based on a special texture that has transparency.
For now I'm just scaling the pixel brightness by the alpha value and combining the result with the diffuse texture:
"texture(SH_MAP, TextureCoords).rgba * texture(SH_MAP, TextureCoords).a"
vec4 m_TotalColor = m_DiffuesColor + m_SpecularColor;
Is there a better way to achieve this using blending?
Generally, the equation with light falloff is (LightColor / DistanceSquared) * (DiffuseColor + Specular).
Addition is the correct way to combine the specular and diffuse colors, as long as the sum gets multiplied by the light color at some point. Also make sure you do all the lighting math in linear color space and convert to gamma space only after all operations.
Happy shading!
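The equation above can be sketched on the CPU to see the pieces. This is an illustrative scalar version, not the asker's shader; the function name and sample values are mine:

```python
def shade(light_color, distance, diffuse, specular):
    """Combine diffuse and specular additively, then apply inverse-square falloff:
    final = (light_color / distance^2) * (diffuse + specular)."""
    falloff = 1.0 / (distance * distance)
    return [falloff * lc * (d + s) for lc, d, s in zip(light_color, diffuse, specular)]

# A white light two units away: falloff = 1/4, so (diffuse + specular) is quartered.
color = shade([1.0, 1.0, 1.0], 2.0, [0.4, 0.2, 0.1], [0.2, 0.2, 0.2])
print(color)
```

The key point is that diffuse and specular are summed first; the falloff and light color scale the sum, not each term separately.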
Related
I want to implement gamma correction in my OpenGL 3D renderer. I understand that it's definitely relevant for textures stored in sRGB, so I do this:
vec4 texColor = texture(src_tex_unit0, texCoordVarying);
// decode sRGB -> linear
texColor = vec4(pow(texColor.rgb, vec3(2.2)), 1.0);
vec4 colorPreGamma = texColor * (vec4(ambient, 1.0) + vec4(diffuse, 1.0));
// encode linear -> gamma
fragColor = vec4(pow(colorPreGamma.rgb, vec3(1.0/gamma)), 1.0);
But my question is about solid colors: when the surface of the 3D object I want lit is not textured but just colored by a per-vertex RGB value. In this case, do I have to transform my color into linear space and, after the lighting operations, transform back to gamma space like I do for a texture?
Does this also apply when my lights are colored?
In this case, do I have to transform my color into linear space and, after the lighting operations, transform back to gamma space like I do for a texture?
That depends: what colorspace are your colors in?
You're not doing this correction because of where the colors come from; you're doing it because of what the colors actually are. If a value is not linear, then you must linearize it before using it, regardless of where it comes from.
You are ultimately responsible for putting that color there, so you have to know whether that color is in the linear RGB or sRGB colorspace. And if the color is not linear, then you have to linearize it before you can get meaningful numbers out of it.
In OpenGL there isn't a huge distinction between color data and other kinds of data: if you have a vec3 you can access components as .xyz or .rgb. It's all just data.
So ask yourself this: "Do I have to gamma correct my vertex positions in the vertex shader?"
Of course not, because your vertex positions are already in linear space. So if you are similarly setting your vertex colors in linear space, again no gamma correction is needed.
In other words, do you imagine vec3(0.5, 0.5, 0.5) as being a gray that is visually halfway between black and white? Then you need gamma correction.
Do you imagine it as being mathematically halfway between black and white (in terms of measurable light intensity)? Then it's already linear.
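The two conversions being discussed can be written out directly. This is a small sketch of my own, using the simple gamma-2.2 approximation (as in the question's shader) rather than the exact piecewise sRGB curve:

```python
def srgb_to_linear(c, gamma=2.2):
    """Decode a gamma-encoded channel in [0, 1] to linear light."""
    return c ** gamma

def linear_to_srgb(c, gamma=2.2):
    """Encode a linear channel back to gamma space for display."""
    return c ** (1.0 / gamma)

# "Visually halfway" gray 0.5 is only about 0.218 in linear light intensity,
# which is exactly the distinction drawn above.
print(round(srgb_to_linear(0.5), 3))

# Round-tripping recovers the original value.
print(round(linear_to_srgb(srgb_to_linear(0.5)), 6))
```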
I'm working on a deferred lighting technique in 2D, using a frame buffer to accumulate light sources using the GL_MAX blend equation.
Here's what I get when rendering one light source (the geometry is a quad without a texture, I'm only using a fragment shader for colouring) to my buffer:
Which is exactly what I want: attenuation from the light source. However, when two light sources are near each other and overlap, they seem to produce a lower RGB value where they meet, like so:
Why is there a darker line between the two? I was expecting that with the GL_MAX blend equation they would blend smoothly into each other, using the maximal value of the fragments at each location.
Here's the setup for the FBO (using LibGDX):
Gdx.gl.glClearColor(0.14f, 0.14f, 0.19f, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
Gdx.gl.glBlendEquation(GLMAX_BLEND_EQUATION);
Gdx.gl.glBlendFunc(GL20.GL_SRC_COLOR, GL20.GL_DST_COLOR);
Gdx.gl.glEnable(GL20.GL_BLEND);
I don't think the call to glBlendFunc is actually necessary when using this equation. GLMAX_BLEND_EQUATION is set to 0x8008 (the value of GL_MAX).
varying vec2 v_texCoords;
varying vec2 v_positionRelativeToLight;
uniform sampler2D u_texture;
uniform vec3 u_lightPosition;
uniform vec3 u_lightColor;
void main() {
float distanceToLight = length(v_positionRelativeToLight);
float falloffVarA = 0.1;
float falloffVarB = 1.0;
float attenuation = 1.0 / (1.0 + (falloffVarA*distanceToLight) + (falloffVarB*distanceToLight*distanceToLight));
float minDistanceOrAttenuation = min(attenuation, 1.0-distanceToLight);
float combined = minDistanceOrAttenuation * attenuation;
gl_FragColor = vec4(combined, combined, combined, 1.0);
}
There are extra variables passed in there, as this fragment shader is usually more complicated, but I've cut it down to show just how the attenuation and blending behave.
This happens between every pair of light sources I render where they meet: rather than the colour I expect at the meeting point of two light sources (the equidistant point between the two quads), I get a darker colour than I'm expecting. Any idea why, and how to fix it?
This is the result of subtracting the first image from the second:
The background on the first isn't quite black, hence the yellowing on the right, but otherwise you can clearly see the black region on the left where original values were preserved, the darker arc where values from both lights were evaluated but the right was greater, then all the area on the right that the original light didn't touch.
I therefore think you're getting max-pick blending. But what you want is additive blending:
Gdx.gl.glBlendFunc(GL20.GL_ONE, GL20.GL_ONE);
... and leave the blend equation on the default of GL_FUNC_ADD.
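A toy 1-D sketch can show why GL_MAX leaves a seam while additive blending does not. Two identical lights sit at x = 0 and x = 1; the attenuation constants are taken from the question's shader, but the setup is otherwise illustrative:

```python
def attenuation(d):
    # Same falloff shape as the question: 1 / (1 + 0.1*d + 1.0*d*d)
    return 1.0 / (1.0 + 0.1 * d + 1.0 * d * d)

def light(x, center):
    return attenuation(abs(x - center))

xs = [0.4, 0.5, 0.6]
max_blend = [max(light(x, 0.0), light(x, 1.0)) for x in xs]
add_blend = [light(x, 0.0) + light(x, 1.0) for x in xs]

# GL_MAX keeps only the stronger light, so the midpoint (x = 0.5) is a local
# minimum: the visible dark seam. Additive blending sums both contributions,
# so the midpoint is actually the brightest of the three samples.
print(max_blend)
print(add_blend)
```

The seam with GL_MAX is not an error in the shader; max(a, b) simply has a crease wherever the two inputs cross.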
Your result is the expected appearance for maximum blending (which works just like the Lighten blend mode in Photoshop). The dark seam looks out of place, perhaps because of the non-linear falloff of each light, but it's mathematically correct. If you introduce a light with a bright non-white color, it will look much more objectionable.
You can get around this if you render your lights to a frame buffer with inverted colors and multiplicative blending, and then draw that frame buffer to the screen with its colors inverted again. Then the math works out to have no seams, but without looking unnaturally bright the way additive blending does.
Use a pure white clear color on your frame buffer and then render the lights with the standard GL_ADD blend equation and the blend function GL_ONE_MINUS_DST_COLOR. Then render your FBO texture to the screen, inverting the colors again.
Two lights drawn using your method
Two lights drawn additively
Two lights, drawn sequentially with GL_ONE_MINUS_DST_COLOR, GL_ZERO and GL_ADD
The above result, inverted
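Algebraically, the invert-multiply-invert trick above is "screen" blending: the accumulated result is 1 - (1 - a)(1 - b) for lights a and b. A scalar sketch of my own (the values are illustrative):

```python
def screen(a, b):
    """Invert both inputs, multiply, and invert the product again."""
    return 1.0 - (1.0 - a) * (1.0 - b)

a, b = 0.6, 0.5
print(screen(a, b))      # brighter than either light, yet can never exceed 1.0
print(max(a, b))         # GL_MAX: picks one light, creasing where they cross
print(min(a + b, 1.0))   # plain addition: saturates (clips) quickly
```

Screen blending is smooth like addition but bounded like max, which is why it removes the seam without the blown-out look.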
I am working with an old game format and am trying to finish up a rendering project. I am working on lighting and am trying to get a multiply blend mode going in my shader.
The lighting is provided as per-vertex values which are then interpolated over the triangle.
I have taken a screenshot of an unlit scene, and just the light maps and put them in Photoshop with a multiply layer. It provides exactly what I want.
I also want to factor in the ambient light, which acts kind of like an 'Opacity' slider on the Photoshop layer.
I have tried just multiplying them, which works great, but again I want to be able to control the amount of the lightmaps. I tried mix, which just blended the lightmaps and the textures, but not as a multiply blend.
Here are the images. The first is the diffuse, the second is the lightmap and the third is them combined at 50% opacity with the lightmaps.
http://imgur.com/Zwg9IZr,6hq0t0p,7hR88I2#0
So, my question is: how do I multiply blend these with the ambient light acting as an "opacity" factor? Again, I have tried a direct mix, but it behaves more like an overlay than a multiply blend.
My GLSL fragment source:
#version 120
uniform sampler2D texture;
varying vec2 outtexture;
varying vec4 colorout; // 'in' is not a valid global qualifier in GLSL 1.20

void main(void)
{
    float ambient = 1.0; // no 'f' suffix in GLSL 1.20
    vec4 textureColor = texture2D(texture, outtexture);
    vec4 lighting = colorout;

    // here is where I want to blend, with the ambient light dictating the
    // contribution from each
    gl_FragColor = textureColor * lighting;
}
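For what it's worth, a common way to mimic Photoshop's layer opacity on a multiply layer is to fade the lightmap toward white before multiplying, i.e. multiply by mix(1.0, lightmap, opacity). This is a hedged scalar sketch of that idea; the function name and values are mine, not from the thread:

```python
def multiply_with_opacity(tex, lightmap, opacity):
    """Multiply blend where `opacity` fades the lightmap layer toward white
    (a lightmap of 1.0 leaves the texture unchanged)."""
    faded = 1.0 * (1.0 - opacity) + lightmap * opacity  # mix(1.0, lightmap, opacity)
    return tex * faded

print(multiply_with_opacity(0.8, 0.5, 0.0))  # opacity 0: lightmap ignored -> 0.8
print(multiply_with_opacity(0.8, 0.5, 1.0))  # opacity 1: full multiply -> 0.4
```

Fading toward white rather than toward the texture is the difference between "layer opacity on multiply" and a plain mix of the two images.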
I was wondering how you would go about darkening the overall brightness using the normal. I am using flat shading and am calculating surface normals. Would I just decrease the magnitude of the normal? I have tried subtracting from normal.x, normal.y, and normal.z, but the effect is only darker when viewed from the correct angle.
If you want to darken a flat-shaded surface by tweaking just its normals, you should multiply them by a factor:
less than 1 to make them darker (less responsive to light)
more than 1 to make them more responsive to light (brighter, but only when lit)
However, this is not going to work in all cases (e.g. smooth shading), because there the normals get interpolated and re-normalized as part of the shading calculations.
Still, I strongly recommend leaving normals normalized.
The proper solution is to tweak the brightness of your ambient and diffuse lights and/or the ambient and diffuse components of your objects' materials.
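The multiply-by-a-factor advice works because the Lambertian diffuse term, max(dot(N, L), 0), is linear in the length of N. A quick numeric sketch (illustrative values, not from the thread):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def diffuse(normal, light_dir):
    """Lambertian diffuse term: linear in the magnitude of `normal`."""
    return max(dot(normal, light_dir), 0.0)

L = (0.0, 0.0, 1.0)                  # light direction along +z
N = (0.0, 0.0, 1.0)                  # unit normal facing the light
half_N = tuple(0.5 * c for c in N)   # normal scaled by a factor of 0.5

print(diffuse(N, L))       # 1.0
print(diffuse(half_N, L))  # 0.5: halving the normal halves the diffuse term
```

It also shows why the trick breaks under smooth shading: any stage that re-normalizes N throws the scale factor away.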
How would I implement loading a texture to be used as a specular map for a piece of geometry and rendering it in Directx9 using C++?
Are there any tutorials or basic examples I can refer to?
Use D3DXCreateTextureFromFile to load the file from disk. You then need to set up a shader that multiplies the computed specular value by the value sampled from that texture; this gives you the specular colour.
So your final pixel colour comes from
Final = ambient + (N.L * texture colour) + (N.H * texture specular)
You can do this easily in a shader.
It's also worth noting that it can be very useful to store the per-texel specular value in the alpha channel of the diffuse texture. That way you only need one texture, though it does break per-pixel transparency.