I tried to draw two textures, a background and a decal, but the alpha part of the decal is drawn as white instead of transparent.
What I tried is simple:
- Draw the 2 textures (background & decal)
- Add glBlendFunc to apply the decal's alpha value
#version 330 core
in vec2 UV;
out vec3 color;
uniform sampler2D background;
in vec3 decalCoord;
uniform sampler2D decal;
void main(){
    vec3 BGTex = texture( background, UV ).rgb;
    vec3 DecalTex = texture(decal, decalCoord.xy).rgba;
    color =
        vec4(BGTex,1.0) + // Background texture is DDS DXT1 (I wonder if DXT1 is the cause?)
        vec4(DecalTex,0.0); // Decal texture is DDS DXT3 for alpha
}
// Set below...
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBlendEquation(GL_FUNC_ADD);
Drawing itself works, but there is the following problem:
- The alpha part of the decal cannot be made transparent.
Is this a problem that can be solved within the fragment shader?
If so, how should I rewrite the code?
The textures are DDS: the background is DXT1 (because it doesn't need alpha) and the decal is DXT3 (for its alpha). Could this also be the cause? (Do both need to be DXT3?)
Or should I look for another way to apply the decals?
DecalTex should be a vec4, not a vec3. Otherwise the alpha value will not be stored.
You will also have to change the line at the end to:
color = vec4(BGTex, 1.0) + DecalTex * DecalTex.a;
as it currently sets the alpha component to 0.
The DecalTex has an alpha channel. The alpha channel is "weight", which indicates the intensity of the DecalTex. If the alpha channel is 1, then the color of DecalTex has to be used, if the alpha channel is 0, then the color of BGTex has to be used.
Use mix to blend the color of BGTex and DecalTex depending on the alpha channel of DecalTex. Of course, the type of DecalTex has to be vec4:
vec3 BGTex = texture( background, UV ).rgb;
vec4 DecalTex = texture(decal, decalCoord.xy).rgba;
color = vec4(mix(BGTex.rgb, DecalTex.rgb, DecalTex.a), 1.0);
Note, mix linearly interpolates between the 2 values:
mix(x, y, a) = x * (1−a) + y * a
This is similar to the operation performed by the blending function and equation:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBlendEquation(GL_FUNC_ADD);
But Blending is applied to the fragment shader output and the current value in the framebuffer.
You don't need any blending at all, because the textures are "blended" in the fragment shader and the result is put in the framebuffer. Since the alpha channel of the fragment shader output is >= 1.0, the output is completely "opaque".
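To see that the shader-side mix and the fixed-function blending above really compute the same color, here is a small Python sketch; the pixel values are made up for illustration:

```python
# Sketch: GLSL mix(x, y, a) vs. blending with
# glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) and GL_FUNC_ADD.

def mix(x, y, a):
    """GLSL mix: x * (1 - a) + y * a, applied per component."""
    return tuple(xi * (1.0 - a) + yi * a for xi, yi in zip(x, y))

def blend_src_alpha(src_rgb, src_a, dst_rgb):
    """Fixed-function blending: src * a + dst * (1 - a)."""
    return tuple(s * src_a + d * (1.0 - src_a) for s, d in zip(src_rgb, dst_rgb))

bg    = (0.2, 0.4, 0.6)   # BGTex.rgb (plays the role of the framebuffer content)
decal = (1.0, 0.0, 0.0)   # DecalTex.rgb (the fragment shader output)
alpha = 0.25              # DecalTex.a

print(mix(bg, decal, alpha))               # shader-side result
print(blend_src_alpha(decal, alpha, bg))   # fixed-function result, identical
```

Both calls print the same color, which is why the blending stage can be left disabled when the mixing is done in the fragment shader.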
These are my blending functions.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBlendEquation(GL_FUNC_ADD);
I mix two textures in the fragment shader using mix function.
FragColor =
    mix(texture(texture1, TexCoord), texture(texturecubeMap, ransformedR), MixValueVariables.CubeMapValue) *
    vec4(result, material.opacity / 100.0); // multiplying texture with material value
Despite changing the blend equation or blend function types, I don't see any difference in the output.
Does the mix function work with the blend types?
Does the mix function work with the blend types?
No. Blending and the glsl function mix are completely different things.
The function mix(x, y, a) always calculates x * (1−a) + y * a.
Blending takes the fragment color outputs from the Fragment Shader and combines them with the colors in the color buffer of the frame buffer.
If you want to use blending, then you have to:
1. draw the geometry with the first texture,
2. activate blending,
3. draw the geometry with the second texture.
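The three steps above can be modeled with a tiny software "framebuffer" in Python (a single pixel with made-up values) to show where blending actually happens:

```python
# Sketch of the two-pass flow: the first draw simply overwrites the buffer,
# the second draw is combined with it by the blending stage.

framebuffer = [0.0, 0.0, 0.0]  # cleared color buffer, one RGB pixel

def draw_no_blend(fb, src_rgb):
    """Pass 1: blending disabled, the source color replaces the buffer."""
    fb[:] = src_rgb

def draw_blended(fb, src_rgb, src_a):
    """Pass 2: GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA -> src*a + dst*(1-a)."""
    fb[:] = [s * src_a + d * (1.0 - src_a) for s, d in zip(src_rgb, fb)]

draw_no_blend(framebuffer, [0.2, 0.4, 0.6])      # geometry with texture1
draw_blended(framebuffer, [1.0, 0.0, 0.0], 0.5)  # geometry with texture2
print(framebuffer)                               # texture2 blended over texture1
```

The point is that blending only ever sees the fragment output and the value already in the framebuffer, never the two textures side by side.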
If you just want to blend the textures in the fragment shader, then you have to use GLSL arithmetic. See GLSL Programming/Vector and Matrix Operations.
e.g.:
vec4 MyBlend(vec4 source, vec4 dest, float alpha)
{
    return source * alpha + dest * (1.0 - alpha);
}

void main()
{
    vec4 c1 = texture(texture1, TexCoord);
    vec4 c2 = texture(texturecubeMap, ransformedR);
    float f = MixValueVariables.CubeMapValue;
    FragColor = MyBlend(c1, c2, f) * vec4(result, material.opacity / 100.0);
}
Adapt the function MyBlend to your needs.
Currently I'm using FreeType in subpixel mode and take the largest color channel of each pixel as the alpha value, with the following fragment shader:
uniform sampler2D Image;
uniform vec4 Color;
smooth in vec2 vVaryingTexCoord;
out vec4 vFragColor;
void main(void){
    vec4 color = texture(Image, vVaryingTexCoord);
    vFragColor = color * Color;
}
This works fine for dark backgrounds, but on lighter ones the border pixels show (e.g. when a text pixel is (1,0,0)). To make it work with brighter backgrounds, I'd need to pass the background color and do the blending myself, which starts breaking down once I move to more complex backgrounds.
Is there a way to use the RGB values from FreeType as alpha values for a solid color (which is passed to the shader)? This formula basically, where b = background pixel, t = current text pixel, c = static color:
b*((1,1,1) - t) + t*c.rgb*c.a
I think drawing everything else first and passing that framebuffer to the font shader would work, but that seems a bit overkill. Is there a way doing this in the OpenGL blend stage? I tried playing around with glBlendFunc and such, but didn't get anywhere.
It's possible using Dual Source Blending, available since OpenGL 3.3. This spec draft even mentions subpixel rendering as a use case. All that is needed to make it work:
glBlendFunc(GL_SRC1_COLOR, GL_ONE_MINUS_SRC1_COLOR);
(don't forget to enable GL_BLEND, it happens to me all the time :D)
Specify dual output in the fragment shader: (You can bind by name instead if you want, see spec)
layout(location = 0, index = 0) out vec4 color;
layout(location = 0, index = 1) out vec4 colorMask;
In main:
color = StaticColor;
colorMask = StaticColor.a*texel;
Where StaticColor is the global text color uniform, and texel is the current pixel value of the glyph.
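A small Python sketch (made-up pixel values) of what the hardware computes per channel with GL_SRC1_COLOR / GL_ONE_MINUS_SRC1_COLOR; for StaticColor.a == 1.0 this reduces exactly to the b*((1,1,1) - t) + t*c formula from the question:

```python
# Dual source blending, per channel:
#   dst' = src0 * src1 + dst * (1 - src1)
# where src0 is the `color` output and src1 is the `colorMask` output.

def dual_source_blend(color, color_mask, dst):
    return tuple(c * m + d * (1.0 - m) for c, m, d in zip(color, color_mask, dst))

static_color = (1.0, 0.5, 0.0, 1.0)  # text color uniform (RGBA), made up
texel        = (0.9, 0.3, 0.1)       # FreeType subpixel coverage (RGB), made up
background   = (0.8, 0.8, 0.8)       # current framebuffer pixel, made up

color      = static_color[:3]                       # first shader output
color_mask = tuple(static_color[3] * t for t in texel)  # second shader output

result = dual_source_blend(color, color_mask, background)
print(result)  # per-channel coverage decides how much text color shows
```

Each subpixel gets its own blend weight, which is exactly what a single scalar alpha cannot express.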
Recently I read an article about a sun shader (XNA Sun Shader) and decided to implement it using OpenGL ES 2.0, but I ran into a problem with the shader:
I have two textures. One of them is a fire gradient texture:
And the other is a texture in which each white part must be colored by the first texture:
So I want to get a result like the one below (ignore that the result texture is rendered on a sphere mesh):
I really hope that somebody knows how to implement this shader.
You can first sample the original texture; if the color is white, then sample the gradient texture instead.
uniform sampler2D Texture0; // original texture
uniform sampler2D Texture1; // gradient texture
varying vec2 texCoord;
void main(void)
{
    gl_FragColor = texture2D( Texture0, texCoord );

    // If the color in the original texture is white,
    // use the color from the gradient texture.
    if (gl_FragColor == vec4(1.0, 1.0, 1.0, 1.0)) {
        gl_FragColor = texture2D( Texture1, texCoord );
    }
}
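The selection logic in that shader can be sketched in Python (hypothetical pixel values). Note that an exact equality test against pure white only works if the mask texture really stores 1.0 in all channels; the sketch adds a small threshold as a practical safeguard, which the shader above does not have:

```python
# Per pixel: keep the original color unless it is (near) pure white,
# in which case take the gradient color instead.

def shade(original, gradient, threshold=0.999):
    """Mimics the fragment shader's white test. The threshold is an
    assumption added here; the shader uses exact equality."""
    if all(c >= threshold for c in original):
        return gradient
    return original

grad = (1.0, 0.4, 0.0, 1.0)  # sampled from the fire gradient texture
print(shade((1.0, 1.0, 1.0, 1.0), grad))  # white pixel -> gradient color
print(shade((0.1, 0.1, 0.1, 1.0), grad))  # dark pixel  -> unchanged
```

With texture filtering or compressed textures, "white" pixels often come back slightly below 1.0, which is why a threshold (or a dedicated mask channel) tends to be more robust than `==`.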
I have a 2d texture that I loaded with
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, gs.width(), gs.height(), 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, gs.buffer());
where gs is an object with methods that return the proper types.
In the fragment shader I sample from the texture and attempt to use that as the alpha channel for the resultant color. If I use the sampled value for other channels in the output texture it produces what I would expect. Any value that I use for the alpha channel appears to be ignored, because it always draws Color.
I am clearing the screen using:
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
Can anyone suggest what I might be doing wrong? I am getting an OpenGL 4.0 context with 8 red, 8 green, 8 blue, and 8 alpha bits.
Vertex Shader:
#version 150
in vec2 position;
in vec3 color;
in vec2 texcoord;
out vec3 Color;
out vec2 Texcoord;
void main()
{
    Texcoord = texcoord;
    Color = color;
    gl_Position = vec4(position, 0.0, 1.0);
}
Fragment Shader:
#version 150
in vec3 Color;
in vec2 Texcoord;
out vec4 outColor;
uniform sampler2D tex;
void main()
{
    float t = texture(tex, Texcoord);
    outColor = vec4(Color, t);
}
Frankly, I am surprised this actually works. texture (...) returns a vec4 (unless you are using a shadow/integer sampler, which you are not). You really ought to be swizzling that texture down to just a single component if you intend to store it in a float.
I am guessing you want the alpha component of your texture, but who honestly knows -- try this instead:
float t = texture (tex, Texcoord).a; // Get the alpha channel of your texture
A half-way decent GLSL compiler would warn/error you for doing what you are trying to do right now. I suspect yours does as well, but you are not checking the shader info log when you compile your shader.
Update:
The original answer did not even begin to address the madness you are doing with your GL_DEPTH_COMPONENT internal format texture. I completely missed that because the code did not fit on screen.
Why are you using gs.rgba() to pass data to a texture whose internal and pixel transfer format is exactly 1 component? Also, if you intend to use a depth texture in your shader then the reason it is always returning a=1.0 is actually very simple:
Beginning with GLSL 1.30, when sampled using texture (...), depth textures are automatically setup to return the following vec4:
vec4 (r, r, r, 1.0).
The RGB components are replaced with the value of R (the floating-point depth), and A is replaced with a constant value of 1.0.
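As a tiny sketch of that rule (with a hypothetical depth value), sampling a depth texture with texture(...) behaves like:

```python
def sample_depth_texture(r):
    """GLSL >= 1.30: a depth texture sample expands to vec4(r, r, r, 1.0),
    so the alpha channel is always 1.0 regardless of the stored depth."""
    return (r, r, r, 1.0)

print(sample_depth_texture(0.25))  # -> (0.25, 0.25, 0.25, 1.0)
```

That constant 1.0 in the alpha slot is exactly why using the sampled value as an alpha channel appears to be "ignored".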
Your issue is that you're only passing in a vec3 when you need a vec4. RGBA is 4 components, not just three.
I have a system which allows me to set different blending modes (those found in Photoshop) for each renderable object. Currently what I do is:
1. Render the renderable object into FBO B normally.
2. Attach the blending-mode shader program and FBO C, and blend the color attachment from FBO B with the color attachment from FBO A (FBO A contains the previous draws' final result).
3. Blit the result from FBO C into FBO A and proceed with the rest of the pipeline.
While this works fine, I would like to save some frame rate, which is currently wasted on this ping-pong. I know that by default it is not possible to read pixels at the same time as writing to them, so it is not possible to set a texture to both read from and write to. Ideally, what I would like to do is, in stage 1, render the geometry right into FBO A, processing the blend between FBO A's color attachment texture and the input material texture.
To make it clear, here is an example.
Let's assume all the previously rendered geometry is accumulated in FBO A, and each new rendered object that needs to get blended is rendered into FBO B (just like I wrote above). Then in the blend pass (drawn into FBO C) the following shader is used (here it is "darken" blending):
uniform sampler2D bottomSampler;
uniform sampler2D topSampler;
uniform float Opacity;

// utility function that assumes NON-pre-multiplied RGB...
vec4 final_mix(
    vec4 NewColor,
    vec4 BaseColor,
    vec4 BlendColor
) {
    float A2 = BlendColor.a * Opacity;
    vec3 mixRGB = A2 * NewColor.rgb;
    mixRGB += ((1.0 - A2) * BaseColor.rgb);
    return vec4(mixRGB, BaseColor.a + BlendColor.a);
}

void main(void) // fragment
{
    vec4 botColor = texture2D(bottomSampler, gl_TexCoord[0].st);
    vec4 topColor = texture2D(topSampler, gl_TexCoord[0].st);
    vec4 comp = final_mix(min(botColor, topColor), botColor, topColor);
    gl_FragColor = comp;
}
Here:
uniform sampler2D bottomSampler; - FBO A's texture attachment
uniform sampler2D topSampler; - FBO B's texture attachment
I use only plane geometry objects.
The output of this shader is FBO C's texture attachment, which is blitted into FBO A for the next iteration.
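For reference, the darken pass above boils down to the following arithmetic, sketched here in Python with made-up, non-premultiplied RGBA colors:

```python
# Port of the shader's final_mix: blends NewColor over BaseColor with weight
# A2 = BlendColor.a * opacity; for "darken", NewColor = min(bottom, top).

def final_mix(new_color, base_color, blend_color, opacity):
    a2 = blend_color[3] * opacity
    rgb = tuple(a2 * n + (1.0 - a2) * b
                for n, b in zip(new_color[:3], base_color[:3]))
    return rgb + (base_color[3] + blend_color[3],)

def darken(bottom, top, opacity):
    new = tuple(min(b, t) for b, t in zip(bottom, top))
    return final_mix(new, bottom, top, opacity)

bottom = (0.6, 0.2, 0.9, 1.0)  # FBO A (accumulated result), made up
top    = (0.3, 0.5, 0.4, 0.5)  # FBO B (new object), made up
print(darken(bottom, top, 1.0))
```

This only models the per-pixel math, not the FBO ping-pong; the question is precisely about whether that math can read and write FBO A's attachment in a single pass.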