I have a terrain in OpenGL and two textures which I am combining using the GLSL mix() function.
Here are the textures I am using.
I am able to combine and mix these two textures, but for some reason, when I render them on the terrain, the terrain becomes transparent.
I render the LHS texture first and then the RHS texture, which has an alpha channel, and I don't understand why the result is transparent.
Here is an interesting fact: the screenshot shows the result when the terrain is rendered on an Nvidia GPU; when I render the same thing on an Intel HD 3000, I get a different result.
That result is how it is supposed to be; nothing is transparent in the following screenshot.
Here is my fragment shader code:
in vec2 vs_texCoord;
out vec4 out_fragColor;
uniform sampler2D u_dryTex;
uniform sampler2D u_grassTex;

void main()
{
    vec4 dryTex = texture( u_dryTex, vs_texCoord * 1 );
    vec4 grassTex = texture( u_grassTex, vs_texCoord * 1 );
    vec4 texColor1 = mix( dryTex, grassTex, grassTex.a );
    out_fragColor = texColor1;
}
Looks like you're interpolating the alpha channel as well. Component-wise, the mix does:
texColor1.r = dryTex.r*(1-grassTex.a) + grassTex.r*grassTex.a
texColor1.g = dryTex.g*(1-grassTex.a) + grassTex.g*grassTex.a
texColor1.b = dryTex.b*(1-grassTex.a) + grassTex.b*grassTex.a
texColor1.a = dryTex.a*(1-grassTex.a) + grassTex.a*grassTex.a
For an opaque dryTex (dryTex.a == 1) the resulting alpha is therefore
1 - grassTex.a + grassTex.a^2
which is less than 1 wherever 0 < grassTex.a < 1 (for example, grassTex.a = 0.5 gives an output alpha of 0.75), so with blending enabled the terrain is written with partial transparency.
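For reference, the same interpolation written out as a single GLSL expression (an illustration of the above, not code from the thread) is:
// mix(dryTex, grassTex, grassTex.a) expands to this weighted sum, alpha included:
vec4 texColor1 = dryTex * (1.0 - grassTex.a) + grassTex * grassTex.a;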
Edit:
The fix would be:
void main()
{
    vec4 dryTex = texture( u_dryTex, vs_texCoord * 1 );
    vec4 grassTex = texture( u_grassTex, vs_texCoord * 1 );
    vec4 texColor1 = vec4( mix(dryTex.rgb, grassTex.rgb, grassTex.a), 1 );
    out_fragColor = texColor1;
}
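If the terrain's own alpha should be preserved rather than forced to 1, a small variation (my sketch, using the same names as the question's shader) is to carry the dry texture's alpha through instead:
// Variation: keep the base (dry) texture's alpha instead of hard-coding 1.0
out_fragColor = vec4( mix(dryTex.rgb, grassTex.rgb, grassTex.a), dryTex.a );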
Related
I tried to draw two textures, a background and decals, but the alpha (transparent) part of the decals comes out white.
I simply tried the following:
Draw 2 textures (background & decals)
Add glBlendFunc to apply the decals' alpha value
#version 330 core
in vec2 UV;
out vec3 color;
uniform sampler2D background;
in vec3 decalCoord;
uniform sampler2D decal;
void main(){
    vec3 BGTex = texture( background, UV ).rgb;
    vec3 DecalTex = texture(decal, decalCoord.xy).rgba;
    color =
        vec4(BGTex,1.0) +   // Background texture is DDS DXT1 (I wonder if DXT1 is the cause?)
        vec4(DecalTex,0.0); // Decal texture is DDS DXT3 for alpha
}
// Set below...
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBlendEquation(GL_FUNC_ADD);
I was able to draw, but with the following problem:
- The alpha part of the decals cannot be made transparent.
Is this a problem that can be solved within the fragment shader? If so, how should I rewrite the code?
The textures are DDS: the background is DXT1 (it doesn't need alpha) and the decal is DXT3 (for alpha). Is this also a cause? (Do both need to be DXT3?)
Also, should I look for another way to apply the decals?
DecalTex should be a vec4, not a vec3; otherwise the alpha value will not be stored. You will also have to change the line at the end to color = vec4(BGTex, 1.0) + DecalTex * DecalTex.a, as it currently sets the decal's alpha component to 0.
The DecalTex has an alpha channel. The alpha channel is a weight that indicates the intensity of the decal: if the alpha channel is 1, the color of DecalTex has to be used; if the alpha channel is 0, the color of BGTex has to be used.
Use mix to blend the color of BGTex and DecalTex depending on the alpha channel of DecalTex. Of course the type of DecalTex has to be vec4 (and the output color has to be declared vec4 as well, since a vec4 is assigned to it):
vec3 BGTex = texture( background, UV ).rgb;
vec4 DecalTex = texture(decal, decalCoord.xy).rgba;
color = vec4(mix(BGTex.rgb, DecalTex.rgb, DecalTex.a), 1.0);
Note, mix linearly interpolates between the 2 values:
mix(x, y, a) = x * (1 - a) + y * a
This is similar to the operation performed by the blending function and equation:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBlendEquation(GL_FUNC_ADD);
But blending is applied to the fragment shader output and the current value in the framebuffer.
You don't need any blending at all here, because the textures are already "blended" in the fragment shader and the result is written to the framebuffer. Since the alpha channel of the fragment shader output is 1.0, the output is completely opaque.
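Putting the pieces together, a minimal corrected shader could look like the sketch below; note that the output variable also has to be declared vec4 (not vec3 as in the question), since a vec4 is assigned to it:
#version 330 core
in vec2 UV;
in vec3 decalCoord;
out vec4 color;                     // vec4, so the result's alpha can be written
uniform sampler2D background;
uniform sampler2D decal;

void main(){
    vec3 BGTex    = texture( background, UV ).rgb;
    vec4 DecalTex = texture( decal, decalCoord.xy );              // keep the decal's alpha
    color = vec4( mix( BGTex, DecalTex.rgb, DecalTex.a ), 1.0 );  // weight by decal alpha, opaque output
}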
I am trying to learn how to use shaders and GLSL. One of my shaders works, but it distorts the texture of the sprite it is applied to. I'm doing this all in SFML.
Distorted texture on left, actual texture on right:
When I started, the texture was rendered upside down, but subtracting the y component of the coordinates from 1 fixed that issue. The line that is causing the problem is
vec2 texCoord = (gl_FragCoord.xy / sourceSize.xy);
where sourceSize is a uniform passing in the resolution of something as a vec2. I've been passing various values into it and getting different distorted versions of the texture. I was wondering whether there is a ratio or something I could pass in to avoid this distortion.
Texture Size in Pixels: 512x512
Passed in values for the above image: 512x512
Shader
uniform sampler2D source;
uniform vec2 sourceSize;
uniform float time;
void main( void )
{
    vec2 texCoord = (gl_FragCoord.xy / sourceSize.xy); // Gets the pixel position in a range of 0.0 to 1.0
    texCoord = vec2( texCoord.x, 1.0 - texCoord.y );   // Inverts the y coordinate
    vec4 Color = texture2D( source, texCoord );        // Gets the current pixel colour
    gl_FragColor = Color;                              // Output
}
Found a solution. Posting it here in case others need the help.
Changing
vec4 Color = texture2D(source, texCoord); // Gets the current pixel colour
to
vec4 Color = texture2D(source, gl_TexCoord[0].xy); // Gets the current pixel colour
will fix the distortion effect.
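For context: gl_FragCoord is in window-space pixels, so dividing it by the texture size only lines up when the sprite is drawn 1:1 at the window origin; the sprite's own interpolated texture coordinate does not have that problem. A minimal fixed shader (my sketch, assuming SFML's usual compatibility-profile setup where gl_TexCoord[0] is filled in by the default vertex processing) would be:
uniform sampler2D source;
uniform float time;

void main( void )
{
    vec2 texCoord = gl_TexCoord[0].xy;          // interpolated texture coordinate from SFML
    vec4 Color = texture2D( source, texCoord ); // sample the sprite's texture directly
    gl_FragColor = Color;                       // Output
}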
There are two textures that I have to display, and they overlap. One of the textures has an alpha channel, so it is possible to blend it. However, since the texture coordinates are clumped together, I decided to scale them: gl_TexCoord[0] = gl_MultiTexCoord0 * 2.0;
This does not quite work, because it scales the alpha channel as well. How do I scale the texture coordinates but keep the alpha channel value the same?
Below are the GLSL shaders.
vertex shader:
void main()
{
    gl_TexCoord[0] = gl_MultiTexCoord0 * 2.0; // this is where I scale the texture
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
fragment shader:
uniform sampler2D textureSample_0;
uniform sampler2D textureSample_1;
void main()
{
    vec4 grass = texture2D(textureSample_1, gl_TexCoord[0].st);
    vec4 sand  = texture2D(textureSample_0, gl_TexCoord[0].st);
    gl_FragColor = grass*grass.a + sand*(1.0-grass.a); // the alpha is looked up with the scaled coordinates too
}
Any help would be appreciated.
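This question has no accepted fix in the thread, but one possible sketch in the same fixed-function style (the gl_TexCoord[1] slot and the double lookup are my own choices, not from the original post) is to pass the unscaled coordinate alongside the scaled one and take the blend weight from the unscaled lookup only:
// Vertex shader (sketch)
void main()
{
    gl_TexCoord[0] = gl_MultiTexCoord0 * 2.0; // scaled: used for the tiled colour lookups
    gl_TexCoord[1] = gl_MultiTexCoord0;       // unscaled: used only for the blend mask
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// Fragment shader (sketch)
uniform sampler2D textureSample_0;
uniform sampler2D textureSample_1;
void main()
{
    vec4 grass = texture2D(textureSample_1, gl_TexCoord[0].st);   // tiled grass colour
    vec4 sand  = texture2D(textureSample_0, gl_TexCoord[0].st);   // tiled sand colour
    float mask = texture2D(textureSample_1, gl_TexCoord[1].st).a; // unscaled alpha mask
    gl_FragColor = grass*mask + sand*(1.0-mask);
}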
I have a system which allows me to set different blending modes (those found in Photoshop) for each renderable object. Currently what I do is:
Render the renderable object into FBO B normally.
Attach the blending-mode shader program and FBO C, and blend the color attachment of FBO B with the color attachment of FBO A (FBO A contains the previous draws' final result).
Blit the result from FBO C into FBO A and proceed with the rest of the pipeline.
While this works fine, I would like to save some of the frame rate that is currently wasted on this ping-pong. I know that by default it is not possible to read pixels at the same time as writing to them, so it is not possible to bind one texture both to read from and to write to. Ideally, what I would like to do is, in stage 1, render the geometry right into FBO A, computing the blend between FBO A's color attachment texture and the input material texture.
To make it clear, here is an example.
Let's assume all the previously rendered geometry is accumulated in FBO A, and each new rendered object that needs to get blended is rendered into FBO B (just like I wrote above). Then in the blend pass (drawn into FBO C) the following shader is used (here it is "darken" blending):
uniform sampler2D bottomSampler;
uniform sampler2D topSampler;
uniform float Opacity;
// utility function that assumes NON-pre-multiplied RGB...
vec4 final_mix(
    vec4 NewColor,
    vec4 BaseColor,
    vec4 BlendColor
) {
    float A2 = BlendColor.a * Opacity;
    vec3 mixRGB = A2 * NewColor.rgb;
    mixRGB += ((1.0-A2) * BaseColor.rgb);
    return vec4(mixRGB, BaseColor.a+BlendColor.a);
}

void main(void) // fragment
{
    vec4 botColor = texture2D(bottomSampler, gl_TexCoord[0].st);
    vec4 topColor = texture2D(topSampler, gl_TexCoord[0].st);
    vec4 comp = final_mix(min(botColor,topColor), botColor, topColor);
    gl_FragColor = comp;
}
Here:
uniform sampler2D bottomSampler; - FBO A's texture attachment.
uniform sampler2D topSampler; - FBO B's texture attachment.
I use only planar geometry objects.
The output of this shader is FBO C's texture attachment, which is blitted into FBO A for the next iteration.
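For illustration only (not part of the original question): other Photoshop-style modes drop into the same scaffolding by changing just the first argument of final_mix; "darken" uses min, and a "multiply" pass would look like this:
void main(void) // fragment: multiply blend, same final_mix helper and samplers as above
{
    vec4 botColor = texture2D(bottomSampler, gl_TexCoord[0].st);
    vec4 topColor = texture2D(topSampler,    gl_TexCoord[0].st);
    gl_FragColor  = final_mix(botColor * topColor, botColor, topColor);
}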
I have a radial blur shader in GLSL, which takes a texture, applies a radial blur to it and renders the result to the screen. This works very well, so far.
The problem is that this applies the radial blur to the first texture in the scene. But what I actually want to do is apply this blur to the whole scene.
What is the best way to achieve this functionality? Can I do this with only shaders, or do I have to render the scene to a texture first (in OpenGL) and then pass this texture to the shader for further processing?
// Vertex shader
varying vec2 uv;
void main(void)
{
    gl_Position = vec4( gl_Vertex.xy, 0.0, 1.0 );
    gl_Position = sign( gl_Position );               // snaps the quad's corners to the NDC corners (+/-1)
    // map from NDC [-1,1] to texture space [0,1], flipping y
    uv = (vec2( gl_Position.x, -gl_Position.y ) + vec2(1.0)) / vec2(2.0);
}
// Fragment shader
uniform sampler2D tex;
varying vec2 uv;
const float sampleDist = 1.0;
const float sampleStrength = 2.2;
void main(void)
{
    float samples[10];
    samples[0] = -0.08;
    samples[1] = -0.05;
    samples[2] = -0.03;
    samples[3] = -0.02;
    samples[4] = -0.01;
    samples[5] =  0.01;
    samples[6] =  0.02;
    samples[7] =  0.03;
    samples[8] =  0.05;
    samples[9] =  0.08;

    vec2 dir = 0.5 - uv;
    float dist = sqrt(dir.x*dir.x + dir.y*dir.y);
    dir = dir/dist;

    vec4 color = texture2D(tex, uv);
    vec4 sum = color;

    for (int i = 0; i < 10; i++)
        sum += texture2D( tex, uv + dir * samples[i] * sampleDist );

    sum *= 1.0/11.0;

    float t = dist * sampleStrength;
    t = clamp( t, 0.0, 1.0 );

    gl_FragColor = mix( color, sum, t );
}
This is basically called "post-processing", because you're applying an effect (here: radial blur) to the whole scene after it's rendered.
So yes, you're right: the usual way to do post-processing is to:
create a screen-sized texture (GL_TEXTURE_RECTANGLE, or a plain NPOT GL_TEXTURE_2D on GL 2.0+ hardware),
create an FBO and attach the texture to it,
bind this FBO and render the scene,
unbind the FBO and draw a full-screen quad with the FBO's texture.
As for the "why", the reason is simple: the scene is rendered in parallel (the fragment shader is executed independently for many pixels). In order to do radial blur for pixel (x,y), you first need to know the pre-blur pixel values of the surrounding pixels. And those are not available in the first pass, because they are only being rendered in the meantime.
Therefore, you must apply the radial blur only after the whole scene is rendered, when the fragment shader for fragment (x,y) is able to read any pixel of the scene. That is why you need two rendering passes.