Use RGB texture as alpha values / Subpixel font rendering in OpenGL

Currently I'm using FreeType in subpixel mode and taking the largest color channel of each pixel as the alpha value, with the following fragment shader:
uniform sampler2D Image;
uniform vec4 Color;
smooth in vec2 vVaryingTexCoord;
out vec4 vFragColor;
void main(void){
    vec4 color = texture(Image, vVaryingTexCoord);
    vFragColor = color * Color;
}
This works fine for dark backgrounds, but on lighter ones the border pixels show (e.g. when a text pixel is (1,0,0)). To make it work with brighter backgrounds, I'd need to pass the background color and do the blending myself, which starts breaking down once I move to more complex backgrounds.
Is there a way to use the RGB values from FreeType as alpha values for a solid color (which is passed to the shader)? Basically this formula, where b = background pixel, t = current text pixel, c = static color:
b*((1,1,1) - t) + t*c.rgb*c.a
I think drawing everything else first and passing that framebuffer to the font shader would work, but that seems a bit overkill. Is there a way to do this in the OpenGL blend stage? I tried playing around with glBlendFunc and such, but didn't get anywhere.

It's possible using Dual Source Blending, available since OpenGL 3.3. This spec draft even mentions subpixel rendering as a use case. All that is needed to make it work:
glBlendFunc(GL_SRC1_COLOR, GL_ONE_MINUS_SRC1_COLOR);
(don't forget to enable GL_BLEND; forgetting it happens to me all the time :D)
Specify the dual outputs in the fragment shader (you can bind them by name instead if you want; see the spec):
layout(location = 0, index = 0) out vec4 color;
layout(location = 0, index = 1) out vec4 colorMask;
In main:
color = StaticColor;
colorMask = StaticColor.a*texel;
Where StaticColor is the global text color uniform, and texel is the current pixel value of the glyph.
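Put together, the whole fragment shader ends up roughly like this (a minimal sketch; the uniform names Image and StaticColor and the interpolated texture coordinate are assumptions carried over from the question's shader, not part of the original answer):
#version 330 core
uniform sampler2D Image;        // glyph texture from FreeType (subpixel RGB coverage)
uniform vec4 StaticColor;       // text color
smooth in vec2 vVaryingTexCoord;
layout(location = 0, index = 0) out vec4 color;
layout(location = 0, index = 1) out vec4 colorMask;
void main(void){
    vec4 texel = texture(Image, vVaryingTexCoord);
    color = StaticColor;                // source color
    colorMask = StaticColor.a * texel;  // per-channel blend weights (second source)
}
With GL_BLEND enabled and glBlendFunc(GL_SRC1_COLOR, GL_ONE_MINUS_SRC1_COLOR), the framebuffer receives color.rgb * colorMask.rgb + dst.rgb * (1 - colorMask.rgb), which for StaticColor.a = 1 is exactly the formula b*((1,1,1) - t) + t*c.rgb*c.a from the question.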

Related

Render a texture (decal) with an alpha value?

I tried to draw two textures, a background and a decal, but the alpha part of the decal just becomes white.
I simply tried the following:
Draw 2 textures (background & decal)
Add glBlendFunc to apply the decal's alpha value
#version 330 core
in vec2 UV;
out vec3 color;
uniform sampler2D background;
in vec3 decalCoord;
uniform sampler2D decal;
void main(){
    vec3 BGTex = texture( background, UV ).rgb;
    vec3 DecalTex = texture(decal, decalCoord.xy).rgba;
    color =
        vec4(BGTex,1.0) +   // Background texture is DDS DXT1 (I wonder if DXT1 is the cause?)
        vec4(DecalTex,0.0); // Decal texture is DDS DXT3 for alpha
}
// Set below...
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBlendEquation(GL_FUNC_ADD);
I was able to draw normally, but there is the following problem:
  -The alpha part of the decal cannot be made transparent.
Is this a problem that can be solved within the fragment shader? If so, how should I rewrite the code?
The textures are DDS; one is DXT1 (the background, since it doesn't need alpha) and the other is DXT3 (the decal, for alpha). Is this also a cause? (Do both need to be DXT3?)
Also, should I look for another way to apply the decals?
DecalTex should be a vec4, not a vec3. Otherwise the alpha value will not be stored.
You will also have to change the line at the end to:
color = vec4(BGTex, 1.0) + DecalTex * DecalTex.a
as currently it sets the alpha component to 0.
The DecalTex has an alpha channel. The alpha channel is a "weight" which indicates the intensity of the DecalTex: if the alpha channel is 1, then the color of DecalTex has to be used; if the alpha channel is 0, then the color of BGTex has to be used.
Use mix to blend the color of BGTex and DecalTex depending on the alpha channel of DecalTex. Of course the type of DecalTex has to be vec4:
vec3 BGTex = texture( background, UV ).rgb;
vec4 DecalTex = texture(decal, decalCoord.xy).rgba;
color = vec4(mix(BGTex.rgb, DecalTex.rgb, DecalTex.a), 1.0);
Note, mix linearly interpolates between the two values:
mix(x, y, a) = x * (1 - a) + y * a
This is similar to the operation which is performed by the blending function and equation:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBlendEquation(GL_FUNC_ADD);
But blending is applied to the fragment shader output and the current value in the framebuffer.
You don't need any blending at all, because the textures are "blended" in the fragment shader and the result is put in the framebuffer. Since the alpha channel of the fragment shader output is >= 1.0, the output is completely "opaque".
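Putting the pieces together, the asker's fragment shader would end up looking roughly like the following (a sketch only; it keeps the question's uniform and varying names and declares the output as vec4, as implied above):
#version 330 core
in vec2 UV;
in vec3 decalCoord;
uniform sampler2D background;
uniform sampler2D decal;
out vec4 color;   // vec4, so an explicit alpha of 1.0 reaches the framebuffer
void main(){
    vec3 BGTex = texture(background, UV).rgb;
    vec4 DecalTex = texture(decal, decalCoord.xy);   // keep the alpha channel
    // blend the decal over the background by its alpha; output is fully opaque
    color = vec4(mix(BGTex, DecalTex.rgb, DecalTex.a), 1.0);
}
With this, glEnable(GL_BLEND) and the glBlendFunc/glBlendEquation calls can simply be dropped.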

OpenGL: Compute Shader - gl_GlobalInvocationID giving static output

So I have a Compute Shader that is supposed to take a texture and copy it over to another texture with slight modifications. I have confirmed, using RenderDoc (a graphics debugging tool), that the textures are bound and that data can be written. The issue I have is that inside the shader the variable gl_GlobalInvocationID, which is provided by OpenGL, does not seem to work properly.
Here is my call of the compute shader: (The texture height is 480)
glDispatchCompute(1, this->m_texture_height, 1); //Call upon shader
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
And then we have my compute shader here:
#version 440
#extension GL_ARB_compute_shader : enable
#extension GL_ARB_shader_image_load_store : enable
layout (rgba8, binding=0) uniform image2D texture_source0;
layout (rgba8, binding=1) uniform image2D texture_target0;
layout (local_size_x=640 , local_size_y=1 , local_size_z=1) in; //Local work-group size
void main() {
    ivec2 txlPos; //A variable keeping track of where on the texture current texel is from
    vec4 result;  //A variable to store color
    txlPos = ivec2(gl_GlobalInvocationID.xy);
    //txlPos = ivec2( (gl_WorkGroupID * gl_WorkGroupSize + gl_LocalInvocationID).xy );
    result = imageLoad(texture_source0, txlPos); //Get color value
    barrier();
    result = vec4(txlPos, 0.0, 1.0);
    imageStore(texture_target0, txlPos, result); //Save color in target texture
}
When I run this the target texture becomes entirely yellow, save for a 1 pxl thick green line along the left border and a 1 pxl thick red line along the bottom border. My expectation is to see some sort of gradient, given that I save txlPos as a colour value.
Am I somehow defining my work-groups wrong? I've tried splitting gl_GlobalInvocationID up into its components but haven't managed to get any wiser fiddling with them.
An 8-bit normalized texture format such as rgba8 can only store values between 0 and 1. Since gl_GlobalInvocationID is in most cases larger than 1, it gets clamped to the maximum value of 1, which makes the texture yellow.
If you want to create a gradient in both directions, then you have to make sure that the values stored start at 0 and end at 1. One possibility is to divide by the maximum:
result = vec4(vec2(gl_GlobalInvocationID.xy) / vec2(640, 480), 0, 1);
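If you would rather not hard-code the dimensions, the same normalization can be expressed against the image size itself; a minimal sketch (assuming, as in the question, that texture_target0 is the 640x480 target image):
void main() {
    ivec2 txlPos = ivec2(gl_GlobalInvocationID.xy);
    ivec2 size = imageSize(texture_target0);               // (640, 480) here
    // divide by the image size so the stored values stay within [0, 1]
    vec4 result = vec4(vec2(txlPos) / vec2(size), 0.0, 1.0);
    imageStore(texture_target0, txlPos, result);
}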

Extracting scene "bright parts" and flicker in OpenGL scene

I've written the following shader to perform a bright pass of my scene so I can extract luminance for later blurring as part of a "glow" effect.
// "Bright" pixel shader.
#version 420
uniform sampler2D Map_Diffuse;
uniform float uniform_Threshold;
in vec2 attrib_Fragment_Texture;
out vec4 Out_Colour;
void main(void)
{
    vec3 luminances = vec3(0.2126, 0.7152, 0.0722);
    vec4 texel = texture2D(Map_Diffuse, attrib_Fragment_Texture);
    float luminance = dot(luminances, texel.rgb);
    luminance = max(0.0, luminance - uniform_Threshold);
    texel.rgb *= sign(luminance);
    texel.a = 1.0;
    Out_Colour = texel;
}
The bright areas are successfully extracted, however there are some unstable features in the scene at times, resulting in pixels that flicker on and off for a while. When this is blurred the effect is more pronounced, with bits of glow flickering too. The artifacts occur in, for example, the third image in the screenshot I've posted, where the object is in shadow and so there's far less luminance in the scene. They're mostly present in the transition from facing away from the light to facing towards it (during rotation of the object), where the edge is just hitting the light.
My question is to ask whether there's a way you can detect and mitigate this in the shader. Note that the bright pass is part of a general down-sample, from screen resolution to 512x512.
You could also read the surrounding pixels and base your math on those, somewhat like what is done here.
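A minimal sketch of that idea (not from the original answer; it reuses the sampler and varying names from the question's shader and assumes one-texel textureOffset offsets are supported): average a small cross-shaped neighbourhood before thresholding, so a single unstable pixel can no longer flip the result on its own.
vec3 luminances = vec3(0.2126, 0.7152, 0.0722);
vec3 sum = texture(Map_Diffuse, attrib_Fragment_Texture).rgb;
sum += textureOffset(Map_Diffuse, attrib_Fragment_Texture, ivec2(-1, 0)).rgb;
sum += textureOffset(Map_Diffuse, attrib_Fragment_Texture, ivec2( 1, 0)).rgb;
sum += textureOffset(Map_Diffuse, attrib_Fragment_Texture, ivec2( 0,-1)).rgb;
sum += textureOffset(Map_Diffuse, attrib_Fragment_Texture, ivec2( 0, 1)).rgb;
float luminance = dot(luminances, sum / 5.0);   // luminance of the averaged neighbourhood
The averaged luminance is then thresholded exactly as in the original shader.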

Changing color of fragment

I have written a fragment shader in which I would like to change the color of the fragment. For example, if the color it receives is black then it should change it to blue.
This is the shader that I am using:
uniform sampler2D mytex;
layout (pixel_center_integer) in vec4 gl_FragCoord;
uniform sampler2D texture1;
void main ()
{
    ivec2 screenpos = ivec2 (gl_FragCoord.xy);
    vec4 color = texelFetch (mytex, screenpos, 0);
    if (color == vec4 (0.0,0.0,0.0,1.0)) {
        color = (0.0,0.0,0.0,0.0);
    }
    gl_FragColor = texture2D (texture1, gl_TexCoord[0].st);
}
And here is the log that I am getting from it:
WARNING: -1:65535: 'GL_ARB_explicit_attrib_location' : extension is not available in current GLSL version
WARNING: 0:1: 'texelFetch' : function is not available in current GLSL version
I am aware of the warnings, but shouldn't it compile anyway?
The shader is not doing what I would like it to do; can someone explain why?
For one thing, you are using functions that are not available in your GLSL implementation. The result of calling these will be undefined.
However, the kicker here is that gl_FragColor has absolutely NOTHING to do with the value of color in this shader. So even if your texelFetch (...) logic actually did work correctly, changing the value of color does nothing to the final output. A smart compiler will see this as a no-op and effectively strip your shader down to this:
uniform sampler2D texture1;
void main ()
{
gl_FragColor = texture2D (texture1, gl_TexCoord[0].st);
}
If that were not enough, texelFetch (...) is completely unnecessary in this shader. If you want to look up the texel that corresponds to the current fragment, and the texture has the same dimensions as the viewport you are drawing into, you can use texture2D with gl_FragCoord.xy divided by the texture dimensions. This is because the default behaviour in GLSL is to have gl_FragCoord supply the coordinate of the fragment's center (x+0.5, y+0.5), which is also the center of the corresponding texel in your texture (if it is the same resolution), so the normalized lookup will hit the right texel without texture filtering altering your sampled result.
texelFetch (...) lets you fetch an explicit texel in a texture without using normalized coordinates, it is sort of like a "grownup" rectangle texture :) It is generally useful if you are using a multisample texture and want a specific sample, or if you want to bypass texture filtering (which includes mipmap level selection). In this case, it is not needed at all.
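To make the difference concrete, here is a small illustration (an added sketch, not part of the original answer; it assumes texture1 has the same dimensions as the viewport and a GLSL version that provides texelFetch and textureSize):
// explicit integer texel fetch: no normalization, no filtering, no mipmap selection
vec4 a = texelFetch (texture1, ivec2 (gl_FragCoord.xy), 0);
// equivalent lookup using normalized coordinates
vec4 b = texture2D (texture1, gl_FragCoord.xy / vec2 (textureSize (texture1, 0)));
Both lines sample the texel under the current fragment.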
This is probably what you really want (OpenGL 3.2):
#version 150
uniform sampler2D mytex;
uniform sampler2D texture1;
layout (location=0) out vec4 frag_color;
layout (location=1) out vec4 mytex_color;
void main ()
{
    // normalize gl_FragCoord.xy to [0,1] so it can be used with texture2D
    mytex_color = texture2D (mytex, gl_FragCoord.xy / vec2 (textureSize (mytex, 0)));
    // This is not black->blue like you explained in your question...
    // ... This is generally opaque->transparent, assuming 4th component = alpha
    if (mytex_color == vec4 (0.0,0.0,0.0,1.0)) {
        mytex_color = vec4 (0.0);
    }
    frag_color = texture2D (texture1, gl_TexCoord[0].st);
}
In older GLSL versions, you will have to use glBindFragDataLocation (...) and set the data locations manually or use gl_FragData[n] instead of out variables.
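For the older-GLSL path, the binding happens on the application side before linking, along these lines (a sketch; program is assumed to be the shader program handle, and the output names match the shader above):
// associate each fragment shader output with a color number before linking
glBindFragDataLocation (program, 0, "frag_color");   // color number 0 (first entry of glDrawBuffers)
glBindFragDataLocation (program, 1, "mytex_color");  // color number 1 (second entry of glDrawBuffers)
glLinkProgram (program);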
Now the real problem here is that you seem to want to change the color of the texture you are sampling from. That will not work; at best you will have to use two fragment data outputs. Writing into the same texture you are sampling from can be done under some very controlled circumstances, but generally what you would do is ping-pong between textures. In other words, you would fetch from one texture, write to another texture, and in all subsequent render passes the reference to the original texture should be swapped to the one you just wrote to.
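The ping-pong scheme itself, in rough C terms (a sketch with assumed names; drawFullscreenQuad and passCount are hypothetical, and the point is only that you never sample and write the same texture in one pass):
GLuint tex[2];    // two textures of the same size and format
GLuint fbo[2];    // fbo[i] has tex[i] attached as GL_COLOR_ATTACHMENT0
int src = 0, dst = 1;
for (int pass = 0; pass < passCount; ++pass) {
    glBindFramebuffer (GL_FRAMEBUFFER, fbo[dst]);   // render into the destination texture
    glBindTexture (GL_TEXTURE_2D, tex[src]);        // sample from the source texture
    drawFullscreenQuad ();                          // hypothetical draw call
    int tmp = src; src = dst; dst = tmp;            // swap roles for the next pass
}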
See "Fragment Data Location" for more information on Multiple Render Target drawing.

GLSL - put decal texture into base texture with color indication

I'm trying to write a simple shader to put a "mark" (64*64) onto a base texture (128*128); to indicate where the mark must go, I use a cyan-colored, mark-sized (64*64) region on the base texture.
Fragment shader
precision lowp float;
uniform sampler2D us_base_tex;
uniform sampler2D us_mark_tex;
varying vec2 vv_base_tex;
varying vec2 vv_mark_tex;
const vec4 c_mark_col = vec4(0.0, 1.0, 1.0, 1.0);//CYAN
void main()
{
    vec4 base_col = texture2D(us_base_tex, vv_base_tex);
    if(base_col == c_mark_col)
    {
        vec4 mark_col = texture2D(us_mark_tex, vv_mark_tex); //texelFetch magic must go over here
        base_col = mix(base_col, mark_col, mark_col.a);
    }
    gl_FragColor = base_col;
}
Of course, it doesn't work as it should; I got something like this (transparency only for demonstration, there is no cyan region, only a piece of the "T"):
I've tried to figure it out, and it seems only something like texelFetch will help me, but I can't work out how to get the texture coordinate of a cyan texel on the base texture and convert it so that the first col/first row cyan base texel maps to the first col/first row mark texel, the second col/first row base texel to the second col/first row of the mark, etc.
I think there's a way to do this in a single pass - but it involves using another texture that is capable of holding the information presented below. So you're going to increase your texture memory usage.
In this approach, the second texture (it can be generated by post-processing the original texture, offline or otherwise) contains the UV map for the decal:
R = normalized distance from left of cyan square
G = normalized distance from the top of the cyan square
B = don't care
Now the pixel shader is simple: all it needs to do is check whether the current texel is cyan, pick the R and G from the "decal-uvmap" texture, and use those as texture coords to sample the decal texture.
Note that the bit depth of this texture (and its size) is related to the size of the original texture, so it may be possible to get away with a much smaller "decal-uvmap" texture than the original.
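In shader terms, the idea looks roughly like this (a sketch; us_uvmap_tex is an assumed name for the extra "decal-uvmap" texture described above, and the rest follows the question's shader):
precision lowp float;
uniform sampler2D us_base_tex;
uniform sampler2D us_mark_tex;
uniform sampler2D us_uvmap_tex;   // R,G = normalized position inside the cyan square
varying vec2 vv_base_tex;
const vec4 c_mark_col = vec4(0.0, 1.0, 1.0, 1.0); // CYAN
void main()
{
    vec4 base_col = texture2D(us_base_tex, vv_base_tex);
    if(base_col == c_mark_col)
    {
        vec2 mark_uv = texture2D(us_uvmap_tex, vv_base_tex).rg; // decal coords from the uv-map
        vec4 mark_col = texture2D(us_mark_tex, mark_uv);
        base_col = mix(base_col, mark_col, mark_col.a);
    }
    gl_FragColor = base_col;
}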