I'm trying to access a DepthComponent Texture in my GLSL Shader of version 400.
The program does two-pass rendering. In the first pass I render all the geometry and colors to a framebuffer that has a ColorAttachment and a DepthAttachment. The DepthAttachment is bound like this:
(Note: I'm using C# with OpenTK, which is strongly typed, in my code examples.)
GL.FramebufferTexture2D(FramebufferTarget.Framebuffer, FramebufferAttachment.DepthAttachment, TextureTarget.Texture2D, depthTexture.ID, 0);
The depth texture has an internal pixel format of DepthComponent32f, a pixel format of DepthComponent, and Float as the pixel type. All the other properties have default values.
The second pass renders the framebuffer's color image onto the screen using the following shader:
#version 400
uniform sampler2D finalImage;
in vec2 texCoords;
out vec4 fragColor;
void main(){
fragColor = vec4(texture(finalImage, texCoords).rgb, 1.0); // texture() replaces texture2D(), which is unavailable in GLSL 4.00 core
}
But now I want to read the depth texture (DepthComponent) instead of the color texture (RGBA).
I tried a lot of things, like disabling TextureCompareMode, using a sampler2DShadow with shadow2DProj(sampler, vec4(texCoords.xy, 0.0, 1.0)), or just textureProj(sampler, vec3(texCoords.xy, 0.0)). But it returns only 1 or 0, depending on which configuration I use.
To be sure that my depth texture is OK, I read the pixels back to a float array like this:
GL.ReadPixels(0, 0, depthTexture.Width, depthTexture.Height, PixelFormat.DepthComponent, PixelType.Float, float_array);
Everything seems to be correct: it shows 1.0 for empty space and values between 0.99 and 1.0 for visible objects.
Edit
Here is a code example of what my process looks like:
Init code
depthTexture = new GLEXTexture2D(width, height);
depthTexture.TextureCompareMode = TextureCompareMode.None;
depthTexture.CreateMutable(PixelInternalFormat.DepthComponent32f, PixelFormat.DepthComponent, PixelType.Float);
***CreateMutable Function***
ReserveTextureID();
GLEX.glBeginTexture2D(ID);
GL.TexImage2D(TextureTarget.Texture2D, 0, pInternalFormat, width, height, 0, pFormat, pType, IntPtr.Zero);
ApplyOptions();
MarkReserved(true);
GLEX.glEndTexture2D();
(Framebuffer attachment mentioned above)
Render pass 1
GL.BindFramebuffer(FramebufferTarget.Framebuffer, drawBuffer.ID);
GL.Viewport(0, 0, depthTexture.Width, depthTexture.Height);
GL.ClearColor(Color.Gray); // set the clear color before clearing so it takes effect
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit | ClearBufferMask.StencilBufferBit);
GL.Enable(EnableCap.DepthTest);
GL.UseProgram(geometryPassShader.ID);
geometry_shaderUniformMVPM.SetValueMat4(false, geometryImageMVMatrix * geometryImageProjMatrix);
testRectangle.Render(PrimitiveType.QuadStrip);
GL.UseProgram(0);
GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);
Render pass 2
GL.Viewport(0, 0, depthTexture.Width, depthTexture.Height);
GL.ClearColor(Color.White); // set the clear color before clearing so it takes effect
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit | ClearBufferMask.StencilBufferBit);
GL.UseProgram(finalImageShader.ID);
GL.ActiveTexture(TextureUnit.Texture0);
depthTexture.Bind();
final_shaderUniformMVPM.SetValueMat4(false, finalImageMatrix);
screenQuad.Render(PrimitiveType.Quads);
GL.UseProgram(0);
GL.BindTexture(TextureTarget.Texture2D, 0);
A few hours later I found the solution.
The problem was the MinFilter. As the Khronos Group says in the glTexParameter documentation:
The initial value of GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR.
I changed the MinFilter of my depth texture to GL_NEAREST (GL_LINEAR is also legal) and now the depth values in the GLSL shader are correct (after linearization, of course).
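In OpenTK that fix is a couple of calls on the bound texture; a minimal sketch, assuming the wrapper's Bind() binds the texture to Texture2D:
// The default MinFilter (NearestMipmapLinear) makes a mipmap-less
// depth texture incomplete, so sampling it returns undefined results.
depthTexture.Bind();
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Nearest);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Nearest);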
Additional Info:
There are some extensions for the MagFilter, like LINEAR_DETAIL_ALPHA_SGIS. I've tried some of these; the depth value correctness was not affected.
Related
You can skip to the TL;DR at the bottom for the conclusion. I preferred to provide as much information as I could, to help narrow the question down further.
I've been having an issue with a heat haze effect I've been working on.
This is the sort of effect I was thinking of, but since this is a rather generalized system it would apply to any so-called screen-space refraction:
The haze effect is not where my issue lies, as it is just a distortion of sampling coordinates; rather, it's with what is sampled. My first approach was to render the distortions to another render target. This method was fairly successful, but it has a major downfall that's easy to foresee if you've dealt with screen-space textures before: because of the offset to the sampling coordinate, if an object is in front of the refractor, its edges will be taken into the refraction calculation.
As you can see, it looks fine when all the geometry is either the environment (no depth test) or back geometry. But here, with a cube closer than the refractor, there is this effect I'll call bleeding of the closer geometry.
Relevant shader code for reference:
/* transparency.frag */
layout (location = 0) out vec4 out_color; // frag color
layout (location = 1) out vec4 bright; // used for bloom effect
layout (location = 2) out vec4 deform; // deform buffer
[...]
void main(void) {
[...]
vec2 n = __sample_noise_texture_with_time__{};
deform = vec4(n * .1, 0, 1);
out_color = vec4(0, 0, 0, .0);
bright = vec4(0.0, 0.0, 0.0, .9);
}
/* post_process.frag */
in vec2 texel;
uniform sampler2D screen_t;
uniform sampler2D depth_t;
uniform sampler2D bright_t;
uniform sampler2D deform_t;
[...]
void main(void) {
[...]
vec3 noise_sample = texture(deform_t, texel).xyz;
vec2 texel_c = texel + noise_sample.xy;
[sample screen and bloom with texel_c, gamma correct, output to color buffer]
}
To try to combat this, I tried a technique that involved comparing depth components. To do this, I made the transparent object write its frag depth to the z component of my deform buffer, like so:
/* transparency.frag */
[...]
deform = vec4(n * .1, gl_FragCoord.z, 1);
[...]
And then, to determine what is in front of what, a quick check in the post-processing shader:
[...]
float dist = texture(depth_t, texel_c).x;
float dist1 = noise_sample.z; // what i wrote to the deform buffer z
if (dist + .01 < dist1) { /* do something, like draw debug output */ }
[...]
This worked somewhat, but broke down as I moved away, even if I linearized the depth values and compared the distances.
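For reference, depth-buffer values are non-linear in eye-space distance, which is one reason such comparisons drift as you move away. A minimal GLSL sketch of the usual linearization, where u_near and u_far are hypothetical uniforms holding the clip planes:
// Convert a [0,1] depth-buffer value back to an eye-space distance.
float linearize_depth(float d) {
    float z_ndc = d * 2.0 - 1.0; // back to NDC [-1, 1]
    return 2.0 * u_near * u_far / (u_far + u_near - z_ndc * (u_far - u_near));
}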
EDIT 3: added better screenshots for the depth test phase
(In yellow is where it's sampling something that's in front; I couldn't be bothered to make it render the polygons as well, so I drew them in.)
(And here, demonstrating it partially failing the depth comparison test from further away.)
I also had some 'fun' with another technique where I passed the color buffer directly to the transparency shader and had it output the sample to its color output. In theory, if the scene is Z-sorted, this should produce the desired result. I'll let you be the judge of that.
(I have a few guesses as to what the emerging patterns are, since they are similar to GPU rasterization patterns; however, that's not very relevant, since that 'solution' was more of a desperation effort than anything.)
TL;DR and Formal Question: I've had a go at a few techniques based on my knowledge and haven't been able to find much literature on the subject. So my question is: how do you realize such effects as heat haze/distortion (ones that do not cover the whole screen, might I add), and is there literature on the subject? For reference as to what sort of effect I'm looking for, see my Overwatch screenshot and all other similar effects in the game.
Thought I would also mention, just for completeness' sake, that I'm running OpenGL 4.5 (on Windows) with most shaders being version 4.00, and am working with a custom engine.
EDIT: If you want information about the software part of the engine, feel free to ask. I didn't include any because I didn't deem it relevant; however, I'd be glad to provide specs and code snippets, as well as more shaders, on demand.
EDIT 2: I thought I'd also mention that this could be achieved using a second render pass and a clipping plane; however, that would be costly and feels unnecessary, since the viewpoint is the same. It might be that this is the only solution, but I don't believe so.
Thanks for your answers in advance!
I think the issue is that you are trying to distort something that's behind an occluding object, and that information is not available anymore, because the object in front has overwritten the color value there. So you can't distort in information from a color buffer that no longer exists.
You are trying to solve it by depth testing and skipping the pixels that belong to an object closer to the camera than your transparent heat object, but this is causing the edge to leak into the distortion. Even if you get the edge skipped, if there were an object right behind the transparent object, occluded by the cube in front, it won't distort in, because the color information is not available.
Additional Render Pass
As you mention, an additional render pass with a clipping plane is certainly one solution to this problem.
Multiple render targets
Another, similar solution would be to use multiple render targets: render the depth of the transparent object beforehand, test for fragments that are behind it, and render them to another color buffer. Later, use this buffer for the distortion instead of the full color buffer. You could also consider deferred shading.
Here is a code snippet of how you would set up multiple render targets.
//create your fbo
GLuint fboID;
glGenFramebuffers(1, &fboID);
glBindFramebuffer(GL_FRAMEBUFFER, fboID);
//create the rbo for depth
GLuint rboID;
glGenRenderbuffers(1, &rboID);
glBindRenderbuffer(GL_RENDERBUFFER, rboID);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rboID);
//create two color textures (one for distort)
GLuint colorTexture, distortcolorTexture;
glGenTextures(1, &colorTexture);
glGenTextures(1, &distortcolorTexture);
glBindTexture(GL_TEXTURE_2D, colorTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); //no mipmaps, so avoid the mipmapping default
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glBindTexture(GL_TEXTURE_2D, distortcolorTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
//attach both textures
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, colorTexture, 0);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, distortcolorTexture, 0);
//specify both the draw buffers
GLenum drawBuffers[2] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, drawBuffers);
First render the transparent object's depth. Then, in your fragment shader for the other objects:
//compute color with your lighting...
//write color to colortexture
gl_FragData[0] = color;
//check if fragment behind your transparent object
if( depth >= tObjDepth )
{
//write color to distortcolortexture
gl_FragData[1] = color;
}
Finally, use the distortcolortexture in your distort shader.
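In the post-process shader from the question, that amounts to sampling the pre-filtered buffer with the distorted coordinate; a rough sketch, where distort_t is a hypothetical sampler bound to distortcolorTexture:
uniform sampler2D distort_t; // hypothetical: bound to distortcolorTexture
[...]
vec2 texel_c = texel + noise_sample.xy;
// Only fragments behind the transparent object were written to this
// buffer, so closer geometry can no longer bleed into the distortion.
vec3 refracted = texture(distort_t, texel_c).rgb;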
Depth test a matrix of pixels instead of a single pixel
I think the edge is leaking because you don't simply distort one pixel but more of a matrix of pixels. Perhaps you could also try checking the depth over the matrix (e.g. 3x3 pixels centered on the current pixel) and discard the distortion if it fails the depth test, as sketched below. (Note: this still won't distort objects behind the occluding object, which you might want distorted in.)
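A minimal sketch of that neighborhood test, reusing depth_t, texel_c, and dist1 from the question's shaders. It takes the nearest (minimum) depth in the window, since with a standard projection smaller depth values are closer; u_texel_size is a hypothetical uniform holding 1.0 / resolution:
// Scan a 3x3 neighborhood around the distorted texel; if any sample
// is closer than the transparent object, discard the distortion.
float nearest = 1.0;
for (int y = -1; y <= 1; ++y)
    for (int x = -1; x <= 1; ++x)
        nearest = min(nearest, texture(depth_t, texel_c + vec2(x, y) * u_texel_size).x);
if (nearest + .01 < dist1)
    texel_c = texel; // fall back to the undistorted coordinate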
I need to flip my texture vertically when copying it into another texture. I know about 3 simple ways to do it:
1. Blit from one FBO into another using a full-screen quad (and flip in the frag shader).
2. Blit using glBlitFramebuffer.
3. Use glCopyImageSubData.
I need to perform this copy between 2 textures which aren't attached to any FBO, so I am trying to avoid the first 2 solutions. I am trying the third one.
Doing it like this:
glCopyImageSubData(srcTex, GL_TEXTURE_2D, 0, 0, 0, 0, targetTex, GL_TEXTURE_2D, 0, 0, width, 0, height, 0, 1);
It doesn't work. The copy returns garbage. Is this method supposed to be able to flip when reading? Is there an alternative, FBO-unrelated method (GPU side only)?
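For reference, glCopyImageSubData takes both sets of offsets before the copy size, and it has no notion of flipping; a straight 1:1 copy of one width x height layer would look like this (a sketch of the argument order, not a fix for the flip):
// glCopyImageSubData(srcName, srcTarget, srcLevel, srcX, srcY, srcZ,
//                    dstName, dstTarget, dstLevel, dstX, dstY, dstZ,
//                    srcWidth, srcHeight, srcDepth)
glCopyImageSubData(srcTex, GL_TEXTURE_2D, 0, 0, 0, 0,
                   targetTex, GL_TEXTURE_2D, 0, 0, 0, 0,
                   width, height, 1);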
Btw:
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, height, width, 0);
This doesn't work either.
Rendering a textured quad to an FBO by drawing the inverted quad would work.
Or you could go with a simple fragment shader doing an imageLoad + imageStore, inverting the y coordinate, with 2 bound image textures.
glBindImageTexture(0, copyFrom, 0, GL_FALSE, 0, GL_READ_ONLY, GL_RGBA32UI);
glBindImageTexture(1, copyTo, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32UI);
the shader would look something like:
#version 420
layout(binding = 0, rgba32ui) uniform uimage2D input_buffer;
layout(binding = 1, rgba32ui) uniform uimage2D output_buffer;
uniform int u_texHeight;
void main(void)
{
    // gl_FragCoord sits at the pixel center, so truncate to an integer
    // row index before mirroring it around the texture height.
    ivec2 src = ivec2(gl_FragCoord.xy);
    uvec4 color = imageLoad(input_buffer, src);
    imageStore(output_buffer, ivec2(src.x, u_texHeight - src.y - 1), color);
}
You'll have to tweak it a little, but I know it works; I've used it before.
Hope this helps
I need to display an indexed graphic file that additionally has a per-pixel alpha channel. Also, I need to make sure that I can change the palette at any time, and that the resulting image changes with it. For this, I first used software pixel precomputation, but that was just too slow for real-time rendering, so I decided to write a shader that handles indexed textures on the GPU side. The problem is that the second texture (rec_colors) doesn't load (at least it seems that way: every texel read from that sampler appears completely empty).
Data from texture zero reads correctly, resulting in a black image with the right alpha :)
Shader-initializing-related code:
Application::Display->GetRC();
glewInit();
if(!GLEW_VERSION_2_0) return false;
char* code_frag = loadCode("shader.frag");
char* code_verx = loadCode("shader.verx");
aShader_palette = glCreateShader(GL_FRAGMENT_SHADER);
//glShaderSource(aShader_palette, 1, &aShaderProgram_palette, NULL);
glShaderSource(aShader_palette, 1, (const GLchar**)&code_frag, NULL);
glCompileShader(aShader_palette);
GLint compiled = 0;
glGetShaderiv(aShader_palette, GL_COMPILE_STATUS, &compiled);
if(!compiled)
{
/* error-handling */
}
GLuint texloc = glGetUniformLocation(aShader_palette, "rec");
glUniform1i(texloc, 0);
texloc = glGetUniformLocation(aShader_palette, "rec_colors");
glUniform1i(texloc, 1);
glsl_palette_Program = glCreateProgram();
glAttachShader(glsl_palette_Program, aShader_palette);
glLinkProgram(glsl_palette_Program);
And rendering-related:
glPushAttrib(GL_CURRENT_BIT);
glColor4ub(255, 255, 255, t_a); // t_a is overall alpha of sprite displayed
glUseProgram(glsl_palette_Program); // this one is a compiled/linked shader declared above
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, this->m_SpriteData[idx].texture);
glActiveTexture(GL_TEXTURE1); // at this point, it looks like texture unit is actually changed (I checked that via glGetIntegerv)
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, this->m_PaletteTex);
glTexSubImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, 0, 0, 256, 1, GL_RGBA, GL_UNSIGNED_BYTE, palette); // update possibly changed palette on each render
glActiveTexture(GL_TEXTURE0);
glBegin(GL_QUADS);
glTexCoord2i(0, 0);
glVertex2i(x, y);
glTexCoord2i(0, this->GetHeight(idx));
glVertex2i(x, y+this->GetHeight(idx));
glTexCoord2i(this->GetWidth(idx), this->GetHeight(idx));
glVertex2i(x+this->GetWidth(idx), y+this->GetHeight(idx));
glTexCoord2i(this->GetWidth(idx), 0);
glVertex2i(x+this->GetWidth(idx), y);
glEnd();
glActiveTexture(GL_TEXTURE1);
glUnbindTexture(GL_TEXTURE_RECTANGLE_ARB); // custom macro
glActiveTexture(GL_TEXTURE0);
glUnbindTexture(GL_TEXTURE_RECTANGLE_ARB);
glUseProgram(0);
glPopAttrib();
Shader code:
#extension GL_ARB_texture_rectangle : enable
uniform sampler2DRect rec;
uniform sampler2DRect rec_colors;
void main(void)
{
vec4 oldcol = texture2DRect(rec, gl_TexCoord[0].st);
vec4 newcol = texture2DRect(rec_colors, vec2(oldcol.r*255.0, 0.0)); // palette index must be scaled by 255 because rectangle coordinates aren't normalized
gl_FragColor.rgb = newcol.rgb;
gl_FragColor.a = oldcol.g; // alpha from green part
}
I googled a lot; the similar posts I found were all solved by fixing the texture unit IDs in the glUniform1i call, but for me that looks absolutely normal (at least, TEXTURE0 loads correctly into rec).
Do you check for errors anywhere with glGetError? I believe you're doing something incorrectly. glGetUniformLocation is supposed to be executed against a linked program, not a shader. You're calling glGetUniformLocation before your program is linked.
See relevant text from man page: http://www.opengl.org/wiki/GLAPI/glGetUniformLocation
The actual locations assigned to uniform variables are not known until the program object is linked successfully. After linking has occurred, the command glGetUniformLocation can be used to obtain the location of a uniform variable. Uniform variable locations and values can only be queried after a link if the link was successful.
You should always, at the very least, check for OpenGL errors with glGetError once per frame during development. It will alert you to these problems before you have to go online to ask for help.
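A minimal sketch of the corrected order, reusing the names from the question: create and link the program first, query locations on the program object, and bind the program before calling glUniform1i (which sets state on the currently bound program):
glsl_palette_Program = glCreateProgram();
glAttachShader(glsl_palette_Program, aShader_palette);
glLinkProgram(glsl_palette_Program);
// Query locations on the linked program, not on the shader object.
GLint texloc = glGetUniformLocation(glsl_palette_Program, "rec");
glUseProgram(glsl_palette_Program);
glUniform1i(texloc, 0);
texloc = glGetUniformLocation(glsl_palette_Program, "rec_colors");
glUniform1i(texloc, 1);
glUseProgram(0);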
I've been banging my head against this for hours now; I'm sure it's something simple, but I just can't get a result. I've had to edit this code down a bit because I've built a little library to encapsulate the OpenGL calls, but the following is an accurate description of the state of affairs.
I'm using the following vertex shader:
#version 330
in vec4 position;
in vec2 uv;
out vec2 varying_uv;
void main(void)
{
gl_Position = position;
varying_uv = uv;
}
And the following fragment shader:
#version 330
in vec2 varying_uv;
uniform sampler2D base_texture;
out vec4 fragment_colour;
void main(void)
{
fragment_colour = texture(base_texture, varying_uv); // texture() replaces texture2D(), which is unavailable in GLSL 3.30 core
}
Both shaders compile and the program links without issue.
In my init section, I load a single texture like so:
// Check for errors.
kt::kits::open_gl::Core<QString>::throw_on_error();
// Load an image.
QImage image("G:/test_image.png");
image = image.convertToFormat(QImage::Format_RGB888);
if(!image.isNull())
{
// Load up a single texture.
glGenTextures(1, &Texture);
glBindTexture(GL_TEXTURE_2D, Texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, image.width(), image.height(), 0, GL_RGB, GL_UNSIGNED_BYTE, image.constBits());
glBindTexture(GL_TEXTURE_2D, 0);
}
// Check for errors.
kt::kits::open_gl::Core<QString>::throw_on_error();
You'll observe that I'm using Qt to load the texture. The calls to ::throw_on_error() check for errors in OpenGL (by calling Error()), and throw an exception if one occurs. No OpenGL errors occur in this code, and the image loaded using Qt is valid.
Drawing is performed as follows:
// Clear previous.
glClear(GL_COLOR_BUFFER_BIT |
GL_DEPTH_BUFFER_BIT |
GL_STENCIL_BUFFER_BIT);
// Use our program.
glUseProgram(GLProgram);
// Bind the vertex array.
glBindVertexArray(GLVertexArray);
/* ------------------ Setting active texture here ------------------- */
// Tell the shader which textures are which.
kt::kits::open_gl::gl_int tAddr = glGetUniformLocation(GLProgram, "base_texture");
glUniform1i(tAddr, 0);
// Activate the texture Texture(0) as texture 0.
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, Texture);
/* ------------------------------------------------------------------ */
// Draw vertex array as triangles.
glDrawArrays(GL_TRIANGLES, 0, 4);
glBindVertexArray(0);
glUseProgram(0);
// Detect errors.
kt::kits::open_gl::Core<QString>::throw_on_error();
Similarly, no OpenGL errors occur, and a triangle is drawn to screen. However, it looks like this:
It occurred to me the problem may be related to my texture coordinates. So, I rendered the following image using s as the 'red' component, and t as the 'green' component:
The texture coordinates appear correct, yet I'm still receiving the black triangle of doom. What am I doing wrong?
I think it could be due to an incomplete initialization of your texture object.
Try initializing the texture MIN and MAG filters:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
Moreover, I would suggest checking the size of the texture. If it is not a power of 2, then you have to set the wrapping mode to CLAMP_TO_EDGE:
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
Black textures are often due to this issue; it's a very common problem.
Ciao
In your fragment shader you're writing to a self-defined output:
fragment_colour = texture(base_texture, varying_uv);
Since that's not gl_FragColor or gl_FragData[…], did you properly set the designated fragment data location?
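A minimal sketch of that binding, reusing the names from the question; glBindFragDataLocation must be called before the program is linked (or the program re-linked afterwards) for it to take effect:
// Route the shader's out variable to color attachment 0, then link.
glBindFragDataLocation(GLProgram, 0, "fragment_colour");
glLinkProgram(GLProgram);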
I'm trying to render colored text to the screen. I've got a texture containing a black (RGBA 0, 0, 0, 255) representation of the text to display, and I've got another texture containing the color pattern I want to render the text in. This should be a fairly simple multitexturing exercise, but I can't seem to get the second texture to work. Both textures are Rectangle textures, because the integer coordinate values are easier to work with.
Rendering code:
glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_RECTANGLE_ARB);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, TextHandle);
glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_RECTANGLE_ARB);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, ColorsHandle);
glBegin(GL_QUADS);
glMultiTexCoord2iARB(GL_TEXTURE0_ARB, 0, 0);
glMultiTexCoord2iARB(GL_TEXTURE1_ARB, colorRect.Left, colorRect.Top);
glVertex2f(x, y);
glMultiTexCoord2iARB(GL_TEXTURE0_ARB, 0, textRect.Height);
glMultiTexCoord2iARB(GL_TEXTURE1_ARB, colorRect.Left, colorRect.Top + colorRect.Height);
glVertex2f(x, y + textRect.Height);
glMultiTexCoord2iARB(GL_TEXTURE0_ARB, textRect.Width, textRect.Height);
glMultiTexCoord2iARB(GL_TEXTURE1_ARB, colorRect.Left + colorRect.Width, colorRect.Top + colorRect.Height);
glVertex2f(x + textRect.Width, y + textRect.Height);
glMultiTexCoord2iARB(GL_TEXTURE0_ARB, textRect.Width, 0);
glMultiTexCoord2iARB(GL_TEXTURE1_ARB, colorRect.Left + colorRect.Width, colorRect.Top);
glVertex2f(x + textRect.Width, y);
glEnd();
Vertex shader:
void main()
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
gl_TexCoord[0] = gl_MultiTexCoord0;
gl_TexCoord[1] = gl_MultiTexCoord1;
}
Fragment shader:
uniform sampler2DRect texAlpha;
uniform sampler2DRect texRGB;
void main()
{
float alpha = texture2DRect(texAlpha, gl_TexCoord[0].st).a;
vec3 rgb = texture2DRect(texRGB, gl_TexCoord[1].st).rgb;
gl_FragColor = vec4(rgb, alpha);
}
This seems really straightforward, but it ends up rendering solid black text instead of colored text. I get the exact same result if the last line of the fragment shader reads gl_FragColor = texture2DRect(texAlpha, gl_TexCoord[0].st);. Changing the last line to gl_FragColor = texture2DRect(texRGB, gl_TexCoord[1].st); causes it to render nothing at all.
Based on this, it appears that calling texture2DRect on texRGB always returns (0, 0, 0, 0). I've made sure that GL_MULTISAMPLE is enabled, and bound the texture on unit 1, but for whatever reason I don't seem to actually get access to it inside my fragment shader. What am I doing wrong?
The overall setup looks fine. It is possible that your texcoords for unit 1 are messed up, causing sampling outside the colored portion of your texture.
Is your color texture fully filled with color?
What do you mean by "causes it to render nothing at all"? This should not happen unless the alpha channel in your color texture is set to 0.
Did you try the following code, to override the alpha channel?
gl_FragColor = vec4( texture2DRect(texRGB, gl_TexCoord[1].st).rgb, 1.0 );
Are you sure the font outline texture contains valid alpha values? You said that the texture is black and white, but you are using the alpha value! Instead of using the a component, try using the r one.
Blending affects the fragment shader output: it blends the fragment color with the corresponding framebuffer color.
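For completeness, a typical alpha-blending setup for text like this would be (a sketch; it only matters if blending is enabled in your state):
glEnable(GL_BLEND);
// Weight the fragment by its alpha and the framebuffer color by
// one minus that alpha (standard "over" compositing).
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);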