Remove / make transparent the discarded area in OpenGL

When I remove the bottom portion of my image using discard in OpenGL, the discarded area shows up in the color I specified in ClearColor. How can I make this clear color transparent, or is there another way to hide the discarded portion?
I'm using the code below when the texture loads.
GL.ClearColor(new Color4(0f, 0f, 0f, 1f));
GL.Enable(EnableCap.DepthTest);
TexUtil.InitTexturing();
GL.Hint(HintTarget.PerspectiveCorrectionHint, HintMode.Nicest);
GL.DepthFunc(DepthFunction.Lequal);
GL.ColorMaterial(MaterialFace.FrontAndBack, ColorMaterialParameter.AmbientAndDiffuse);
GL.Enable(EnableCap.ColorMaterial);
GL.Enable(EnableCap.Blend);
GL.BlendFunc(BlendingFactor.SrcAlpha, BlendingFactor.OneMinusSrcAlpha);
GL.Ext.BindFramebuffer(FramebufferTarget.FramebufferExt, 0); // render per default onto screen, not some FBO
Fragment shader code:
if (vTexCoord.y > 1.0 - sVerticalCropVal)
{
    discard;
}
vec4 color = texture2D(sTexture_2, vec2(vTexCoord.x, vTexCoord.y));
gl_FragColor = color;
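Since discard simply leaves those pixels at whatever the framebuffer was cleared to, one option is to clear with zero alpha so the discarded area stays transparent. Note this only helps if the render target actually stores alpha (e.g. an RGBA texture attached to an FBO); a minimal sketch:
GL.ClearColor(new Color4(0f, 0f, 0f, 0f)); // alpha = 0: cleared (and discarded) pixels remain fully transparent
When rendering to the default window framebuffer, destination alpha is usually ignored by the window system, so drawing into an FBO with an alpha channel and compositing that texture is the more reliable route.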

Related

LibGDX Sprite is drawn white (but transparency is working)

I am working on a game project using LibGDX. Now I'm facing a problem I can't understand; let me explain what I've got before coming to the actual problem:
1. Static background C (tiled)
2. Moving background B (tiled)
3. Moving background A (tiled)
4. Characters, entities
5. Walls (tiled)
6. HUD
The render loop goes as follows:
1. Render items 2 to 5 of the previous list to a transparent FBO
2. Render the lightmap to another FBO (using box2dLights)
3. Blend both products together using a custom shader (a sketch of such a shader follows this list)
4. Draw the static background
5. Draw the blended texture from step 3
6. Draw the HUD
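The combining shader itself isn't shown in the question. For context, a typical box2dLights-style combine looks roughly like this; the uniform names u_texture0, u_texture1 and ambient_color come from the render code below, while the varying name and the body are assumptions:
varying vec2 v_texCoords;     // assumed varying from the fullscreen quad
uniform sampler2D u_texture0; // scene FBO color
uniform sampler2D u_texture1; // lightmap
uniform vec3 ambient_color;

void main()
{
    vec4 scene = texture2D(u_texture0, v_texCoords);
    vec4 light = texture2D(u_texture1, v_texCoords);
    // scale the scene by ambient plus light, a common box2dLights-style combine
    gl_FragColor = vec4(scene.rgb * (ambient_color + light.rgb), scene.a);
}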
The problem I'm facing is that I obtain a white sprite when rendering background B (not a solid white block; it is more like rgba(1, 1, 1, x)). Background B is then blended correctly with the lightmap (render step 3), but I cannot get its real color, just white:
Background B should appear dark blue and textured; instead it appears as a white sprite (blended with the lightmap). Behind background A, at the left of the picture (and also where the "SHADOW" text is), you can still see background B in a purplish tone, the result of blending with the lightmap, but note it has no texture; it looks to me as if the texture were just plain white plus alpha.
The render code:
@Override
public void render(float delta) {
    Gdx.gl.glClearColor(0f, 0f, 0f, 1f);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

    /* 1. Update Physics and lights */
    if (worldFixedStep(delta)) {
        rayHandler.update();
    }
    updateCameras();
    cameraMatrixCopy.set(camera.combined);
    rayHandler.setCombinedMatrix(cameraMatrixCopy.scale(Globals.BOX_TO_WORLD, Globals.BOX_TO_WORLD, 1.0f), camera.position.x,
            camera.position.y, camera.viewportWidth * camera.zoom * Globals.BOX_TO_WORLD,
            camera.viewportHeight * camera.zoom * Globals.BOX_TO_WORLD);
    rayHandler.render();
    final Texture lightMap = rayHandler.getLightMapTexture();

    fbo.begin();
    {
        Gdx.gl.glClearColor(0f, 0f, 0f, 0f);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

        /* 2. Draw the backgrounds */
        batch.enableBlending();
        batch.setBlendFunction(GL20.GL_ONE, GL20.GL_ONE_MINUS_SRC_ALPHA);

        batch.setProjectionMatrix(bgCamera2.combined); // Background B
        batch.begin();
        bTileMapRenderer.setView(bgCamera2);
        bTileMapRenderer.renderTileLayer(tilesBg2Layer);
        batch.end();

        batch.setProjectionMatrix(bgCamera1.combined); // Background A
        batch.begin();
        bTileMapRenderer.setView(bgCamera1);
        bTileMapRenderer.renderTileLayer(tilesBg1Layer);
        batch.end();

        batch.setBlendFunction(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA); // FIXME this is where to touch to avoid gray alpha'ed borders.
        batch.setProjectionMatrix(camera.combined);
        batch.begin();
        (...) // Draw everything else that needs to be blended with the light map
        batch.end();
    }
    fbo.end();

    /* 3. Render the static background (background C) */
    batch.setProjectionMatrix(bgCameraStatic.combined);
    batch.disableBlending();
    batch.begin();
    bTileMapRenderer.setView(bgCameraStatic);
    bTileMapRenderer.renderTileLayer(tilesBg3Layer);
    batch.end();

    /* 4. Blend the frame buffer's texture with the light map in a fancy way */
    Gdx.gl20.glActiveTexture(GL20.GL_TEXTURE0);
    fboRegion.getTexture().bind();
    Gdx.gl20.glActiveTexture(GL20.GL_TEXTURE1);
    lightMap.bind();
    Gdx.gl20.glEnable(Gdx.gl20.GL_BLEND);
    Gdx.gl20.glBlendFunc(Gdx.gl20.GL_SRC_ALPHA, Gdx.gl20.GL_ONE_MINUS_SRC_ALPHA);
    lightShader.begin();
    lightShader.setUniformf("ambient_color", bgColor[0], bgColor[1], bgColor[2]);
    lightShader.setUniformi("u_texture0", 0);
    lightShader.setUniformi("u_texture1", 1);
    fullScreenQuad.render(lightShader, GL20.GL_TRIANGLE_FAN, 0, 4);
    lightShader.end();
    Gdx.gl20.glDisable(Gdx.gl20.GL_BLEND);

    hud.draw();
}
I just can't understand why this background is drawn white yet still with its alpha data. The image is a premultiplied-alpha texture, but again I cannot see why that would affect the color rendering.
Any help would be much appreciated.
Cheers
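For reference, the only difference between the two blend setups used in the code above is where the source RGB gets multiplied by alpha; a premultiplied texture already carries that multiplication in its RGB channels, so mixing up the two conventions typically yields washed-out or overly bright output. A minimal sketch of the two variants:
// straight alpha: result = src.rgb * src.a + dst.rgb * (1 - src.a)
batch.setBlendFunction(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);

// premultiplied alpha: result = src.rgb + dst.rgb * (1 - src.a)
batch.setBlendFunction(GL20.GL_ONE, GL20.GL_ONE_MINUS_SRC_ALPHA);
One thing worth checking under this assumption is whether the asset pipeline really premultiplied the tile set, since a straight-alpha image pushed through the premultiplied function will come out too bright wherever alpha is low.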

Why does a black source color with glBlendFunc(GL_SRC_ALPHA, GL_ONE); still cause a change in the destination color?

I am trying to develop a particle system in C++ using OpenGL, and I am confused about how blending works. I am trying to use additive blending, and from my understanding, calling the glBlendFunc function with the following arguments:
glBlendFunc(GL_SRC_ALPHA, GL_ONE);
Will cause the following when you try to render something: It will take the R,G,B color calculated by the fragment shader (source color), multiply that by the alpha value calculated by the fragment shader (source alpha), and then add that to the R,G,B color that already exists in the frame buffer (dest color). But if this is true, then a black color, where (R,G,B,A) = (0,0,0,1), calculated by the fragment shader should leave the existing frame buffer color (dest color) unchanged since it is multiplying the source color value of 0 by the source alpha value of 1, which should obviously always yield 0, and then that 0 is added to the existing frame buffer color (dest color) which should leave it unchanged...right?
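Written out, the blend the question describes is:
dst' = src.rgb * src.a + dst.rgb * 1
     = (0, 0, 0) * 1 + dst.rgb
     = dst.rgb
so with GL_SRC_ALPHA, GL_ONE and a pure black, fully opaque source, the destination color should indeed be left untouched; if it visibly brightens, some other state (a different blend function or a different shader actually bound at draw time) must be in effect.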
However, when I do this, instead of leaving the color unchanged, it actually makes it lighter, as shown here:
In this picture, the environment and sword are rendered with normal blending, and then the square particles are rendered around the sword with glBlendFunc(GL_SRC_ALPHA, GL_ONE); using a fragment shader that ALWAYS outputs (R,G,B,A) = (0,0,0,1). There are many particles rendered, and you can see that as more particles overlap, it gets brighter. When I switch the alpha output of the shader from 1 to 0 (source alpha), the particles disappear, which makes sense. But why are they still visible when the source color is 0 and the source alpha is 1?
Here is the exact function I call to render the particles:
void ParticleController::Render()
{
    GraphicsController* gc = m_game->GetGraphicsController();
    ShaderController* sc = gc->GetShaderController();
    gc->BindAttributeBuffer(m_glBuffer);
    ShaderProgram* activeShaderProgram = sc->UseProgram(SHADER_PARTICLE);

    glDepthMask(false);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE);
    gc->GetTextureController()->BindGLTexture(TEX_PARTICLES);

    activeShaderProgram->SetUniform(SU_PROJ_VIEW_MAT, gc->GetProjectionMatrix() * gc->GetViewMatrix() * gc->GetWorldScaleMatrix());
    activeShaderProgram->SetUniform(SU_LIGHT_AMBIENT, m_game->GetWorld()->GetAmbientLight());
    activeShaderProgram->SetUniform(SU_TIME, m_game->GetWorld()->GetWorldTime());

    CheckForOpenGLError();
    m_pa.GLDraw();
    CheckForOpenGLError();
    gc->BindAttributeBuffer_Default();
    CheckForOpenGLError();

    glDepthMask(true);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}
And these are the last 4 lines of my fragment shader, ensuring that the particle color output (R,G,B,A) is always = (0,0,0,1):
fColor.r = 0.0;
fColor.g = 0.0;
fColor.b = 0.0;
fColor.a = 1.0;
Is there something I am missing?
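When blending seems to defy the arithmetic, it can help to confirm the state actually in effect at draw time. A minimal diagnostic sketch using standard OpenGL queries, placed right before the draw call:
GLint blendEnabled = 0, blendSrc = 0, blendDst = 0;
glGetIntegerv(GL_BLEND, &blendEnabled);     // is blending on at all?
glGetIntegerv(GL_BLEND_SRC_RGB, &blendSrc); // expect GL_SRC_ALPHA (0x0302)
glGetIntegerv(GL_BLEND_DST_RGB, &blendDst); // expect GL_ONE (1)
If those match expectations, the remaining suspects are the shader actually bound and the values it really writes.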

HLSL - Sampling a render target texture always returns a black color

Okay, first of all, I'm really new to DirectX 11 and this is actually my first project using it. I'm also relatively new to computer graphics in general, so I might have some concepts wrong, although for this particular case I do not think so. My code is based on the RasterTek tutorials.
In trying to implement a post-processing shader, I need to render the scene to a 2D texture and then perform a Gaussian blur on the resulting image.
That part seems to be working fine; when using the Visual Studio graphics debugger, the output seems to be what I expect.
However, after having done all post processing, I render a quad to the backbuffer using a simple shader that uses the final output of the blur as a resource. This always gives me a black screen. When I debug my pixel shader with the VS graphics debugger, it seems like the Sample(texture, uv) method always returns (0,0,0,1) when trying to sample that texture.
The pixel shader works fine if I use a different texture, like some normal map or whatever, as a resource; just not when using any of the render targets from the previous passes.
The behaviour is particularly weird because the actual blur shader works fine when using any of the render targets as a resource.
I know I cannot use a render target as both input and output, but I think I have that covered since I call OMSetRenderTargets so I can render to the backbuffer.
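One classic pitfall in this area, offered here as a hedged guess: if a texture is still bound as a pixel shader resource when it is bound for output, D3D11 silently resolves the hazard by nulling out the conflicting shader resource view (the debug layer prints a warning), and sampling it then returns zero, i.e. black. A minimal defensive sketch before rebinding render targets:
// unbind any SRV slots used by the previous pass before re-binding
// those textures as render targets (the slot count here is illustrative)
ID3D11ShaderResourceView* nullSRVs[2] = { nullptr, nullptr };
deviceContext->PSSetShaderResources(0, 2, nullSRVs);
Creating the device with D3D11_CREATE_DEVICE_DEBUG makes the runtime report exactly which resource is being force-unbound.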
Here's the step by step of my implementation:
Set Render Targets
Clear them
Clear Depth buffer
Render scene to texture
Turn off Z buffer
Render to quad
Perform horizontal blur
Perform vertical blur
Set back buffer as render target
Clear back buffer
Render final output to quad
Turn z buffer on
Present back buffer
Here is the shader for the quad:
Texture2D shaderTexture : register(t0);
SamplerState SampleType : register(s0);

struct PixelInputType
{
    float4 position : SV_POSITION;
    float2 tex : TEXCOORD0;
};

float4 main(PixelInputType input) : SV_TARGET
{
    return shaderTexture.Sample(SampleType, input.tex);
}
Here's the relevant C++ code.
This is how I set the render targets:
void DeferredBuffers::SetRenderTargets(ID3D11DeviceContext* deviceContext, bool activeRTs[BUFFER_COUNT]){
    vector<ID3D11RenderTargetView*> rts = vector<ID3D11RenderTargetView*>();
    for (int i = 0; i < BUFFER_COUNT; ++i){
        if (activeRTs[i]){
            rts.push_back(m_renderTargetViewArray[i]);
        }
    }
    // Note: this assumes at least one RT is active; &rts[0] is invalid on an empty vector.
    deviceContext->OMSetRenderTargets(rts.size(), &rts[0], m_depthStencilView);
    // Set the viewport.
    deviceContext->RSSetViewports(1, &m_viewport);
}
I use a ping-pong approach with the render targets for the blur.
I render the scene to a mainTarget and depth information to the depthMap. The first pass performs a horizontal blur onto a third target (horizontalBlurred), and then I use that one as input for the vertical blur, which renders back to the mainTarget and to the finalTarget. It's a loop because on the vertical pass I'm supposed to blend the PS output with what's on the finalTarget. I left that code (and some other stuff) out as it's not relevant.
m_FullScreenWindow is the quad.
bool activeRenderTargets[4] = { true, true, false, false };

// Set the render buffers to be the render target.
m_ShaderManager->getDeferredBuffers()->SetRenderTargets(m_D3D->GetDeviceContext(), activeRenderTargets);

// Clear the render buffers.
m_ShaderManager->getDeferredBuffers()->ClearRenderTargets(m_D3D->GetDeviceContext(), 0.25f, 0.0f, 0.0f, 1.0f);
m_ShaderManager->getDeferredBuffers()->ClearDepthStencil(m_D3D->GetDeviceContext());

// Render the scene to the render buffers.
RenderSceneToTexture();

// Get the matrices.
m_D3D->GetWorldMatrix(worldMatrix);
m_Camera->GetBaseViewMatrix(baseViewMatrix);
m_D3D->GetOrthoMatrix(projectionMatrix);

// Turn off the Z buffer to begin all 2D rendering.
m_D3D->TurnZBufferOff();

// Put the full screen ortho window vertex and index buffers on the graphics pipeline to prepare them for drawing.
m_FullScreenWindow->Render(m_D3D->GetDeviceContext());

ID3D11ShaderResourceView* mainTarget = m_ShaderManager->getDeferredBuffers()->GetShaderResourceView(0);
ID3D11ShaderResourceView* horizontalBlurred = m_ShaderManager->getDeferredBuffers()->GetShaderResourceView(2);
ID3D11ShaderResourceView* depthMap = m_ShaderManager->getDeferredBuffers()->GetShaderResourceView(1);
ID3D11ShaderResourceView* finalTarget = m_ShaderManager->getDeferredBuffers()->GetShaderResourceView(3);

activeRenderTargets[1] = false; // depth map is never a render target again

for (int i = 0; i < numBlurs; ++i){
    activeRenderTargets[0] = false; // main target is resource in this pass
    activeRenderTargets[2] = true;  // horizontal blurred target
    activeRenderTargets[3] = false; // unbind final target
    m_ShaderManager->getDeferredBuffers()->SetRenderTargets(m_D3D->GetDeviceContext(), activeRenderTargets);
    m_ShaderManager->RenderScreenSpaceSSS_HorizontalBlur(m_D3D->GetDeviceContext(), m_FullScreenWindow->GetIndexCount(), worldMatrix, baseViewMatrix, projectionMatrix, mainTarget, depthMap);

    activeRenderTargets[0] = true;  // rendering to main target
    activeRenderTargets[2] = false; // horizontal blurred is resource
    activeRenderTargets[3] = true;  // rendering to final target
    m_ShaderManager->getDeferredBuffers()->SetRenderTargets(m_D3D->GetDeviceContext(), activeRenderTargets);
    m_ShaderManager->RenderScreenSpaceSSS_VerticalBlur(m_D3D->GetDeviceContext(), m_FullScreenWindow->GetIndexCount(), worldMatrix, baseViewMatrix, projectionMatrix, horizontalBlurred, depthMap);
}

m_D3D->SetBackBufferRenderTarget();
m_D3D->BeginScene(0.0f, 0.0f, 0.5f, 1.0f);

// Reset the viewport back to the original.
m_D3D->ResetViewport();

m_ShaderManager->RenderTextureShader(m_D3D->GetDeviceContext(), m_FullScreenWindow->GetIndexCount(), worldMatrix, baseViewMatrix, projectionMatrix, depthMap);

m_D3D->TurnZBufferOn();
m_D3D->EndScene();
And, finally, here are 3 screenshots from my graphics log.
They show rendering the scene onto the mainTarget, a vertical pass which takes the horizontalBlurred resource as input, and finally rendering onto the backBuffer, which is what's failing. You can see the resource bound to the shader and how the output is just a black screen. I deliberately set the background to red to find out whether it was sampling with the wrong coordinates, but nope.
So, has anyone ever experienced something like this? What could be the cause of this bug?
Thanks in advance for any help!
EDIT: The Render_SOMETHING_SOMETHING_shader methods handle binding all the resources, setting the shaders, draw calls etc etc. If necessary I can post them here, but I don't think it's that relevant.

Creating and blending a dynamic texture in OpenGL

I need to render a sphere to a texture (done using a Framebuffer Object (FBO)), and then alpha blend that texture with the back buffer. So far I'm not doing any processing with the texture except clearing it at the beginning of every frame.
I should say that my scene consists of nothing but a planet in empty space, the sphere should appear next to or around the planet (kind of like a moon for now). When I render the sphere directly to the back buffer, it displays correctly; but when I do the intermediary step of rendering it to a texture and then blending that texture with the back buffer, the sphere only shows up when it is in front of the planet, the part that isn't in front is just "cut off":
I render the sphere using glutSolidSphere to an RGBA8 fullscreen texture that's bound to an FBO, making sure that every sphere pixel receives an alpha value of 1.0. I then pass the texture to a fragment shader program and use this code to render a fullscreen quad, with the texture mapped onto it, to the backbuffer while alpha blending:
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
glDisable(GL_DEPTH_TEST);

glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();

glBegin(GL_QUADS);
    glTexCoord2i(0, 1);
    glVertex3i(-1,  1, -1); // TOP LEFT
    glTexCoord2i(0, 0);
    glVertex3i(-1, -1, -1); // BOTTOM LEFT
    glTexCoord2i(1, 0);
    glVertex3i( 1, -1, -1); // BOTTOM RIGHT
    glTexCoord2i(1, 1);
    glVertex3i( 1,  1, -1); // TOP RIGHT
glEnd();

glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();

glEnable(GL_DEPTH_TEST);
glDisable(GL_BLEND);
This is the shader code (taken from an FX file written in Cg):
sampler2D BlitSamp = sampler_state
{
    MinFilter = LINEAR;
    MagFilter = LINEAR;
    MipFilter = LINEAR;
    AddressU = Clamp;
    AddressV = Clamp;
};

float4 blendPS(float2 texcoords : TEXCOORD0) : COLOR
{
    float4 outColor = tex2D(BlitSamp, texcoords);
    return outColor;
}
I don't even know whether this is a problem with the depth buffer or with alpha blending; I've tried a lot of combinations of enabling and disabling depth testing (with a depth buffer attached to the FBO) and alpha blending.
EDIT: I tried just rendering a blank fullscreen quad straight to the back buffer and even that was cropped around the planet's edges. For some reason, enabling depth testing for rendering the quad (that is, removing the lines glDisable(GL_DEPTH_TEST) and glEnable(GL_DEPTH_TEST) in the code above) got rid of the problem, but now everything but the planet and the sphere appears white:
I made sure (and could confirm) that the alpha channel of the texture is 0 at every pixel but the sphere's, so I don't understand where the whiteness could be introduced. (Would also still be interested in an explanation why enabling depth testing has this effect.)
I see two possible sources of error here:
1. Rendering to the FBO
If the missing pixels are not even present in the FBO after rendering, there must be some mechanism which discarded the corresponding fragments. The OpenGL pipeline includes four different types of fragment tests which can lead to fragments being discarded:
Scissor Test: Unlikely to be the cause, as the scissor test only affects a rectangular portion of the screen.
Alpha Test: Equally unlikely, as your fragments should all have the same alpha value.
Stencil Test: Also unlikely, unless you use stencil operations when drawing the background planet and copy over the stencil buffer from the back buffer to the FBO.
Depth Test: Same as for stencil test.
So there's a good chance that rendering into FBO is not the issue here. But just to be absolutely sure, you should read back your color attachment texture and dump it into a file for inspection. You can use the following function for that:
#include <fstream>
#include <vector>

void TextureToFile(GLuint texture, const char* filename) {
    glBindTexture(GL_TEXTURE_2D, texture);

    GLint width, height;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &height);

    std::vector<GLubyte> pixels(3 * width * height);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE, &pixels[0]);

    std::ofstream out(filename, std::ios::out | std::ios::binary);
    out << "P6\n"
        << width << '\n'
        << height << '\n'
        << 255 << '\n';
    out.write(reinterpret_cast<const char*>(&pixels[0]), pixels.size());
}
The resulting file is a portable pixmap (.ppm). Be sure to unbind the FBO before reading back the texture.
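A hedged usage example (the texture name here is hypothetical):
glBindFramebuffer(GL_FRAMEBUFFER, 0);           // make sure the FBO is unbound first
TextureToFile(fboColorTexture, "fbo_dump.ppm"); // 'fboColorTexture' is the FBO's color attachment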
2. Texture mapping
Assuming rendering into the FBO works as expected, the only other source of error is blending the texture over the previously rendered scene. There are two scenarios:
a) Fragments get discarded
The possible reasons for fragments to get discarded are the same as in 1.:
Scissor Test: Nope, affects rectangular areas only.
Alpha Test: Probably not; the texels covered by the sphere should all have the same alpha value.
Stencil Test: Might be the cause if you use stencil operations/stencil testing when drawing the background planet and the old stencil state is still active.
Depth Test: Might be the cause, but as you already disable it, it really shouldn't have any effect.
So you should make sure that all of these tests are disabled, especially the stencil test.
b) Wrong results from blending
Assuming all fragments reach the back buffer, blending is the only thing which could still cause the wrong result. With your blending function (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), the values already in the back buffer only matter where the source alpha is less than 1, and we assume that the alpha values in the texture are correct (1.0 wherever the sphere was drawn). So I see no reason why blending should be the root cause here.
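Spelled out for the sphere pixels:
result.rgb = src.rgb * src.a + dst.rgb * (1 - src.a)
           = src.rgb                                  (when src.a = 1.0)
so an opaque texel replaces the back buffer value outright, and a fully transparent one (src.a = 0.0) leaves it untouched.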
Conclusion
In conclusion, the only sensible cause for the observed result seems to be stencil testing. If it's not, I'm out of options :)
I solved it, or at least came up with a workaround.
First off, the whiteness stems from the fact that glClearColor had been set to glClearColor(1.0f, 1.0f, 1.0f, 1000.0f), so everything but the planet wasn't even written to in the end. I now copy the contents of the back buffer (the planet, the atmosphere, and the space around it) to the texture before rendering the sphere, and I render the atmosphere and space before that copy/blit operation so they are included in it. Previously, everything but the planet itself was rendered after my quad, which, when using depth testing, apparently placed everything behind the quad, making it invisible.
The reference implementation of the effect I'm trying to achieve has always used this kind of blit operation in its code, but I didn't think it was necessary for the effect. Now I feel like there might be no other way...

How do I get my textures to bind properly for multitexturing?

I'm trying to render colored text to the screen. I've got a texture containing a black (RGBA 0, 0, 0, 255) representation of the text to display, and I've got another texture containing the color pattern I want to render the text in. This should be a fairly simple multitexturing exercise, but I can't seem to get the second texture to work. Both textures are Rectangle textures, because the integer coordinate values are easier to work with.
Rendering code:
glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_RECTANGLE_ARB);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, TextHandle);
glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_RECTANGLE_ARB);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, ColorsHandle);

glBegin(GL_QUADS);
    glMultiTexCoord2iARB(GL_TEXTURE0_ARB, 0, 0);
    glMultiTexCoord2iARB(GL_TEXTURE1_ARB, colorRect.Left, colorRect.Top);
    glVertex2f(x, y);

    glMultiTexCoord2iARB(GL_TEXTURE0_ARB, 0, textRect.Height);
    glMultiTexCoord2iARB(GL_TEXTURE1_ARB, colorRect.Left, colorRect.Top + colorRect.Height);
    glVertex2f(x, y + textRect.Height);

    glMultiTexCoord2iARB(GL_TEXTURE0_ARB, textRect.Width, textRect.Height);
    glMultiTexCoord2iARB(GL_TEXTURE1_ARB, colorRect.Left + colorRect.Width, colorRect.Top + colorRect.Height);
    glVertex2f(x + textRect.Width, y + textRect.Height);

    glMultiTexCoord2iARB(GL_TEXTURE0_ARB, textRect.Width, 0);
    glMultiTexCoord2iARB(GL_TEXTURE1_ARB, colorRect.Left + colorRect.Width, colorRect.Top);
    glVertex2f(x + textRect.Width, y);
glEnd;
Vertex shader:
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_TexCoord[1] = gl_MultiTexCoord1;
}
Fragment shader:
uniform sampler2DRect texAlpha;
uniform sampler2DRect texRGB;

void main()
{
    float alpha = texture2DRect(texAlpha, gl_TexCoord[0].st).a;
    vec3 rgb = texture2DRect(texRGB, gl_TexCoord[1].st).rgb;
    gl_FragColor = vec4(rgb, alpha);
}
This seems really straightforward, but it ends up rendering solid black text instead of colored text. I get the exact same result if the last line of the fragment shader reads gl_FragColor = texture2DRect(texAlpha, gl_TexCoord[0].st);. Changing the last line to gl_FragColor = texture2DRect(texRGB, gl_TexCoord[1].st); causes it to render nothing at all.
Based on this, it appears that calling texture2DRect on texRGB always returns (0, 0, 0, 0). I've made sure that GL_MULTISAMPLE is enabled, and bound the texture on unit 1, but for whatever reason I don't seem to actually get access to it inside my fragment shader. What am I doing wrong?
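One detail not shown above, offered as a hedged guess: GLSL sampler uniforms default to texture unit 0 unless they are assigned explicitly, so unless texRGB was pointed at unit 1 with glUniform1i, both samplers read from unit 0. A minimal sketch, assuming program holds the linked program object:
// after linking, while the program is active
GLint locAlpha = glGetUniformLocation(program, "texAlpha");
GLint locRGB   = glGetUniformLocation(program, "texRGB");
glUniform1i(locAlpha, 0); // texture unit 0: text (alpha) texture
glUniform1i(locRGB, 1);   // texture unit 1: color texture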
Overall this looks fine. It is possible that your texcoords for unit 1 are messed up, causing sampling outside the colored portion of your texture.
Is your color texture fully filled with color?
What do you mean by "causes it to render nothing at all"? This should not happen unless the alpha channel in your color texture is set to 0.
Did you try the following code, to override the alpha channel?
gl_FragColor = vec4( texture2DRect(texRGB, gl_TexCoord[1].st).rgb, 1.0 );
Are you sure the font outline texture contains valid alpha values? You said that the texture is black and white, yet you are using its alpha value! Instead of using the a component, try the r one.
Blending affects the fragment shader output: it blends the fragment color with the corresponding pixel already in the framebuffer.