Using SDL_ttf and OpenGL, TTF_RenderUTF8_Blended prints a red rectangle - C++

When I render my text using TTF_RenderUTF8_Blended I get a solid rectangle on the screen. The color depends on the one I choose; in my case the rectangle is red.
My question
What am I missing? It seems like I'm not getting the proper alpha values from the surface generated with SDL_DisplayFormatAlpha(TTF_RenderUTF8_Blended( ... )), or am I? Does anyone recognize or know the problem?
Additional information
If I use TTF_RenderUTF8_Solid or TTF_RenderUTF8_Shaded the text is drawn properly, but not blended, of course.
I am also drawing other textures on the screen, so I draw the text last to ensure the blending will take the current surface into account.
Edit: SDL_Color g_textColor = {255, 0, 0, 0}; <-- I tried with and without the alpha value, but I get the same result.
I have tried to summarize the code without removing too many details. Variables prefixed with "g_" are global.
Init() function
// This function creates the required texture.
bool Init()
{
    // ...
    g_pFont = TTF_OpenFont("../arial.ttf", 12);
    if(g_pFont == NULL)
        return false;

    // Write text to surface
    g_pText = SDL_DisplayFormatAlpha(TTF_RenderUTF8_Blended(g_pFont, "My first Text!", g_textColor)); //< Doesn't work
    // Note that Solid and Shaded do work properly if I uncomment them.
    //g_pText = SDL_DisplayFormatAlpha(TTF_RenderUTF8_Solid(g_pFont, "My first Text!", g_textColor));
    //g_pText = SDL_DisplayFormatAlpha(TTF_RenderUTF8_Shaded(g_pFont, "My first Text!", g_textColor, g_bgColor));
    if(g_pText == NULL)
        return false;

    // Prepare the texture for the font
    GLenum textFormat;
    if(g_pText->format->BytesPerPixel == 4)
    {
        // alpha
        if(g_pText->format->Rmask == 0x000000ff)
            textFormat = GL_RGBA;
        else
            textFormat = GL_BGRA_EXT;
    }

    // Create the font's texture
    glGenTextures(1, &g_FontTextureId);
    glBindTexture(GL_TEXTURE_2D, g_FontTextureId);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, g_pText->format->BytesPerPixel, g_pText->w, g_pText->h, 0, textFormat, GL_UNSIGNED_BYTE, g_pText->pixels);
    // ...
}
DrawText() function
// This function is called each frame
void DrawText()
{
    SDL_Rect sourceRect;
    sourceRect.x = 0;
    sourceRect.y = 0;
    sourceRect.h = 10;
    sourceRect.w = 173;

    // DestRect is null so the rect is drawn at 0,0
    SDL_BlitSurface(g_pText, &sourceRect, g_pSurfaceDisplay, NULL);

    glBindTexture(GL_TEXTURE_2D, g_FontTextureId);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);

    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, 10.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f(173.0f, 10.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f(173.0f, 0.0f);
    glEnd();

    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_2D);
}

You've made a fairly common mistake. It's on the OpenGL end of things.
When you render the textured quad in DrawText(), you enable OpenGL's blending capability, but you never specify the blending function (i.e. how it should be blended)!
You need this code to enable regular alpha-blending in OpenGL:
glEnable( GL_BLEND );
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
This info used to be on the OpenGL website, but I can't find it now.
That should stop it from coming out solid red. The reason the others worked is that they're not alpha-blended: they're actually just red-on-black images with no alpha, so the blending function doesn't matter. The blended one, however, contains only red color plus an alpha channel that controls how opaque each pixel is.
I notice a few other small problems in your program though.
In the DrawText() function, you are blitting the surface using SDL and rendering with OpenGL. You should not use regular SDL blitting when using OpenGL; it doesn't work. So this line should not be there:
SDL_BlitSurface(g_pText, &sourceRect, g_pSurfaceDisplay, NULL);
Also, this line leaks memory:
g_pText = SDL_DisplayFormatAlpha( TTF_RenderUTF8_Blended(...) );
TTF_RenderUTF8_Blended() returns a pointer to SDL_Surface, which must be freed with SDL_FreeSurface(). Since you're passing it into SDL_DisplayFormatAlpha(), you lose track of it, and it never gets freed (hence the memory leak).
The good news is that you don't need SDL_DisplayFormatAlpha here because TTF_RenderUTF8_Blended returns a 32-bit surface with an alpha-channel anyway! So you can rewrite this line as:
g_pText = TTF_RenderUTF8_Blended(g_pFont, "My first Text!", g_textColor);
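Putting those fixes together, here is not the original code but a sketch of how the relevant pieces might end up (names such as g_pFont, g_pText, g_textColor and g_FontTextureId come from the question):
// In Init(): render the blended text straight into g_pText, no SDL_DisplayFormatAlpha.
g_pText = TTF_RenderUTF8_Blended(g_pFont, "My first Text!", g_textColor);
if (g_pText == NULL)
    return false;
// ... create the OpenGL texture from g_pText as before ...

// In DrawText(): drop the SDL_BlitSurface call and set a blend function.
glBindTexture(GL_TEXTURE_2D, g_FontTextureId);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// ... draw the quad with glBegin(GL_QUADS)/glEnd() as before ...
glDisable(GL_BLEND);
glDisable(GL_TEXTURE_2D);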

Related

Draw an IOSurface to an OpenGL context

I have an OpenGL context on which I draw successfully using OpenGL.
I need to draw a specific rectangle of an IOSurface to this context.
What is the best way to do this on 10.8?
NOTE:
I know how to do this on 10.9 using Core Image (by creating a CIImage from the IOSurface and rendering it with [CIContext drawImage:inRect:fromRect:]).
However, this does not work well for me on 10.8 (each row of the image is displayed with a different offset, and the image is distorted diagonally).
Edit: Here is the code that works on 10.9 but not on 10.8:
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
CIImage* ciImage = [[CIImage alloc] initWithIOSurface:surface plane:0 format:kCVPixelFormatType_32BGRA options:@{kCIImageColorSpace : (__bridge id)colorSpace}];

// Flip rect before passing to Core Image:
NSRect flippedFromRect = fromRect;
flippedFromRect.origin.y = IOSurfaceGetHeight(surface) - fromRect.origin.y - fromRect.size.height;

[ciContext drawImage:ciImage inRect:inRect fromRect:flippedFromRect];
CGColorSpaceRelease(colorSpace);
Here is a solution that wraps the IOSurface in an OpenGL texture and draws that texture to the screen. It assumes an API similar to [CIContext render:toIOSurface:bounds:colorSpace:], but with a vertically flipped OpenGL coordinate system.
// Draw surface on OpenGL context
{
    // Enable the rectangle texture extension
    glEnable(GL_TEXTURE_RECTANGLE_EXT);

    // 1. Create a texture from the IOSurface
    GLuint name;
    {
        CGLContextObj cgl_ctx = ...
        glGenTextures(1, &name);
        GLsizei surface_w = (GLsizei)IOSurfaceGetWidth(surface);
        GLsizei surface_h = (GLsizei)IOSurfaceGetHeight(surface);

        glBindTexture(GL_TEXTURE_RECTANGLE_EXT, name);
        CGLError cglError =
            CGLTexImageIOSurface2D(cgl_ctx, GL_TEXTURE_RECTANGLE_EXT, GL_RGBA, surface_w, surface_h, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, surface, 0);
        glBindTexture(GL_TEXTURE_RECTANGLE_EXT, 0);
    }

    // 2. Draw the texture to the current OpenGL context
    {
        glBindTexture(GL_TEXTURE_RECTANGLE_EXT, name);
        glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

        glBegin(GL_QUADS);
            glColor4f(0.f, 0.f, 1.0f, 1.0f);
            glTexCoord2f((float)NSMinX(fromRect), (float)NSMinY(fromRect));
            glVertex2f((float)NSMinX(inRect), (float)NSMinY(inRect));
            glTexCoord2f((float)NSMaxX(fromRect), (float)NSMinY(fromRect));
            glVertex2f((float)NSMaxX(inRect), (float)NSMinY(inRect));
            glTexCoord2f((float)NSMaxX(fromRect), (float)NSMaxY(fromRect));
            glVertex2f((float)NSMaxX(inRect), (float)NSMaxY(inRect));
            glTexCoord2f((float)NSMinX(fromRect), (float)NSMaxY(fromRect));
            glVertex2f((float)NSMinX(inRect), (float)NSMaxY(inRect));
        glEnd();

        glBindTexture(GL_TEXTURE_RECTANGLE_EXT, 0);
    }

    glDeleteTextures(1, &name);
}
If you need to draw in the display's color profile, you can explicitly call ColorSync and pass it your source and destination profiles. It will return a "recipe" for performing the color correction. That recipe actually contains a linearization, a color conversion (a 3x3 conversion matrix), and a gamma.
FragmentInfo = ColorSyncTransformCopyProperty (transform, kColorSyncTransformFullConversionData, NULL);
If you like, you can combine all those operations into a 3D lookup table. That's actually what happens in the color management of many of the OS X frameworks and applications.
References:
Apple TextureUpload sample code
Draw IOSurfaces to another IOSurface
OpenGL Options for Advanced Color Management

Mask part of a texture on draw with OpenGL in a fixed pipeline

I am trying to figure out the best way to mask off sections of a texture when they are drawn. My issue is that I seem to have run out of alpha masks!
We are using OpenGL in a custom-built 2D game engine. The game is built up from sprites and simple block textures.
My desired outcome is like this:
A character sprite is drawn in place (using its alpha channel so it is not just a box)
An item is drawn into the player's hand (also using its alpha channel to draw into the scene without being a box)
The item should appear behind the character's arm/hand, but above the rest of the body.
For the moment the only way I can figure out how to accomplish this is by drawing them in order (Body, Item, Arm), but I would like to avoid this to make art assets a bit easier to deal with. My ideal solution would be to draw the character, then draw the item with an alpha mask that blocks out areas of the texture that should be "under" the arm.
Other solutions that I have seen are like this one, where the glBlendFuncSeparate() function is used. I am trying to avoid bringing in extensions, as my current version of OpenGL doesn't support it. Not that I am opposed to the idea, but it seems a bit of a hassle to bring it in just to draw an alpha mask?
I fully admit that this is a learning process for me, and I am using it as an excuse to really see how OpenGL handles. Any suggestions as to where I should head to get this to draw correctly? Is there a way for OpenGL in the fixed pipeline to take a texture, apply an alpha mask on top of it, and THEN draw it into the buffer? Should I give in and separate my character into several parts of its model?
[UPDATE: 8/12/12]
Tried to add the code suggested by Tim, but I seem to be having an issue. When I enable the stencil buffer, everything just gets blocked out, NOT just what I wanted. Here is my test example code.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

// Disable writing to any of the color fields
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
glStencilFunc(GL_ALWAYS, 0, 0);

// Draw our blocking poly
glBegin(GL_POLYGON);
    glVertex2f(50, 50);
    glVertex2f(50, 50 + 128);
    glVertex2f(50 + 128, 50 + 128);
glEnd();

glStencilFunc(GL_GREATER, 0, -1);
glEnable(GL_STENCIL_TEST);

// Re-enable drawing of colors
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

// Enable use of textures
glEnable(GL_TEXTURE_2D);

// Bind desired texture for drawing
glBindTexture(GL_TEXTURE_2D, (&texture)[0]);

// Draw the box with colors
glBegin(GL_QUADS);
    glTexCoord2d(0, 0); glVertex2f(50, 50);
    glTexCoord2d(0, 1); glVertex2f(50, 50 + 128);
    glTexCoord2d(1, 1); glVertex2f(50 + 128, 50 + 128);
    glTexCoord2d(1, 0); glVertex2f(50 + 128, 50);
glEnd();

// Swap buffers and display!
SDL_GL_SwapBuffers();
Just to be clear, here is my init code as well to set this system up.
When the code is run with stencil disabled, I get this:
When I use glEnable(GL_STENCIL_TEST), I get this:
I've tried playing around with various options, but I cannot see a clear reason why my stencil buffer is blocking everything.
[Update #2: 8/12/12]
We got some working code. Thanks, Tim! Here is what I ended up running to get it to work correctly.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

// Disable writing to any of the color fields
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilOp(GL_INCR, GL_INCR, GL_INCR);
glEnable(GL_STENCIL_TEST);

// Draw our blocking poly
glBegin(GL_POLYGON);
    glVertex2f(50, 50);
    glVertex2f(50, 50 + 128);
    glVertex2f(50 + 128, 50 + 128);
glEnd();

glStencilFunc(GL_EQUAL, 1, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);

// Re-enable drawing of colors
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

// Enable use of textures
glEnable(GL_TEXTURE_2D);

// Bind desired texture for drawing
glBindTexture(GL_TEXTURE_2D, (&texture)[0]);

// Draw the box with colors
glBegin(GL_QUADS);
    glTexCoord2d(0, 0); glVertex2f(50, 50);
    glTexCoord2d(0, 1); glVertex2f(50, 50 + 128);
    glTexCoord2d(1, 1); glVertex2f(50 + 128, 50 + 128);
    glTexCoord2d(1, 0); glVertex2f(50 + 128, 50);
glEnd();

glDisable(GL_STENCIL_TEST);

// Swap buffers and display!
SDL_GL_SwapBuffers();
Here's my idea for the situation where you have one texture and one alpha mask:
Draw the character onto the scene like normal.
Lock the RGB color channels so that they cannot be changed, using glColorMask
Setup the stencil buffer with glStencilOp(GL_KEEP, GL_KEEP, GL_INCR); glStencilFunc(GL_ALWAYS, 0,0);
Draw the alpha mask with alpha testing enabled. This will increment the stencil buffer anywhere the alpha test passes (you may have to flip this based on your mask polarity)
At this point, you have a character texture in the framebuffer, and a mask outline in the stencil buffer.
Re-enable the color channels with glColorMask
Setup the stencil buffer for the weapon with glStencilFunc(GL_GREATER, 0, -1); This will only draw the weapon texels where the stencil buffer is greater than zero, and reject pixels where the stencil is not updated.
Draw the weapon texture as normal.
Tim was pretty clear in his comment, but I want to present you the solution I find the most intuitive. It's 3D, so hold on... ;)
Basically, you can just use the Z coordinate of your images to create virtual "layers". It then doesn't matter in which order you draw them. Just alpha-test every image individually and draw each one at its correct Z value. If that still isn't enough, you could use a separate texture containing the "depth" of every pixel, and then use that second texture to perform some sort of depth testing.
Be sure to call glEnable(GL_DEPTH_TEST); if you want to use this approach.
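A minimal fixed-pipeline sketch of this layering idea, assuming the sprites are RGBA textures; the texture IDs (bodyTex, itemTex, armTex), the quad positions, and the helper are placeholders, and which Z value ends up "in front" depends on your projection setup:
// Hypothetical helper: draws a textured quad at depth z.
static void drawTexturedQuad(GLuint tex, float x, float y, float w, float h, float z)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex3f(x,     y,     z);
        glTexCoord2f(0, 1); glVertex3f(x,     y + h, z);
        glTexCoord2f(1, 1); glVertex3f(x + w, y + h, z);
        glTexCoord2f(1, 0); glVertex3f(x + w, y,     z);
    glEnd();
}

// Per frame: clear depth, enable depth and alpha testing, then draw the
// layers in any order; the depth buffer sorts them, and the alpha test
// discards transparent texels so they don't occlude layers behind them.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.5f);
glEnable(GL_TEXTURE_2D);

drawTexturedQuad(bodyTex, 50, 50, 128, 128, 0.1f);  // body: back layer
drawTexturedQuad(itemTex, 50, 50, 128, 128, 0.2f);  // item: middle layer
drawTexturedQuad(armTex,  50, 50, 128, 128, 0.3f);  // arm: front layer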
As I see it, the problem is that you have one texture, but part of it represents the arm and part of it the rest of the character. The issue is that you want to draw the weapon over the character, but draw the arm over both.
This means, while drawing two objects, you want to put them into three different "layers". This fundamentally doesn't make sense, so you're kind of stuck.
Here's an idea though: use a fragment program (i.e., a shader).
I suggest you overload the character's texture's alpha channel to encode both transparency and layer. For example, let's use 0=transparent body, 64=opaque body, 128=transparent arm, 255=opaque arm.
From here, you draw your objects, but conditionally set the depth of your fragments so that the two parts of the character land in different layers: the body gets pushed backward while the arm gets pulled forward. When the weapon is drawn, it is drawn without a shader, but it is depth-tested against the character's pixels. It works something like this (untested, obviously).
Define a shader my_shader, which contains a fragment program:
uniform sampler2D character_texture;

void main(void) {
    vec4 sample = texture2D(character_texture, gl_TexCoord[0].st);

    // Figure out what type of character texel we're looking at
    int type;
    if      (abs(sample.a - 0.00) < 0.01) type = 0; // transparent body
    else if (abs(sample.a - 0.25) < 0.01) type = 1; // opaque body
    else if (abs(sample.a - 0.50) < 0.01) type = 2; // transparent arm
    else if (abs(sample.a - 1.00) < 0.01) type = 3; // opaque arm

    // Don't draw transparent pixels.
    if (type == 0 || type == 2) discard;

    gl_FragColor = vec4(sample.rgb, 1.0);

    // Normally, you (can) write "gl_FragDepth = gl_FragCoord.z". This
    // is how OpenGL will draw your weapon. However, for the character,
    // we alter that so that the arm is closer and the body is farther.
    if      (type == 1) gl_FragDepth = gl_FragCoord.z * 1.1; // move body farther
    else if (type == 3) gl_FragDepth = gl_FragCoord.z * 0.9; // move arm closer
}
Here's some pseudocode for your draw function:
//...
//Algorithm to draw your character
glUseProgram(my_shader);
glBindTexture(GL_TEXTURE_2D,character.texture.texture_gl_id);
glUniform1i(glGetUniformLocation(my_shader,"character_texture"),1);
character.draw();
glUseProgram(0);
//Draw your weapon
glEnable(GL_DEPTH_TEST);
character.weapon.draw();
glDisable(GL_DEPTH_TEST);
//...

OpenGL Texture transparency doesn't work

I have an OpenGL texture that is bound to a simple quad.
My problem: my texture is a 128x128 pixel image, but I'm only filling up about 100x60 pixels of it; the other pixels are transparent. I saved it in a .png file. When I draw, the transparent part of the bound texture is white.
Let's say I have a background. When I draw this new quad over the background, I can't see through the transparent part of my texture.
Any suggestions?
Code:
// Init code...
gl.glEnable(gl.GL_TEXTURE_2D);
gl.glDisable(gl.GL_DITHER);
gl.glDisable(gl.GL_LIGHTING);
gl.glDisable(gl.GL_DEPTH_TEST);
gl.glTexEnvi(gl.GL_TEXTURE_ENV, gl.GL_TEXTURE_ENV_MODE, gl.GL_MODULATE);

// Drawing code...
gl.glBegin(gl.GL_QUADS);
    gl.glTexCoord2d(0.0, 0.0); gl.glVertex3f(0.0f, 0.0f, 0.0f);
    gl.glTexCoord2d(1.0, 0.0); gl.glVertex3f(1.0f, 0.0f, 0.0f);
    gl.glTexCoord2d(1.0, 1.0); gl.glVertex3f(1.0f, 1.0f, 0.0f);
    gl.glTexCoord2d(0.0, 1.0); gl.glVertex3f(0.0f, 1.0f, 0.0f);
gl.glEnd();
I've tried almost everything, from enabling blending to changing to GL_REPLACE, but I can't get it to work.
Edit:
// Texture. Have tested both gl.GL_RGBA and gl.GL_RGB8.
gl.glTexImage2D(gl.GL_TEXTURE_2D, 0, (int)gl.GL_RGBA, imgWidth, imgHeight,
0, gl.GL_BGR_EXT, gl.GL_UNSIGNED_BYTE, bitmapdata.Scan0);
Check that your texture is of RGBA format, and enable blending and set the blending func:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
And draw the texture. If your texture is not RGBA, then there is no alpha and blending won't do anything.
EDIT: Since you posted your code, I can spot a serious error:
glTexImage2D(gl.GL_TEXTURE_2D, 0, (int)gl.GL_RGBA, imgWidth, imgHeight, 0, gl.GL_BGR_EXT, gl.GL_UNSIGNED_BYTE, bitmapdata.Scan0);
You're telling GL that the texture has internalFormat RGBA, but the bitmap data has BGR format, so there is no alpha in your texture data; GL then assumes alpha = 1.0.
To correct it, load your PNG with RGBA format and use GL_RGBA as internalFormat and format parameters for glTexImage2D.
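As a rough illustration (the pixel pointer name is made up; the point is simply that the format parameters match data that actually carries an alpha channel):
// Upload RGBA pixel data so the alpha channel from the PNG reaches the texture.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imgWidth, imgHeight,
             0, GL_RGBA, GL_UNSIGNED_BYTE, rgbaPixels);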
When I'm drawing, the transparent part of the bound texture is white.
That means your PNG parser converted the transparent regions to white. If you want to render transparent layers with OpenGL, you don't typically depend on the texture file alone to hold the transparency; instead you use glBlendFunc(). More information here:
http://www.opengl.org/resources/faq/technical/transparency.htm
Also, should you render to a frame buffer and copy the result into a texture, check that the frame buffer has alpha turned on. For example, when using osgViewer this can be achieved as follows (do this before calling setUpViewInWindow):
osg::DisplaySettings *pSet = myviewer.getDisplaySettings();
if(pSet == NULL)
    pSet = new osg::DisplaySettings();
pSet->setMinimumNumAlphaBits(8);
myviewer.setDisplaySettings(pSet);
and under Qt it should work with (from http://forum.openscenegraph.org/viewtopic.php?t=6411):
QGLFormat f;
f.setAlpha(true);               // enables alpha channel for this format
QGLFormat::setDefaultFormat(f); // set it as default before instantiations
setupUi(this);                  // instantiates QGLWidget (ViewerQT)
Normally it is better to render directly into a frame buffer, but I came across this while preparing some legacy code, and in the beginning it was very hard to track down.

Setting glutBitmapCharacter color?

Just wondering if someone can help me track down my issue with the following code, where the text color is not being set correctly (it just renders whatever color is in the background).
void RenderText(int x, int y, const char *string)
{
    int i, len;

    glUseProgram(0);
    glLoadIdentity();
    glColor3f(1.0f, 1.0f, 1.0f);
    glTranslatef(0.0f, 0.0f, -5.0f);
    glRasterPos2i(x, y);
    glDisable(GL_TEXTURE_2D);

    for (i = 0, len = strlen(string); i < len; i++)
    {
        glutBitmapCharacter(GLUT_BITMAP_8_BY_13, (int)string[i]);
    }

    glEnable(GL_TEXTURE_2D);
}
I've checked all the usual things (I think): disabling texturing, setting the color before calling glRasterPos, etc. I've disabled shaders, but I'm still having issues.
Looks like you've forgotten to glDisable(GL_LIGHTING) before drawing your string.
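Within the RenderText() function above, that might look like this (a sketch; re-enable lighting afterwards if the rest of the scene needs it):
glDisable(GL_LIGHTING);       // lighting would otherwise affect the color used for the bitmap
glColor3f(1.0f, 1.0f, 1.0f);  // set the desired text color...
glRasterPos2i(x, y);          // ...before glRasterPos, which latches the raster color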
No color is stored with an OpenGL bitmap (which is what glutBitmapCharacter draws); the bitmap is monochrome and stores only shape.
When the bitmap is drawn (e.g. with glBitmap or glCallLists), the current raster color is used. The raster color is not always the same as the active color; see http://www.opengl.org/wiki/Coloring_a_bitmap.
Color is usually controlled with the glColor3f function, so if the text is white and shouldn't be, the following change should help:
glLoadIdentity();
glColor3f(0.5f, 0.5f, 0.5f); //<-- this line controls the color (now text is gray)
glTranslatef(0.0f, 0.0f, -5.0f);
glRasterPos2i(x, y);
Also, calling glDisable(GL_TEXTURE_2D) and glEnable(GL_TEXTURE_2D) is unnecessary. Instead you can just call glBindTexture(GL_TEXTURE_2D,0) to disable textures and then use the same function to set the active texture. Just make sure to call glEnable(GL_TEXTURE_2D) in your initialization function.

Alpha/texturing issues in an OpenGL wrapper

I'm in the process of writing a wrapper for some OpenGL functions. The goal is to wrap the context used by the game Neverwinter Nights in order to apply post-processing shader effects. After learning OpenGL (this is my first attempt to use it) and much playing with DLLs and redirection, I have a somewhat working system.
However, when the post-processing fullscreen quad is active, all texturing and transparency drawn by the game are lost. This shouldn't be possible, because all my functions take effect after the game has completely finished its own rendering.
The code does not use renderbuffers or framebuffers (both refused to compile on my system in any way, with or without GLEW or GLee, despite being supported and usable by other programs). Eventually, I put together this code to handle copying the texture from the buffer and rendering a fullscreen quad:
extern "C" SEND BOOL WINAPI hook_wglSwapLayerBuffers(HDC h, UINT v)
{
if ( frameCount > 250 )
{
frameCount++;
if ( frameCount == 750 ) frameCount = 0;
if ( nwshader->thisframe == NULL )
{
createTextures();
}
glBindTexture(GL_TEXTURE_2D, nwshader->thisframe);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, nwshader->width, nwshader->height, 0);
glClearColor(0.0f, 0.5f, 0.0f, 0.5f);
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_TEXTURE_2D);
glDisable(GL_DEPTH_TEST);
glBlendFunc(GL_ONE, GL_ZERO);
glEnable(GL_BLEND);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho( 0, nwshader->width , nwshader->height , 0, -1, 1 );
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
glBegin(GL_POLYGON);
glTexCoord2f(0.0f, 1.0f);
glVertex2d(0, 0);
glTexCoord2f(0.0f, 0.0f);
glVertex2d(0, nwshader->height);
glTexCoord2f(1.0f, 0.0f);
glVertex2d(nwshader->width, nwshader->height);
glTexCoord2f(1.0f, 1.0f);
glVertex2d(nwshader->width, 0);
glEnd();
glMatrixMode( GL_PROJECTION );
glPopMatrix();
glMatrixMode( GL_MODELVIEW );
glPopMatrix();
glEnable(GL_DEPTH_TEST);
glDisable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);
} else {
frameCount++;
}
if ( h == grabbedDevice )
{
Log->logline("Swapping buffer on cached device.");
}
return wglSwapLayerBuffers(h,v);
}
This code functions almost perfectly and has no notable slow-down. However, when it is active (I added the frameCount condition to turn it on and off every ~5 seconds), all alpha and texturing are completely ignored by the game renderer. I'm not turning off any kind of blending or texturing before this function (the only other OpenGL calls are the ones that create the nwshader->thisframe texture).
I was able to catch a few screenshots of what's happening:
Broken A: http://i4.photobucket.com/albums/y145/peachykeen000/outside_brokenA.png
Broken B: http://i4.photobucket.com/albums/y145/peachykeen000/outside_brokenB.png
(note, in B, the smoke in the back is not broken, it is correctly transparent. So is the HUD.)
Broken Interior: http://i4.photobucket.com/albums/y145/peachykeen000/transparency_broken.png
Correct Interior (for comparison): http://i4.photobucket.com/albums/y145/peachykeen000/transparency_proper.png
The drawing of the quad also breaks menus, turning the whole thing into a black surface with a single white box. I suspect it is a problem with either depth or how the game is drawing certain objects, or a state that is not being reset properly. I've used GLintercept to dump a full log of all calls in a frame, and didn't see anything wrong (the call to wglSwapLayerBuffers is always last).
Being brand new to working with OpenGL, I really have no clue what's going wrong (or how to fix it) and nothing I've tried has helped. What am I missing?
I don't quite understand how your code is supposed to integrate with the Neverwinter Nights code. However...
It seems like you're most likely changing some setting that the existing code didn't expect to change.
Based on the description of the problem, I'd try removing the following line:
glDisable(GL_TEXTURE_2D);
That line disables textures, which certainly sounds like the problem you're seeing.
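More generally, since the hook flips state (texturing, depth test, blend function) that the game never expects to change mid-frame, one defensive option, beyond what the answer suggests, is to save and restore that state around the overlay drawing, for example:
// Sketch: preserve the fixed-function state the overlay touches.
glPushAttrib(GL_ENABLE_BIT | GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_TEXTURE_BIT);

// ... bind nwshader->thisframe, set up the ortho projection, draw the quad ...

glPopAttrib();  // restores the enables, blend function, depth settings and texture binding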