I would like to make a game that is internally 320x240, but renders to the screen at whole-number multiples of this (640x480, 960x720, etc.). I am going for retro 2D pixel graphics.
I have achieved this by setting the internal resolution via glOrtho():
glOrtho(0, 320, 240, 0, 0, 1);
And then I scale up the output resolution by a factor of 3, like this:
glViewport(0,0,960,720);
window = SDL_CreateWindow("Title", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 960, 720, SDL_WINDOW_OPENGL);
I draw rectangles like this:
glBegin(GL_LINE_LOOP);
glVertex2f(rect_x, rect_y);
glVertex2f(rect_x + rect_w, rect_y);
glVertex2f(rect_x + rect_w, rect_y + rect_h);
glVertex2f(rect_x, rect_y + rect_h);
glEnd();
It works perfectly at 320x240 (not scaled):
When I scale up to 960x720, the pixel rendering all works just fine! However, it seems the GL_LINE_LOOP is not drawn on a 320x240 canvas and scaled up, but drawn on the final 960x720 canvas. The result is 1px lines in a 3px world :(
How do I draw lines to the 320x240 glOrtho canvas, instead of the 960x720 output canvas?
There is no "320x240 glOrtho canvas". There is only the window's actual resolution: 960x720.
All you are doing is scaling up the coordinates of the primitives you render. So, your code says to render a line from, for example, (20, 20) to (40, 40). And OpenGL (eventually) scales those coordinates by 3 in each dimension: (60, 60) and (120, 120).
But that's only dealing with the end points. What happens in the middle is still based on the fact that you're rendering at the window's actual resolution.
Even if you employed glLineWidth to change the width of your lines, that would only fix the line widths. It would not fix the fact that the rasterization of lines is based on the actual resolution you're rendering at. So diagonal lines won't have the pixelated appearance you likely want.
The only way to do this properly is to, well, do it properly. Render to an image that is actually 320x240, then draw it to the window's actual resolution.
You'll have to create a texture of that size, then attach it to a framebuffer object. Bind the FBO for rendering and render to it (with the viewport set to the image's size). Then unbind the FBO, and draw that texture to the window (with the viewport set to the window's resolution).
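A minimal sketch of that approach, assuming framebuffer objects are available (core OpenGL 3.0 or the ARB/EXT framebuffer object extension, e.g. via GLEW); lowResTex, lowResFbo and DrawScene() are placeholder names for your own texture, FBO and existing 320x240 drawing:
// One-time setup: a 320x240 color texture attached to an FBO.
GLuint lowResTex = 0, lowResFbo = 0;
glGenTextures(1, &lowResTex);
glBindTexture(GL_TEXTURE_2D, lowResTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 320, 240, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // keep the pixels crisp
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glGenFramebuffers(1, &lowResFbo);
glBindFramebuffer(GL_FRAMEBUFFER, lowResFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, lowResTex, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Every frame, pass 1: render the game into the FBO at 320x240.
glBindFramebuffer(GL_FRAMEBUFFER, lowResFbo);
glViewport(0, 0, 320, 240);
DrawScene(); // your existing glOrtho(0,320,240,0,0,1) drawing, lines included

// Pass 2: draw the low-res texture to the window at its real resolution.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, 960, 720);
glMatrixMode(GL_PROJECTION); glLoadIdentity();
glMatrixMode(GL_MODELVIEW);  glLoadIdentity();
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, lowResTex);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(1, 0); glVertex2f(+1, -1);
glTexCoord2f(1, 1); glVertex2f(+1, +1);
glTexCoord2f(0, 1); glVertex2f(-1, +1);
glEnd();
glDisable(GL_TEXTURE_2D);
SDL_GL_SwapWindow(window);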
As I mentioned in my comment, Intel OpenGL drivers have problems with direct rendering to texture and I do not know of any workaround that works. In such cases the only way around this is to use glReadPixels to copy the screen content into CPU memory and then copy it back to the GPU as a texture. Of course that is much, much slower than direct rendering to texture. So here is the deal:
1. Set the low-res view: do not change the resolution of your window, just the glViewport values, then render your scene at the low resolution (in just a fraction of the screen space).
2. Copy the rendered screen into a texture.
3. Set the target-resolution view.
4. Render the texture, and do not forget to use the GL_NEAREST filter.
5. The most important thing: swap buffers only after this, not before, otherwise you would have flickering.
Here is C++ source for this:
void gl_draw()
{
// render resolution and multiplier
const int xs=320,ys=200,m=2;
// [low res render pass]
glViewport(0,0,xs,ys);
glClearColor(0.0,0.0,0.0,1.0);
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glDisable(GL_DEPTH_TEST);
glDisable(GL_TEXTURE_2D);
// 50 random lines
RandSeed=0x12345678; // RandSeed / Random() are custom PRNG helpers (not shown)
glColor3f(1.0,1.0,1.0);
glBegin(GL_LINES);
for (int i=0;i<100;i++)
glVertex2f(2.0*Random()-1.0,2.0*Random()-1.0);
glEnd();
// [multiplied resolution render pass]
static bool _init=true;
static GLuint txrid=0; // texture id (static so it persists between frames)
BYTE map[xs*ys*3]; // RGB
// init texture
if (_init) // you should also delete the texture on exit of app ...
{
// create texture
_init=false;
glGenTextures(1,&txrid);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D,txrid);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,GL_NEAREST); // must be nearest !!!
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE,GL_REPLACE); // use the texel color as-is (GL_COPY is not a valid env mode)
glDisable(GL_TEXTURE_2D);
}
// copy low res screen to CPU memory
glReadPixels(0,0,xs,ys,GL_RGB,GL_UNSIGNED_BYTE,map);
// and then to GPU texture
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D,txrid);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, xs, ys, 0, GL_RGB, GL_UNSIGNED_BYTE, map);
// set multiplied resolution view
glViewport(0,0,m*xs,m*ys);
glClear(GL_COLOR_BUFFER_BIT);
// render low res screen as texture
glBegin(GL_QUADS);
glTexCoord2f(0.0,0.0); glVertex2f(-1.0,-1.0);
glTexCoord2f(0.0,1.0); glVertex2f(-1.0,+1.0);
glTexCoord2f(1.0,1.0); glVertex2f(+1.0,+1.0);
glTexCoord2f(1.0,0.0); glVertex2f(+1.0,-1.0);
glEnd();
glDisable(GL_TEXTURE_2D);
glFlush();
SwapBuffers(hdc); // swap buffers only here !!!
}
And preview:
I tested this on some Intel HD graphics (god knows which version) I had at my disposal and it works (while the standard render-to-texture approaches do not).
When I render my text using TTF_RenderUTF8_Blended I obtain a solid rectangle on the screen. The color depends on the one I choose, in my case the rectangle is red.
My question
What am I missing? It seems like I'm not getting the proper Alpha values from the surface generated with SDL_DisplayFormatAlpha(TTF_RenderUTF8_Blended( ... )), or am I? Does anyone recognize or know the problem?
Additional information
If I use TTF_RenderUTF8_Solid or TTF_RenderUTF8_Shaded the text is drawn properly, but not blended of course.
I am also drawing other textures on the screen, so I draw the text last to ensure the blending will take into account the current surface.
Edit: SDL_Color g_textColor = {255, 0, 0, 0}; <-- I tried with and without the alpha value, but I get the same result.
I have tried to summarize the code without removing too much details. Variables prefixed with "g_" are global.
Init() function
// This function creates the required texture.
bool Init()
{
// ...
g_pFont = TTF_OpenFont("../arial.ttf", 12);
if(g_pFont == NULL)
return false;
// Write text to surface
g_pText = SDL_DisplayFormatAlpha(TTF_RenderUTF8_Blended(g_pFont, "My first Text!", g_textColor)); //< Doesn't work
// Note that Solid and Shaded Does work properly if I uncomment them.
//g_pText = SDL_DisplayFormatAlpha(TTF_RenderUTF8_Solid(g_pFont, "My first Text!", g_textColor));
//g_pText = SDL_DisplayFormatAlpha(TTF_RenderUTF8_Shaded(g_pFont, "My first Text!", g_textColor, g_bgColor));
if(g_pText == NULL)
return false;
// Prepare the texture for the font
GLenum textFormat;
if(g_pText->format->BytesPerPixel == 4)
{
// alpha
if(g_pText->format->Rmask == 0x000000ff)
textFormat = GL_RGBA;
else
textFormat = GL_BGRA_EXT;
}
// Create the font's texture
glGenTextures(1, &g_FontTextureId);
glBindTexture(GL_TEXTURE_2D, g_FontTextureId);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, g_pText->format->BytesPerPixel, g_pText->w, g_pText->h, 0, textFormat, GL_UNSIGNED_BYTE, g_pText->pixels);
// ...
}
DrawText() function
// this function is called each frame
void DrawText()
{
SDL_Rect sourceRect;
sourceRect.x = 0;
sourceRect.y = 0;
sourceRect.h = 10;
sourceRect.w = 173;
// DestRect is null so the rect is drawn at 0,0
SDL_BlitSurface(g_pText, &sourceRect, g_pSurfaceDisplay, NULL);
glBindTexture(GL_TEXTURE_2D, g_FontTextureId);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBegin( GL_QUADS );
glTexCoord2f(0.0f, 0.0f);
glVertex2f(0.0f, 0.0f);
glTexCoord2f(0.0f, 1.0f);
glVertex2f(0.0f, 10.0f);
glTexCoord2f(1.0f, 1.0f);
glVertex2f(173.0f, 10.0f);
glTexCoord2f(1.0f, 0.0f);
glVertex2f(173.0f, 0.0f);
glEnd();
glDisable(GL_BLEND);
glDisable(GL_TEXTURE_2D);
}
You've made a fairly common mistake. It's on the OpenGL end of things.
When you render the textured quad in DrawText(), you enable OpenGL's blending capability, but you never specify the blending function (i.e. how it should be blended)!
You need this code to enable regular alpha-blending in OpenGL:
glEnable( GL_BLEND );
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
This info used to be on the OpenGL website, but I can't find it now.
That should stop it from coming out solid red. The reason the others worked is that they're not alpha-blended; they're actually just red-on-black images with no alpha, so the blending function doesn't matter. But the blended one only contains red color, with an alpha channel to make it less red.
I notice a few other small problems in your program though.
In the DrawText() function, you are blitting the surface using SDL and rendering with OpenGL. You should not use regular SDL blitting when using OpenGL; it doesn't work. So this line should not be there:
SDL_BlitSurface(g_pText, &sourceRect, g_pSurfaceDisplay, NULL);
Also, this line leaks memory:
g_pText = SDL_DisplayFormatAlpha( TTF_RenderUTF8_Blended(...) );
TTF_RenderUTF8_Blended() returns a pointer to SDL_Surface, which must be freed with SDL_FreeSurface(). Since you're passing it into SDL_DisplayFormatAlpha(), you lose track of it, and it never gets freed (hence the memory leak).
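If you did want to keep the SDL_DisplayFormatAlpha() conversion for some reason, the leak-free pattern is to hold on to the intermediate surface and free it afterwards, roughly:
SDL_Surface* blended = TTF_RenderUTF8_Blended(g_pFont, "My first Text!", g_textColor);
if(blended == NULL)
    return false;
g_pText = SDL_DisplayFormatAlpha(blended);
SDL_FreeSurface(blended); // free the intermediate surface so it no longer leaks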
The good news is that you don't need SDL_DisplayFormatAlpha here because TTF_RenderUTF8_Blended returns a 32-bit surface with an alpha-channel anyway! So you can rewrite this line as:
g_pText = TTF_RenderUTF8_Blended(g_pFont, "My first Text!", g_textColor);
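Putting the pieces together, the relevant parts end up looking roughly like this (a sketch reusing your variable names; the textured quad itself is unchanged):
// In Init(): TTF_RenderUTF8_Blended already produces a 32-bit RGBA surface.
g_pText = TTF_RenderUTF8_Blended(g_pFont, "My first Text!", g_textColor);
if(g_pText == NULL)
    return false;
glGenTextures(1, &g_FontTextureId);
glBindTexture(GL_TEXTURE_2D, g_FontTextureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, g_pText->w, g_pText->h, 0,
             (g_pText->format->Rmask == 0x000000ff) ? GL_RGBA : GL_BGRA_EXT,
             GL_UNSIGNED_BYTE, g_pText->pixels);

// In DrawText(): no SDL_BlitSurface, and the blend function is specified.
glBindTexture(GL_TEXTURE_2D, g_FontTextureId);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// ... draw the textured quad exactly as before ...
glDisable(GL_BLEND);
glDisable(GL_TEXTURE_2D);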
I am trying to figure out the best way to mask off sections of a texture when they are drawn. My issue comes from the fact that I seem to have run out of alpha masks!
We are using OpenGL to draw a custom-built 2D game engine. The game is built up of sprites and simple block textures.
My desired outcome is like this:
A character sprite is drawn in place (using its alpha color to not just be a box)
An item is drawn into the player's hand (also using its alpha color to draw into the scene without being a box)
The item should appear behind the character's arm/hand, but above the rest of the body.
For the moment, the only way I can figure out how to accomplish this is by drawing them in order (Body, Item, Arm), but I would like to avoid this to make art assets a bit easier to deal with. My ideal solution would be to draw the character, then draw the item with an alpha mask that blocks out areas of the texture that should be "under" the arm.
Other solutions that I have seen are like this, where the glBlendFuncSeparate() function is used. I am trying to avoid bringing in extensions, as my current version of OpenGL doesn't support it. Not to say that I am opposed to the idea, but it seems a bit of a hassle to bring one in just to draw an alpha mask?
I fully admit that this is a learning process for me, and I am using it as an excuse to really see how OpenGL handles. Any suggestions as to where I should head to get this to draw correctly? Is there a way for OpenGL in the fixed pipeline to take a texture, apply an alpha mask on top of it, and THEN draw it into the buffer? Should I give in and separate my character into several parts of its model?
[UPDATE: 8/12/12]
Tried to add the code suggested by Tim, but I seem to be having an issue. When I enable the stencil buffer, everything just gets blocked out, NOT just what I wanted. Here is my test example code.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
// Disable writing to any of the color fields
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
glStencilFunc(GL_ALWAYS, 0,0);
// Draw our blocking poly
glBegin(GL_POLYGON);
glVertex2f( 50, 50 );
glVertex2f( 50, 50+128 );
glVertex2f( 50+128, 50+128 );
glEnd();
glStencilFunc(GL_GREATER, 0, -1);
glEnable(GL_STENCIL_TEST);
// Re enable drawing of colors
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
// Enable use of textures
glEnable(GL_TEXTURE_2D);
// Bind desired texture for drawing
glBindTexture(GL_TEXTURE_2D,(&texture)[0]);
// Draw the box with colors
glBegin(GL_QUADS);
glTexCoord2d( 0, 0 ); glVertex2f( 50, 50 );
glTexCoord2d( 0, 1 ); glVertex2f( 50, 50+128 );
glTexCoord2d( 1, 1 ); glVertex2f( 50+128, 50+128 );
glTexCoord2d( 1, 0 ); glVertex2f( 50+128, 50 );
glEnd();
// Swap buffers and display!
SDL_GL_SwapBuffers();
Just to be clear, here is my init code as well to set this system up.
When the code is run with stencil disabled, I get this:
When I use glEnable(GL_STENCIL_TEST), I get this:
I've tried playing around with various options, but I cannot see a clear reason why my stencil buffer is blocking everything.
[Update#2 8/12/12]
We got some working code; thanks, Tim! Here is what I ended up running to get it to work correctly.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
// Disable writing to any of the color fields
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilOp(GL_INCR, GL_INCR, GL_INCR);
glEnable(GL_STENCIL_TEST);
// Draw our blocking poly
glBegin(GL_POLYGON);
glVertex2f( 50, 50 );
glVertex2f( 50, 50+128 );
glVertex2f( 50+128, 50+128 );
glEnd();
glStencilFunc(GL_EQUAL, 1, 1);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
// Re enable drawing of colors
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
// Enable use of textures
glEnable(GL_TEXTURE_2D);
// Bind desired texture for drawing
glBindTexture(GL_TEXTURE_2D,(&texture)[0]);
// Draw the box with colors
glBegin(GL_QUADS);
glTexCoord2d( 0, 0 ); glVertex2f( 50, 50 );
glTexCoord2d( 0, 1 ); glVertex2f( 50, 50+128 );
glTexCoord2d( 1, 1 ); glVertex2f( 50+128, 50+128 );
glTexCoord2d( 1, 0 ); glVertex2f( 50+128, 50 );
glEnd();
glDisable(GL_STENCIL_TEST);
// Swap buffers and display!
SDL_GL_SwapBuffers();
Here's my idea for the situation where you have one texture and one alpha mask:
Draw the character onto the scene like normal.
Lock the RGB color channels so that they cannot be changed, using glColorMask
Setup the stencil buffer with glStencilOp(GL_KEEP, GL_KEEP, GL_INCR); glStencilFunc(GL_ALWAYS, 0,0);
Draw the alpha mask with alpha testing enabled. This will increment the stencil buffer anywhere the alpha test passes (you may have to flip this based on your mask polarity)
At this point, you have a character texture in the framebuffer, and a mask outline in the stencil buffer.
Reenable the color channels with glColorMask
Setup the stencil buffer for the weapon with glStencilFunc(GL_GREATER, 0, -1); This will only draw the weapon texels where the stencil buffer is greater than zero, and reject pixels where the stencil is not updated.
Draw the weapon texture as normal.
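A rough sketch of those steps in fixed-function GL (untested; armMaskTexture and weaponTexture are hypothetical texture ids, and as noted above the polarity may need flipping to match your mask):
// The character has already been drawn normally; now lock the color channels.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
// Stencil: increment wherever a mask fragment survives the alpha test.
glEnable(GL_STENCIL_TEST);
glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
glStencilFunc(GL_ALWAYS, 0, 0);
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.5f); // only opaque mask texels mark the stencil
glBindTexture(GL_TEXTURE_2D, armMaskTexture);
// ... draw the mask quad over the character here ...
glDisable(GL_ALPHA_TEST);
// Colors back on; draw the weapon only where the stencil was marked.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glStencilFunc(GL_LEQUAL, 1, 0xFF); // passes where the stencil value is 1 or more
glBindTexture(GL_TEXTURE_2D, weaponTexture);
// ... draw the weapon quad here ...
glDisable(GL_STENCIL_TEST);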
Tim was pretty clear in his comment, but I want to present the solution I find the most intuitive. It's 3D, so hold on... ;)
Basically, you can just use the Z coordinate of your images to create virtual "layers". It then doesn't matter in which order you draw them. Just alpha-test every image individually, and draw it at the correct Z value. If that still isn't enough, you could use a separate texture containing the "depth" of every pixel, and then use that second texture to perform some sort of depth testing.
Be sure to call glEnable(GL_DEPTH_TEST); if you want to use this approach.
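A minimal sketch of that layering idea (drawSprite, bodyTex, weaponTex and armTex are made-up names; flip the z values if your projection treats depth the other way around):
// Hypothetical helper: a textured quad of size w x h at (x, y), placed on layer z.
void drawSprite(GLuint tex, float x, float y, float w, float h, float z)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex3f(x,     y,     z);
    glTexCoord2f(1, 0); glVertex3f(x + w, y,     z);
    glTexCoord2f(1, 1); glVertex3f(x + w, y + h, z);
    glTexCoord2f(0, 1); glVertex3f(x,     y + h, z);
    glEnd();
}

// Draw order no longer matters: the depth test sorts the layers, and the
// alpha test keeps fully transparent texels from writing depth.
glEnable(GL_DEPTH_TEST);
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.5f);
glEnable(GL_TEXTURE_2D);
drawSprite(bodyTex,   50, 50, 128, 128, 0.1f); // farthest layer: body
drawSprite(weaponTex, 90, 80,  64,  64, 0.2f); // middle layer: item
drawSprite(armTex,    50, 50, 128, 128, 0.3f); // nearest layer: arm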
As I see it, the problem is that you have one texture, but part of it represents the arm and part of it the rest of the character. The issue is that you want to draw the weapon over the character, but draw the arm over both.
This means, while drawing two objects, you want to put them into three different "layers". This fundamentally doesn't make sense, so you're kind of stuck.
Here's an idea though: use a fragment program (i.e., a shader).
I suggest you overload the character's texture's alpha channel to encode both transparency and layer. For example, let's use 0=transparent body, 64=opaque body, 128=transparent arm, 255=opaque arm.
From here, you draw your objects, but conditionally set the depth of your objects into three layers. Basically, you write a fragment program that draws your character into two different layers: the character gets pushed backward while the arm gets pulled forward. When the weapon is drawn, it is drawn without a shader, but it is tested against the character's pixels' depths. It works something like this (untested, obviously).
Define a shader my_shader, which contains a fragment program:
uniform sampler2D character_texture;
void main(void) {
vec4 sample = texture2D(character_texture,gl_TexCoord[0].st);
int type = 0; //Figure out what type of character texel we're looking at
if (abs(sample.a-0.00)<0.01) type = 0; //transparent body
else if (abs(sample.a-0.25)<0.01) type = 1; //opaque body
else if (abs(sample.a-0.50)<0.01) type = 2; //transparent arm
else if (abs(sample.a-1.00)<0.01) type = 3; //opaque arm
//Don't draw transparent pixels.
if (type==0 || type==2) discard;
gl_FragColor = vec4(sample.rgb,1.0);
//Normally, you (can) write "gl_FragDepth = gl_FragCoord.z". This
//is how OpenGL will draw your weapon. However, for the character,
//we alter that so that the arm is closer and the body is farther.
//Move body farther
if (type==1) gl_FragDepth = gl_FragCoord.z * 1.1;
//Move arm closer
else if (type==3) gl_FragDepth = gl_FragCoord.z * 0.9;
}
Here's some pseudocode for your draw function:
//...
//Algorithm to draw your character
glUseProgram(my_shader);
glBindTexture(GL_TEXTURE_2D,character.texture.texture_gl_id);
glUniform1i(glGetUniformLocation(my_shader,"character_texture"),0); //texture unit 0, where the texture above is bound
character.draw();
glUseProgram(0);
//Draw your weapon
glEnable(GL_DEPTH_TEST);
character.weapon.draw();
glDisable(GL_DEPTH_TEST);
//...
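Creating my_shader is not shown above; a bare-bones sketch of compiling and linking just the fragment program (with fragment_source holding the GLSL text from above; status checks via glGetShaderiv/glGetProgramiv omitted for brevity) might look like:
const GLchar* fragment_source = "...the fragment program above...";
GLuint frag = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(frag, 1, &fragment_source, NULL);
glCompileShader(frag);    // check GL_COMPILE_STATUS in real code
GLuint my_shader = glCreateProgram();
glAttachShader(my_shader, frag);
glLinkProgram(my_shader); // check GL_LINK_STATUS in real code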
I'm in the process of writing a wrapper for some OpenGL functions. The goal is to wrap the context used by the game Neverwinter Nights, in order to apply post-processing shader effects. After learning OpenGL (this is my first attempt to use it) and much playing with DLLs and redirection, I have a somewhat working system.
However, when the post-processing fullscreen quad is active, all texturing and transparency drawn by the game are lost. This shouldn't be possible, because all my functions take effect after the game has completely finished its own rendering.
The code does not use renderbuffers or framebuffers (both refused to compile on my system in any way, with or without GLEW or GLee, despite being supported and usable by other programs). Eventually, I put together this code to handle copying the texture from the buffer and rendering a fullscreen quad:
extern "C" SEND BOOL WINAPI hook_wglSwapLayerBuffers(HDC h, UINT v)
{
if ( frameCount > 250 )
{
frameCount++;
if ( frameCount == 750 ) frameCount = 0;
if ( nwshader->thisframe == NULL )
{
createTextures();
}
glBindTexture(GL_TEXTURE_2D, nwshader->thisframe);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, nwshader->width, nwshader->height, 0);
glClearColor(0.0f, 0.5f, 0.0f, 0.5f);
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_TEXTURE_2D);
glDisable(GL_DEPTH_TEST);
glBlendFunc(GL_ONE, GL_ZERO);
glEnable(GL_BLEND);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho( 0, nwshader->width , nwshader->height , 0, -1, 1 );
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
glBegin(GL_POLYGON);
glTexCoord2f(0.0f, 1.0f);
glVertex2d(0, 0);
glTexCoord2f(0.0f, 0.0f);
glVertex2d(0, nwshader->height);
glTexCoord2f(1.0f, 0.0f);
glVertex2d(nwshader->width, nwshader->height);
glTexCoord2f(1.0f, 1.0f);
glVertex2d(nwshader->width, 0);
glEnd();
glMatrixMode( GL_PROJECTION );
glPopMatrix();
glMatrixMode( GL_MODELVIEW );
glPopMatrix();
glEnable(GL_DEPTH_TEST);
glDisable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);
} else {
frameCount++;
}
if ( h == grabbedDevice )
{
Log->logline("Swapping buffer on cached device.");
}
return wglSwapLayerBuffers(h,v);
}
This code functions almost perfectly and has no notable slow-down. However, when it is active (I added the frameCount condition to turn it on and off every ~5 seconds), all alpha and texturing are completely ignored by the game renderer. I'm not turning off any kind of blending or texturing before this function (the only OpenGL calls are to create the nwshader->thisframe texture).
I was able to catch a few screenshots of what's happening:
Broken A: http://i4.photobucket.com/albums/y145/peachykeen000/outside_brokenA.png
Broken B: http://i4.photobucket.com/albums/y145/peachykeen000/outside_brokenB.png
(note, in B, the smoke in the back is not broken, it is correctly transparent. So is the HUD.)
Broken Interior: http://i4.photobucket.com/albums/y145/peachykeen000/transparency_broken.png
Correct Interior (for comparison): http://i4.photobucket.com/albums/y145/peachykeen000/transparency_proper.png
The drawing of the quad also breaks menus, turning the whole thing into a black surface with a single white box. I suspect it is a problem with either depth or how the game is drawing certain objects, or a state that is not being reset properly. I've used GLintercept to dump a full log of all calls in a frame, and didn't see anything wrong (the call to wglSwapLayerBuffers is always last).
Being brand new to working with OpenGL, I really have no clue what's going wrong (or how to fix it) and nothing I've tried has helped. What am I missing?
I don't quite understand how your code is supposed to integrate with the Neverwinter Nights code. However...
It seems like you're most likely changing some setting that the existing code didn't expect to change.
Based on the description of the problem, I'd try removing the following line:
glDisable(GL_TEXTURE_2D);
That line disables textures, which certainly sounds like the problem you're seeing.
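If more than one piece of state turns out to be leaking, a blunt but effective option (a sketch, not tested against the game) is to save and restore the fixed-function state around your own drawing with glPushAttrib/glPopAttrib:
// At the start of the hooked swap, before touching any GL state:
glPushAttrib(GL_ALL_ATTRIB_BITS); // or a narrower mask such as GL_ENABLE_BIT | GL_COLOR_BUFFER_BIT | GL_TEXTURE_BIT
// ... glCopyTexImage2D, matrix push/pop and the fullscreen quad as before ...
// Afterwards, restore whatever the game had set instead of undoing it by hand:
glPopAttrib();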