OpenGL Depth Buffer Problem

For my last few projects I have been using some of the utility files that I found whilst looking at a few demos here.
Namely, a file called opengl.h, mainly used to manage shaders a bit like GLEW, and another file, gl_font.
gl_font is a class they use to render fonts on screen using vertex buffer objects.
However, when I use this to render the framerate in my game, it draws everything but the skybox correctly. For some reason the skybox is rendered white, as seen here; if I do not render the font, it looks like this.
Here are some parts of the gl_font class that I think are most important:
void GLFont::begin()
{
    HWND hWnd = GetForegroundWindow();
    RECT rcClient;
    GetClientRect(hWnd, &rcClient);

    int w = rcClient.right - rcClient.left;
    int h = rcClient.bottom - rcClient.top;

    glPushAttrib(GL_CURRENT_BIT | GL_LIGHTING_BIT);
    glDisable(GL_LIGHTING);

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, m_fontTexture);

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0.0f, w, h, 0.0f, -1.0f, 1.0f);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glBindBuffer(GL_ARRAY_BUFFER, m_vertexBuffer);
    drawTextBegin();
}
I have tried changing glPushAttrib(GL_CURRENT_BIT | GL_LIGHTING_BIT); to glPushAttrib(GL_CURRENT_BIT | GL_LIGHTING_BIT | GL_TEXTURE_BIT); and the background texture returns, but then the font isn't rendered.
void GLFont::end()
{
    drawTextEnd();
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glBindTexture(GL_TEXTURE_2D, 0);
    glDisable(GL_TEXTURE_2D);
    glDisable(GL_BLEND);

    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();

    glPopAttrib();
}
This is an image of the depth buffer when the font is rendered, and this is what it looks like when it is not.
Could anyone shed some light on this problem please?
Any help would be much appreciated!
Thanks.

Looks like begin() lacks a glPushMatrix() after glMatrixMode(GL_MODELVIEW). This might cause the scene to be rendered incorrectly when some text is also rendered.
Didn't glGetError() report a GL_STACK_UNDERFLOW error?
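For reference, a minimal sketch of begin()'s matrix handling with the missing push added (the rest of the method unchanged), so that the two glPopMatrix() calls in end() have matching pushes:

glMatrixMode(GL_PROJECTION);
glPushMatrix();   // saved; end() pops this
glLoadIdentity();
glOrtho(0.0f, w, h, 0.0f, -1.0f, 1.0f);

glMatrixMode(GL_MODELVIEW);
glPushMatrix();   // previously missing; end() pops this too
glLoadIdentity();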

Related

Saving a specific Camera View in OpenGL as Image

I have a 3D scene with an object and I would like to save a view of that object that is different from the one of the current screen I look at.
So I thought I'd just have to do something like this (pseudo code):
PushMatrix()
LoadIdentity()
TranslateAndRotate()
gluPerspective()
setViewport()
DrawScene()
saveScreenshot()
PopMatrix()
But I only get a picture of the current view of my camera, not the one I specified.
Did I forget something?
EDIT:
Because of the answer below, I tried the following code:
void ScenePhotograph(GLubyte* Target, float *Translation, float RotationAroundY)
{
    glMatrixMode(GL_PROJECTION);
    gluPerspective(54.0f, (GLfloat)openGLControl1->Width / (GLfloat)openGLControl1->Height, 1.0f, 3000.0f);
    glViewport(0, 0, openGLControl1->Width, openGLControl1->Height);
    glMatrixMode(GL_MODELVIEW);

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glTranslatef(Translation[0], Translation[1], Translation[2]);
    glRotatef(RotationAroundY, 0, 1, 0);
    openGLControl1_OnDrawGL(NULL, System::EventArgs::Empty);
    openGLControl1->Refresh();

    glReadPixels(0, 0, openGLControl1->Width, openGLControl1->Height, GL_RGB, GL_UNSIGNED_BYTE, Target);

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    cam->SetView();
    openGLControl1_OnDrawGL(NULL, System::EventArgs::Empty);
    openGLControl1->Refresh();

    glutSwapBuffers();
}
That is giving me an access violation at glutSwapBuffers(). Any ideas?
First, make sure you are not mixing different matrices in your code. To get a screenshot, you will need to position your camera exactly as you normally do to view the scene on your screen, but, before you swap the buffers, you read the pixels from the current framebuffer and store them as an image.
So, what you need is something like this:
glMatrixMode(GL_PROJECTION);
gluPerspective();
glMatrixMode(GL_MODELVIEW);

glClear(); // clear buffers here
loadIdentity();
setCameraPosition();
TranslateRotate();
DrawScene();
screenShot();

// do it all again to set your camera back to the correct position
glClear(); // clear buffers here
loadIdentity();
setCameraPosition();
TranslateRotate();
DrawScene();
swapBuffers();
As you can see, screenShot() takes care of reading the pixels from the current framebuffer and saving them as an image. Then you do everything again to put your camera back in the correct place.
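A minimal sketch of what screenShot() could look like using glReadPixels (the pixels, width, and height parameters are assumptions; the buffer must hold at least width * height * 3 bytes):

void screenShot(GLubyte* pixels, int width, int height)
{
    glPixelStorei(GL_PACK_ALIGNMENT, 1); // tightly packed rows; the default of 4 pads odd widths
    glReadBuffer(GL_BACK);               // read the back buffer before it is swapped
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    // Note: rows arrive bottom-to-top, so the image is upside down
    // relative to most file formats and must be flipped before saving.
}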

Drawing points with GL_POINTS and GL_POINT_SMOOTH results in two pixel tall points?

I have the following:
glHint(GL_POINT_SMOOTH, GL_NICEST);
glEnable(GL_POINT_SMOOTH);
The issue is that when I attempt to draw a pixel:
DrawPoints(float x1, float y1)
{
    glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
    glPointSize(1);
    glBegin(GL_POINTS);
    glVertex2f(x1, y1);
    glEnd();
}
I get points on the screen that are two pixels wide and two pixels tall. The width issue was solved by making sure that x1 ends in .5, but forcing the y1 variable to end in .5 did not fix the height issue. The points are always two pixels tall despite the point size being set to one.
How could I solve this?
EDIT:
Took a screen shot of the issue in question. It is drawing out a sine wave on the screen.
EDIT 2:
Here's the full initialization code:
if(SDL_Init(SDL_INIT_EVERYTHING) < 0) {
    fprintf(stderr, "%s:%d\n SDL_Init call failed.\n", __FILE__, __LINE__);
    return false;
}

if((Surf_Display = SDL_SetVideoMode(WWIDTH, WHEIGHT, 32, SDL_HWSURFACE | SDL_GL_DOUBLEBUFFER | SDL_OPENGL)) == NULL) {
    fprintf(stderr, "%s:%d\n SDL_SetVideoMode call failed.\n", __FILE__, __LINE__);
    return false;
}

// Init GL system
glClearColor(0, 0, 0, 1);
glClearDepth(1.0f);
glViewport(0, 0, WWIDTH, WHEIGHT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, WWIDTH, WHEIGHT, 0, 1, -1);
glMatrixMode(GL_MODELVIEW);
glEnable(GL_TEXTURE_2D);
glHint(GL_POINT_SMOOTH, GL_NICEST);
glHint(GL_LINE_SMOOTH, GL_NICEST);
glHint(GL_POLYGON_SMOOTH, GL_NICEST);
glEnable(GL_POINT_SMOOTH);
glEnable(GL_LINE_SMOOTH);
glEnable(GL_POLYGON_SMOOTH);
glLoadIdentity();
Making sure that glEnable(GL_BLEND); has been called did not help either.
Another note, calling SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 2); had no effect on the pixel height either.
I found the answer to what went wrong.
This code caused the error:
glHint(GL_POINT_SMOOTH, GL_NICEST);
glHint(GL_LINE_SMOOTH, GL_NICEST);
glHint(GL_POLYGON_SMOOTH, GL_NICEST);
These are not the correct flags to give to glHint.
This code fixed it and got rid of an enum error (GL_INVALID_ENUM) that OpenGL was throwing. I found that one while debugging a whole other issue. Serves me right for not checking error states!
glHint(GL_POINT_SMOOTH_HINT, GL_NICEST);
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST);
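As a general habit, a quick glGetError() check after initialization would have caught this immediately; a minimal sketch:

GLenum err;
while ((err = glGetError()) != GL_NO_ERROR) {
    // the bad glHint targets above generate GL_INVALID_ENUM (0x0500)
    fprintf(stderr, "OpenGL error: 0x%04X\n", err);
}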

Why is my HUD not rendering in OpenGL?

I'm new to OpenGL, and I've been going through NeHe's tutorials and various other web sources, and I'm testing some things to render text as a HUD of sorts over everything else.
After a very long night, I can't get this to work and I can't find any solutions here that work, so I thought I'd ask.
My code:
GLvoid glLoadHUD(GLvoid)
{
    glPushAttrib(GL_LIGHTING_BIT | GL_DEPTH_BUFFER_BIT | GL_TEXTURE_BIT);
    glDisable(GL_LIGHTING);
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_TEXTURE_2D);

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0.0f, 1.0f, 0.0f, 1.0f, -1.0f, 1.0f);

    glRasterPos2f(0.1f, 0.6f);
    glColor3f(1.0f, 1.0f, 1.0f);
    glPrint("Test.");
    glRasterPos2f(0.0f, 0.0f);

    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);

    glPopAttrib();
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_LIGHTING);
    glEnable(GL_DEPTH_TEST);
}
That is the code to render the text; this is the code for drawing the scene:
int DrawGLScene(GLvoid)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clears buffers
    glLoadIdentity();

    // If I put glLoadHUD(); here, it renders but the models render over it,
    // which is useless.

    for (xloop = 0; xloop < 3;)
    {
        glLoadIdentity();
        glTranslatef(-4.0f + (float(xloop) * 4.0f), 0.0f, -12.0f);
        glCallList(dlstBox); // This is the call to create a box.
        xloop++;
    }

    glLoadHUD(); // If I put it here though, it doesn't render at all.
    return TRUE;
}
Thank you in advance for any help you could give, I know I'm pretty green and I'm sure it's staring me right in the face, but this is driving me mad and I'm not sure how to make it work.
With glLoadHud after the rest of the scene, your MODELVIEW matrix is still on the stack, and you do not clear it as part of glLoadHud. Thus all of the glTranslatef translations that you accumulate during the scene are still active when you're drawing the hud, which translates it right out of your viewable window.
Clear the MODELVIEW matrix as part of the start of glLoadHud and see if that makes a difference.
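A sketch of that change, inserted near the top of glLoadHUD (with the matching pop before the existing glPopAttrib):

glMatrixMode(GL_MODELVIEW);
glPushMatrix();   // save the scene's transform
glLoadIdentity(); // draw the HUD from a clean modelview matrix

// ... ortho setup, raster position, glPrint as before ...

glMatrixMode(GL_MODELVIEW);
glPopMatrix();    // restore the scene's transform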
It might be drawing outside your z-clipping range, so it will not show up on your screen. Move it out from the screen a little bit and see if it shows up.

Rendering from SDL and OpenGL at the Same Time

Hey all, I'm very new to OpenGL (I just started seriously programming with it today) and I'm trying to use it to give my SDL games a 3D boost. I've set up a small test program below:
#include <SDL/SDL.h>
#include <gl/gl.h>

int main(int argc, char *argv[])
{
    SDL_Event event;
    float theta = 0.0f;

    SDL_Init(SDL_INIT_VIDEO);
    SDL_Surface *screen = SDL_SetVideoMode(800, 600, 32, SDL_OPENGL | SDL_HWSURFACE | SDL_RESIZABLE | SDL_FULLSCREEN);

    glViewport(0, 0, 800, 600);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClearDepth(1.0);
    glDepthFunc(GL_LESS);
    glEnable(GL_DEPTH_TEST);
    glShadeModel(GL_SMOOTH);
    glMatrixMode(GL_PROJECTION);
    glMatrixMode(GL_MODELVIEW);

    int done;
    for(done = 0; !done;)
    {
        SDL_FillRect(screen, 0, SDL_MapRGB(screen->format, 255, 0, 0));

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glLoadIdentity();
        glTranslatef(0.0f, 0.0f, 0.0f);
        glRotatef(theta, 0.0f, 0.0f, 1.0f);

        glBegin(GL_TRIANGLES);
        glColor3f(0.83f, 0.83f, 0.0f);
        glVertex2f(0.0f, 1.0f);
        glColor3f(0.83f, 0.83f, 0.0f);
        glVertex2f(0.87f, -0.5f);
        glColor3f(0.83f, 0.83f, 0.0f);
        glVertex2f(-0.87f, -0.5f);
        glEnd();

        theta += 10.0f;

        SDL_Flip(screen);
        SDL_GL_SwapBuffers();

        SDL_PollEvent(&event);
        if(event.key.keysym.sym == SDLK_ESCAPE)
            done = 1;
    }
}
My problem is that the red background I'm trying to render never appears; only the OpenGL triangle is rendered.
Thanks in advance to anyone who can help me. It's much appreciated.
There's one simple rule about OpenGL: it doesn't play well with others. What happens in your case is that the double-buffer swap (initiated by SDL_GL_SwapBuffers) will, in some way, replace everything in the window that was not rendered by OpenGL.
Just draw everything using OpenGL.
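For example, the red background can come from OpenGL itself by clearing to red instead of calling SDL_FillRect (a minimal sketch of the changed loop body; everything else stays as posted):

glClearColor(1.0f, 0.0f, 0.0f, 1.0f);               // red instead of black
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // replaces SDL_FillRect
// ... draw the triangle as before ...
SDL_GL_SwapBuffers();                               // and drop SDL_Flip entirely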
You fill the back buffer on one line with SDL_FillRect, then you clear it on the next with glClear. Have you tried swapping the order of those two operations?
Not that I disagree with the accepted answer; in general trying to mix software rendering methods with OpenGL is a recipe for confusion at best, but you might get lucky in this case.
As for rendering textured quads, you should be able to work it out from NeHe lesson 6. People complain about NeHe, but it's a reasonable guide for getting started. Just don't use it as an example of good coding or of efficient modern OpenGL usage. Start here and move to more complex stuff later.
If you're using C++, the SFML library might be a better option (it has C bindings too, though I haven't tried those). It plays nicely with OpenGL and has functions to cooperatively work alongside GL. As far as I understand it, SFML's own drawing functions use GL to render. Although, I do suggest that you do rendering only with GL calls, as noted above.
Your SDL_FillRect isn't shown as red because you call glClear with GL_COLOR_BUFFER_BIT set afterwards.

Alpha/texturing issues in an OpenGL wrapper

I'm in the process of writing a wrapper for some OpenGL functions. The goal is to wrap the context used by the game Neverwinter Nights, in order to apply post-processing shader effects. After learning OpenGL (this is my first attempt to use it) and much playing with DLLs and redirection, I have a somewhat working system.
However, when the post-processing fullscreen quad is active, all texturing and transparency drawn by the game are lost. This shouldn't be possible, because all my functions take effect after the game has completely finished its own rendering.
The code does not use renderbuffers or framebuffers (both refused to compile on my system in any way, with or without GLEW or GLee, despite being supported and usable by other programs). Eventually, I put together this code to handle copying the texture from the buffer and rendering a fullscreen quad:
extern "C" SEND BOOL WINAPI hook_wglSwapLayerBuffers(HDC h, UINT v)
{
if ( frameCount > 250 )
{
frameCount++;
if ( frameCount == 750 ) frameCount = 0;
if ( nwshader->thisframe == NULL )
{
createTextures();
}
glBindTexture(GL_TEXTURE_2D, nwshader->thisframe);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, nwshader->width, nwshader->height, 0);
glClearColor(0.0f, 0.5f, 0.0f, 0.5f);
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_TEXTURE_2D);
glDisable(GL_DEPTH_TEST);
glBlendFunc(GL_ONE, GL_ZERO);
glEnable(GL_BLEND);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho( 0, nwshader->width , nwshader->height , 0, -1, 1 );
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
glBegin(GL_POLYGON);
glTexCoord2f(0.0f, 1.0f);
glVertex2d(0, 0);
glTexCoord2f(0.0f, 0.0f);
glVertex2d(0, nwshader->height);
glTexCoord2f(1.0f, 0.0f);
glVertex2d(nwshader->width, nwshader->height);
glTexCoord2f(1.0f, 1.0f);
glVertex2d(nwshader->width, 0);
glEnd();
glMatrixMode( GL_PROJECTION );
glPopMatrix();
glMatrixMode( GL_MODELVIEW );
glPopMatrix();
glEnable(GL_DEPTH_TEST);
glDisable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);
} else {
frameCount++;
}
if ( h == grabbedDevice )
{
Log->logline("Swapping buffer on cached device.");
}
return wglSwapLayerBuffers(h,v);
}
This code functions almost perfectly and causes no notable slow-down. However, when it is active (I added the frameCount condition to turn it on and off every ~5 seconds), all alpha and texturing are completely ignored by the game renderer. I'm not turning off any kind of blending or texturing before this function (the only other OpenGL calls elsewhere are the ones that create the nwshader->thisframe texture).
I was able to catch a few screenshots of what's happening:
Broken A: http://i4.photobucket.com/albums/y145/peachykeen000/outside_brokenA.png
Broken B: http://i4.photobucket.com/albums/y145/peachykeen000/outside_brokenB.png
(note, in B, the smoke in the back is not broken, it is correctly transparent. So is the HUD.)
Broken Interior: http://i4.photobucket.com/albums/y145/peachykeen000/transparency_broken.png
Correct Interior (for comparison): http://i4.photobucket.com/albums/y145/peachykeen000/transparency_proper.png
The drawing of the quad also breaks menus, turning the whole thing into a black surface with a single white box. I suspect it is a problem with either depth or how the game is drawing certain objects, or a state that is not being reset properly. I've used GLintercept to dump a full log of all calls in a frame, and didn't see anything wrong (the call to wglSwapLayerBuffers is always last).
Being brand new to working with OpenGL, I really have no clue what's going wrong (or how to fix it) and nothing I've tried has helped. What am I missing?
I don't quite understand how your code is supposed to integrate with the Neverwinter Nights code. However...
It seems like you're most likely changing some setting that the existing code didn't expect to change.
Based on the description of the problem, I'd try removing the following line:
glDisable(GL_TEXTURE_2D);
That line disables textures, which certainly sounds like the problem you're seeing.
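If removing it alone doesn't help, a conservative option is to save and restore the fixed-function state around the overlay with glPushAttrib/glPopAttrib, the same pattern the font and HUD code above use (a sketch; the exact bit mask is an assumption about which state the game cares about):

// Save enable flags (texturing, depth test, blend), the blend function and
// clear color, and the current texture binding before touching any state.
glPushAttrib(GL_ENABLE_BIT | GL_COLOR_BUFFER_BIT | GL_TEXTURE_BIT);

// ... copy the frame and draw the fullscreen quad as above ...

glPopAttrib(); // every enable/disable and blend change made above is undone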