I want to read the pixels from the back buffer, but all I get so far is a black screen (the clear color).
The thing is, I don't need a GLUT window to draw to. Once I have the pixel information, I pass it to another program which will draw the image for me.
My init function looks like this:
// No main function, so no real argc/argv
char fakeParam[] = "nothing";
char *fakeargv[] = { fakeParam, NULL };
int fakeargc = 1;

glutInit(&fakeargc, fakeargv);

GLenum err = glewInit();
if (GLEW_OK != err)
{
    MessageBoxA(NULL, "Failed to initialize OpenGL", "ERROR", NULL);
}
else
{
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_DEPTH_TEST);
    // Not sure if this call is needed since I don't use a GLUT window to render to...
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
}
Then in my render function I do:
void DisplayFunc(void)
{
    /* Clear the buffer, clear the matrix */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();

    // TEAPOT
    glTranslatef(0.0f, 0.0f, -5.0f);              // Translate back 5 units
    glRotatef(rotation_degree, 1.0f, 1.0f, 0.0f); // Rotate according to our rotation_degree value
    glFrontFace(GL_CW);
    glutSolidTeapot(1.0f);                        // Render a teapot
    glFrontFace(GL_CCW);

    glReadBuffer(GL_BACK);
    glReadPixels(0, 0, (GLsizei)1024, (GLsizei)768, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    int r = glGetError();
}
This is basically all I do. At the end of the last function is where I try to read all the pixels, but the output is just a black image. glGetError() doesn't report any errors.
Does anyone have an idea what the problem could be?
I want to read the pixels from the back buffer, but all I get so far is a black screen (the clear color).
The thing is, I don't need a GLUT window to draw to. Once I have the pixel information, I pass it to another program which will draw the image for me.
It doesn't work like that. The back buffer is not some kind of off-screen rendering area; it's part of an on-screen window. In fact the whole double-buffer concept only makes sense for on-screen windows: each pixel of a double-buffered window has two color values but only one depth, stencil, etc. value, and upon buffer swap only the pointers to the front and back pixel planes are exchanged. But because we're still talking about a window, every fragment produced during rasterization goes through the pixel ownership test, i.e. it is checked whether it is actually visible on screen. If not, it is not rendered.
But your problems go further: you don't even create a window, so you don't have an OpenGL context at all. Calling OpenGL commands without a context has no effect whatsoever. glReadPixels doesn't return anything because there's nothing to read from.
The bad news is that the only way to get a context with GLUT is by creating a window. The good news is that you don't have to use GLUT. Keep in mind that GLUT is not part of OpenGL; it's a quick-and-dirty framework for writing small tutorials, nothing more.
What you want is either:
not a window, but a PBuffer, i.e. an off-screen drawable that doesn't go through the pixel ownership test,
or
a hidden window with an OpenGL context created on it, and in that context a framebuffer object (FBO) as an off-screen rendering target.
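For the second option, a minimal sketch could look roughly like this (assuming GLEW and freeglut are available; the window exists only to obtain a context and is never shown):
#include <GL/glew.h>
#include <GL/glut.h>
#include <vector>

int main(int argc, char** argv)
{
    const int W = 1024, H = 768;

    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(W, H);
    int win = glutCreateWindow("offscreen"); // needed only to get a GL context
    glutHideWindow();                        // the window never has to become visible
    glewInit();

    // Color and depth renderbuffers attached to an FBO
    GLuint fbo, colorRb, depthRb;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glGenRenderbuffers(1, &colorRb);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8, W, H);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRb);

    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, W, H);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);
    // check glCheckFramebufferStatus(GL_FRAMEBUFFER) here in real code

    // ... set up projection/modelview and draw the scene, just like in DisplayFunc ...

    // Read back from the FBO's color attachment instead of the window's back buffer
    std::vector<unsigned char> pixels(W * H * 3);
    glReadPixels(0, 0, W, H, GL_RGB, GL_UNSIGNED_BYTE, &pixels[0]);

    glutDestroyWindow(win);
    return 0;
}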
Try calling glFlush before glReadPixels.
Also, where do you set the size of your window?
Related
My task is to render a set of 50 RGB frames using OpenGL's GLUT library.
What I tried: for the 3D cube rotation I have a set of vertices which I render to the window. But what should be done in the case of rendering RGB frames? Below is the code I use to render my 3D cube:
#include <glut.h>
GLfloat vertices[24]={-1.0,-1.0,-1.0,1.0,-1.0,-1.0,1.0,1.0,-1.0,-1.0,1.0,-1.0,-1.0,-1.0,1.0,1.0,-1.0,1.0,1.0,1.0,1.0,-1.0,1.0,1.0};
GLfloat colors[24]={-1.0,-1.0,-1.0,1.0,-1.0,-1.0,1.0,1.0,-1.0,-1.0,1.0,-1.0,-1.0,-1.0,1.0,1.0,-1.0,1.0,1.0,1.0,1.0,-1.0,1.0,1.0};
GLubyte cubeIndices[24]={0,3,2,1,2,3,7,6,0,4,7,3,1,2,6,5,4,5,6,7,0,1,5,4};
static GLfloat theta[3]={0,0,0};
static GLint axis=2;
void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glRotatef(theta[0], 1.0, 0.0, 0.0);
    glRotatef(theta[1], 0.0, 1.0, 0.0);
    glRotatef(theta[2], 0.0, 0.0, 1.0);
    glDrawElements(GL_QUADS, 24, GL_UNSIGNED_BYTE, cubeIndices);
    glutSwapBuffers();
    glFlush();
}

void spinCude()
{
    theta[axis] += 2.0;
    if(theta[axis] > 360.0)
        theta[axis] -= 360.0;
    display();
}

void init()
{
    glMatrixMode(GL_PROJECTION);
    glOrtho(-2.0, 2.0, -2.0, 2.0, -10.0, 10.0);
    glMatrixMode(GL_MODELVIEW);
}

void mouse(int btn, int state, int x, int y)
{
    if(btn == GLUT_LEFT_BUTTON && state == GLUT_DOWN) axis = 0;
    if(btn == GLUT_MIDDLE_BUTTON && state == GLUT_DOWN) axis = 1;
    if(btn == GLUT_RIGHT_BUTTON && state == GLUT_DOWN) axis = 2;
}

void main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(500, 500);
    glutCreateWindow("Simple YUV Player");
    init();
    glutDisplayFunc(display);
    glutIdleFunc(spinCude);
    glutMouseFunc(mouse);
    glEnable(GL_DEPTH_TEST);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    //glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, colors);
    glutMainLoop();
}
Can anyone suggest an example or tutorial showing how I can modify the above code to display RGB frames?
Once you have your RGB frame as raw data in memory, things are pretty straightforward: create a texture using glGenTextures, bind it using glBindTexture and upload the data via glTexImage2D or glTexSubImage2D. Then render a fullscreen quad (or whatever you like) with that texture. The benefit of this is that you could render multiple 'virtual' TVs in your scene just by rendering multiple quads with the same texture; imagine a TV store where the same video runs on dozens of TVs. A sketch follows below.
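A rough sketch of that (frameWidth, frameHeight and frameData are placeholder names for your own frame dimensions and raw RGB bytes; the quad assumes an identity projection):
// Create the texture once
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // tightly packed RGB rows
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, frameWidth, frameHeight, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL); // allocate storage, no data yet

// Each frame: upload the new data and draw a textured fullscreen quad
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frameWidth, frameHeight,
                GL_RGB, GL_UNSIGNED_BYTE, frameData);
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f, -1.0f); // V flipped: frame data is usually top-down
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f,  1.0f);
glEnd();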
glDrawPixels might also work but it is much less versatile.
I don't know if uploading via texture is the way to go (hardware accelerated movie playback programs like VLC are most likely doing something far more advanced), but it should be a good start.
As Marius already suggested, implement texture mapping first. It's rather straightforward; any texture-mapping tutorial will do.
Uploading frames is not something OpenGL is great at, and you should avoid it as much as you can, since it may involve a CPU-to-GPU memory copy, which is really costly (takes too much time), or it may simply take up too much memory. Anyway, if you really have to do it, just generate as many textures as you need with glGenTextures, load the frames into them with glTexImage2D, and then flip through the textures with a simple loop, one per frame; see the sketch below.
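A sketch of that approach (NUM_FRAMES, frameWidth, frameHeight and frameData are placeholders for your own frame count, dimensions and raw RGB data):
GLuint frameTex[NUM_FRAMES];
glGenTextures(NUM_FRAMES, frameTex);
for (int i = 0; i < NUM_FRAMES; ++i)
{
    glBindTexture(GL_TEXTURE_2D, frameTex[i]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, frameWidth, frameHeight, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, frameData[i]);
}
// In the display callback: bind frameTex[currentFrame], draw the textured quad,
// then advance currentFrame = (currentFrame + 1) % NUM_FRAMES.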
P.S. Judging by your application's name, "YUV Player", you may also need to convert the input data, since OpenGL mostly works with RGB, not YUV; a rough conversion sketch follows below.
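For reference, a per-pixel conversion sketch (BT.601-style coefficients, full range assumed; the exact formula and plane layout depend on your actual source format, e.g. YUV420p):
static unsigned char Clamp255(int v) { return (unsigned char)(v < 0 ? 0 : (v > 255 ? 255 : v)); }

void YuvToRgb(int y, int u, int v, unsigned char* r, unsigned char* g, unsigned char* b)
{
    int d = u - 128;
    int e = v - 128;
    *r = Clamp255(y + (int)(1.402f * e));
    *g = Clamp255(y - (int)(0.344f * d) - (int)(0.714f * e));
    *b = Clamp255(y + (int)(1.772f * d));
}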
I've been trying to incorporate shaders and OpenGL into a wxWidgets program. I've used the links below:
http://nehe.gamedev.net/article/glsl_an_introduction/25007/
http://www.lighthouse3d.com/tutorials/glsl-tutorial/hello-world-in-glsl/
Now, in a test program, I've been trying to use the shaders provided by the lighthouse3d tutorial and recreate its output (a blue teapot spinning slowly on a white background). I can't seem to get anything to draw, though, and all I can see is a black screen. My code so far is below (I'm going to ignore most of the shaders initially as I'm 99% sure they're fine):
void BasicGLPane::render( wxPaintEvent& evt )
{
    //wxGLCanvas::SetCurrent(*m_context);
    wxPaintDC(this);

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    //prepare2DViewport(0,0,getWidth()/2, getHeight());
    glLoadIdentity();

    gluLookAt(0.0, 0.0, 5.0,
              0.0, 0.0, -1.0,
              0.0f, 1.0f, 0.0f);

    glLightfv(GL_LIGHT0, GL_POSITION, lpos);
    //glRotatef(a,0,1,1);
    glutSolidTeapot(1);
    glFlush();
    //a+=0.1;
    SwapBuffers();
}
void BasicGLPane::InitializeGLEW()
{
    //prepare2DViewport(0,0,getWidth(), getHeight());
    // The current canvas has to be set before GLEW can be initialized.
    wxGLCanvas::SetCurrent(*m_context);

    GLenum err = glewInit();
    // If GLEW doesn't initialize correctly.
    if(GLEW_OK != err)
    {
        std::cerr << "Error: " << glewGetErrorString(err) << std::endl;
        wxMessageBox("GLEW is not initialized");
    }
}
BasicGLPane::BasicGLPane(wxFrame* parent, int* args) :
wxGLCanvas(parent, wxID_ANY, args, wxDefaultPosition, wxDefaultSize, wxFULL_REPAINT_ON_RESIZE)
{
m_context = new wxGLContext(this);
// To avoid flashing on MSW
SetBackgroundStyle(wxBG_STYLE_CUSTOM);
}
I've had some thoughts as to why I'm not getting any output. One thought is that it's something to do with m_context: I have to set the current context for wxWidgets before I can initialize GLEW. There are also a number of properties that the tutorial initializes which I'm not using in my wxWidgets version, and I'm wondering if I should. These are:
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
glutInitWindowPosition(100,100);
glutInitWindowSize(320,320);
glutCreateWindow("MM 2004-05");
glutDisplayFunc(renderScene);
glutIdleFunc(renderScene);
glutReshapeFunc(changeSize);
glutKeyboardFunc(processNormalKeys);
glEnable(GL_DEPTH_TEST);
glClearColor(1.0,1.0,1.0,1.0);
glEnable(GL_CULL_FACE);
But I'm quite keen to avoid using GLUT and have managed to avoid it up until now. The only reason I previously added it was to try to replicate the tutorial's behaviour.
Edit:
I'm going to add a bit more as I have noticed one or two bits of odd behaviour. If I call this function in my draw:
void BasicGLPane::prepare2DViewport(int topleft_x, int topleft_y, int bottomrigth_x, int bottomrigth_y)
{
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f); // White background
    glEnable(GL_TEXTURE_2D);              // textures
    glEnable(GL_COLOR_MATERIAL);
    glEnable(GL_BLEND);
    glDisable(GL_DEPTH_TEST);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glViewport(topleft_x, topleft_y, bottomrigth_x - topleft_x, bottomrigth_y - topleft_y);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(topleft_x, bottomrigth_x, bottomrigth_y, topleft_y);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}
I can get the background to change colour when I change the window size. I should also mention that it's NOT refreshing every frame; it only draws one frame and then won't call the render function again until I change the window size.
Your code looks good so far. One thing you find in a lot of tutorials, but which is bad practice, is that there's apparently some "initialization" happening. This is not the case: OpenGL is not initialized, it's a state machine, and you're supposed to set state when you need it. The lines
glEnable(GL_DEPTH_TEST);
glClearColor(1.0,1.0,1.0,1.0);
glEnable(GL_CULL_FACE);
are perfectly fine in the drawing function. You also need to set up a projection. In tutorials you often find it being set in the window resize handler. Please don't fall into this bad habit: projection and viewport are drawing state, so set them in the drawing function, as in the sketch below.
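For example, a sketch of how your draw handler could set them every frame (GetClientSize() is the wxWidgets canvas size; the gluPerspective parameters are placeholders):
void BasicGLPane::render(wxPaintEvent& evt)
{
    wxPaintDC dc(this);
    wxGLCanvas::SetCurrent(*m_context);

    const wxSize size = GetClientSize();
    glViewport(0, 0, size.GetWidth(), size.GetHeight());

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, (double)size.GetWidth() / size.GetHeight(), 0.1, 100.0);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glEnable(GL_DEPTH_TEST);
    glClearColor(1.0, 1.0, 1.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // ... gluLookAt, lighting, teapot ...

    SwapBuffers();
}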
If you're using OpenGL 3 (core profile) or later, you must supply at least a vertex and a fragment shader. In older versions each shader stage is optional, and there are built-in variables that provide common ground for communication between the fixed-function and programmable pipelines. However, I strongly advise against mixed operation: always use shaders, and use both a vertex and a fragment shader. In the long term they make things so much easier.
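A minimal GLEW-based sketch of setting up such a program (error checking omitted; vsSource and fsSource stand for the GLSL strings loaded from the tutorial's shader files):
GLuint CompileProgram(const char* vsSource, const char* fsSource)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vsSource, NULL);
    glCompileShader(vs); // check GL_COMPILE_STATUS in real code

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fsSource, NULL);
    glCompileShader(fs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog); // check GL_LINK_STATUS in real code
    return prog;
}
// Later, before drawing: glUseProgram(prog);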
Turns out I didn't need the gluLookAt in my render.
When I render my text using TTF_RenderUTF8_Blended I obtain a solid rectangle on the screen. The color depends on the one I choose; in my case the rectangle is red.
My question
What am I missing? It seems like I'm not getting the proper Alpha values from the surface generated with SDL_DisplayFormatAlpha(TTF_RenderUTF8_Blended( ... )), or am I? Does anyone recognize or know the problem?
Additional information
If I use TTF_RenderUTF8_Solid or TTF_RenderUTF8_Shaded the text is drawn properly, but not blended of course.
I am also drawing other textures on the screen, so I draw the text last to ensure the blending will take into account the current surface.
Edit: SDL_Color g_textColor = {255, 0, 0, 0}; <-- I tried with and without the alpha value, but I get the same result.
I have tried to summarize the code without removing too many details. Variables prefixed with "g_" are global.
Init() function
// This function creates the required texture.
bool Init()
{
    // ...

    g_pFont = TTF_OpenFont("../arial.ttf", 12);
    if(g_pFont == NULL)
        return false;

    // Write text to surface
    g_pText = SDL_DisplayFormatAlpha(TTF_RenderUTF8_Blended(g_pFont, "My first Text!", g_textColor)); //< Doesn't work
    // Note that Solid and Shaded DO work properly if I uncomment them.
    //g_pText = SDL_DisplayFormatAlpha(TTF_RenderUTF8_Solid(g_pFont, "My first Text!", g_textColor));
    //g_pText = SDL_DisplayFormatAlpha(TTF_RenderUTF8_Shaded(g_pFont, "My first Text!", g_textColor, g_bgColor));
    if(g_pText == NULL)
        return false;

    // Prepare the texture for the font
    GLenum textFormat;
    if(g_pText->format->BytesPerPixel == 4)
    {
        // alpha
        if(g_pText->format->Rmask == 0x000000ff)
            textFormat = GL_RGBA;
        else
            textFormat = GL_BGRA_EXT;
    }

    // Create the font's texture
    glGenTextures(1, &g_FontTextureId);
    glBindTexture(GL_TEXTURE_2D, g_FontTextureId);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, g_pText->format->BytesPerPixel, g_pText->w, g_pText->h, 0, textFormat, GL_UNSIGNED_BYTE, g_pText->pixels);

    // ...
}
DrawText() function
// This function is called each frame
void DrawText()
{
    SDL_Rect sourceRect;
    sourceRect.x = 0;
    sourceRect.y = 0;
    sourceRect.h = 10;
    sourceRect.w = 173;

    // DestRect is null so the rect is drawn at 0,0
    SDL_BlitSurface(g_pText, &sourceRect, g_pSurfaceDisplay, NULL);

    glBindTexture(GL_TEXTURE_2D, g_FontTextureId);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);

    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, 10.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f(173.0f, 10.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f(173.0f, 0.0f);
    glEnd();

    glDisable(GL_BLEND);
    glDisable(GL_TEXTURE_2D);
}
You've made a fairly common mistake. It's on the OpenGL end of things.
When you render the textured quad in DrawText(), you enable OpenGL's blending capability, but you never specify the blending function (i.e. how it should be blended)!
You need this code to enable regular alpha-blending in OpenGL:
glEnable( GL_BLEND );
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
This info used to be on the OpenGL website, but I can't find it now.
That should stop it from coming out solid red. The reason the others worked is that they're not alpha-blended; they're actually just red-on-black images with no alpha, so the blending function doesn't matter. But the blended one contains only red color, with an alpha channel to make it less red.
I notice a few other small problems in your program though.
In the DrawText() function, you are blitting the surface using SDL and rendering with OpenGL. You should not use regular SDL blitting when using OpenGL; it doesn't work. So this line should not be there:
SDL_BlitSurface(g_pText, &sourceRect, g_pSurfaceDisplay, NULL);
Also, this line leaks memory:
g_pText = SDL_DisplayFormatAlpha( TTF_RenderUTF8_Blended(...) );
TTF_RenderUTF8_Blended() returns a pointer to SDL_Surface, which must be freed with SDL_FreeSurface(). Since you're passing it into SDL_DisplayFormatAlpha(), you lose track of it, and it never gets freed (hence the memory leak).
The good news is that you don't need SDL_DisplayFormatAlpha here because TTF_RenderUTF8_Blended returns a 32-bit surface with an alpha-channel anyway! So you can rewrite this line as:
g_pText = TTF_RenderUTF8_Blended(g_pFont, "My first Text!", g_textColor);
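And once the pixel data has been uploaded with glTexImage2D in Init(), the surface itself can be released (a sketch, assuming you also drop the SDL_BlitSurface call as noted above):
// After glTexImage2D(...) the pixel data lives in the GL texture:
SDL_FreeSurface(g_pText);
g_pText = NULL;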
I need to render a sphere to a texture (done using a Framebuffer Object (FBO)), and then alpha blend that texture with the back buffer. So far I'm not doing any processing with the texture except clearing it at the beginning of every frame.
I should say that my scene consists of nothing but a planet in empty space; the sphere should appear next to or around the planet (kind of like a moon for now). When I render the sphere directly to the back buffer, it displays correctly; but when I do the intermediary step of rendering it to a texture and then blending that texture with the back buffer, the sphere only shows up when it is in front of the planet, and the part that isn't in front is just "cut off".
I render the sphere using glutSolidSphere to an RGBA8 fullscreen texture that's bound to an FBO, making sure that every sphere pixel receives an alpha value of 1.0. I then pass the texture to a fragment shader program, and use this code to render a fullscreen quad, with the texture mapped onto it, to the back buffer with alpha blending enabled:
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
glDisable(GL_DEPTH_TEST);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glBegin(GL_QUADS);
glTexCoord2i(0, 1);
glVertex3i(-1, 1, -1); // TOP LEFT
glTexCoord2i(0, 0);
glVertex3i(-1, -1, -1); // BOTTOM LEFT
glTexCoord2i(1, 0);
glVertex3i( 1, -1, -1); // BOTTOM RIGHT
glTexCoord2i(1, 1);
glVertex3i( 1, 1, -1); // TOP RIGHT
glEnd();
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glEnable(GL_DEPTH_TEST);
glDisable(GL_BLEND);
This is the shader code (taken from an FX file written in Cg):
sampler2D BlitSamp = sampler_state
{
MinFilter = LINEAR;
MagFilter = LINEAR;
MipFilter = LINEAR;
AddressU = Clamp;
AddressV = Clamp;
};
float4 blendPS(float2 texcoords : TEXCOORD0) : COLOR
{
float4 outColor = tex2D(BlitSamp, texcoords);
return outColor;
}
I don't even know whether this is a problem with the depth buffer or with alpha blending, I've tried a lot of combinations of enabling and disabling depth testing (with a depth buffer attached to the FBO) and alpha blending.
EDIT: I tried just rendering a blank fullscreen quad straight to the back buffer, and even that was cropped around the planet's edges. For some reason, enabling depth testing for rendering the quad (that is, removing the lines glDisable(GL_DEPTH_TEST) and glEnable(GL_DEPTH_TEST) in the code above) got rid of the problem, but now everything except the planet and the sphere appears white.
I made sure (and could confirm) that the alpha channel of the texture is 0 at every pixel but the sphere's, so I don't understand where the whiteness could be introduced. (I would also still be interested in an explanation of why enabling depth testing has this effect.)
I see two possible sources of error here:
1. Rendering to the FBO
If the missing pixels are not even present in the FBO after rendering, there must be some mechanism which discarded the corresponding fragments. The OpenGL pipeline includes four different types of fragment tests which can lead to fragments being discarded:
Scissor Test: Unlikely to be the cause, as the scissor test only affects a rectangular portion of the screen.
Alpha Test: Equally unlikely, as your fragments should all have the same alpha value.
Stencil Test: Also unlikely, unless you use stencil operations when drawing the background planet and copy over the stencil buffer from the back buffer to the FBO.
Depth Test: Same as for stencil test.
So there's a good chance that rendering into FBO is not the issue here. But just to be absolutely sure, you should read back your color attachment texture and dump it into a file for inspection. You can use the following function for that:
// Needs <fstream> and <vector>; writes the texture's RGB contents to a PPM file.
void TextureToFile(GLuint texture, const char* filename)
{
    glBindTexture(GL_TEXTURE_2D, texture);

    GLint width, height;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &height);

    std::vector<GLubyte> pixels(3 * width * height);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE, &pixels[0]);

    std::ofstream out(filename, std::ios::out | std::ios::binary);
    out << "P6\n"
        << width << '\n'
        << height << '\n'
        << 255 << '\n';
    out.write(reinterpret_cast<const char*>(&pixels[0]), pixels.size());
}
The resulting file is a portable pixmap (.ppm). Be sure to unbind the FBO before reading back the texture.
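Usage would look something like this (colorTexture is a placeholder for whatever texture you attached as the FBO's color attachment):
glBindFramebuffer(GL_FRAMEBUFFER, 0);        // unbind the FBO first
TextureToFile(colorTexture, "fbo_dump.ppm"); // then dump the color attachment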
2. Texture mapping
Assuming rendering into the FBO works as expected, the only other source of error is blending the texture over the previously rendered scene. There are two scenarios:
a) Fragments get discarded
The possible reasons for fragments to get discarded are the same as in 1.:
Scissor Test: Nope, affects rectangular areas only.
Alpha Test: Probably not; the texels covering the sphere should all have the same alpha value.
Stencil Test: Might be the cause if you use stencil operations/stencil testing when drawing the background planet and the old stencil state is still active.
Depth Test: Might be the cause, but as you already disable it, it really shouldn't have any effect.
So you should make sure that all of these tests are disabled, especially the stencil test.
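For example, before drawing the fullscreen quad:
glDisable(GL_SCISSOR_TEST);
glDisable(GL_ALPHA_TEST);
glDisable(GL_STENCIL_TEST);
glDisable(GL_DEPTH_TEST);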
b) Wrong results from blending
Assuming all fragments reach the back buffer, blending is the only thing which could still cause the wrong result. With your blending function (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), the sphere's fragments have an alpha of 1.0, so the values already in the back buffer are irrelevant for them, and we assume that the alpha values in the texture are correct. So I see no reason why blending should be the root cause here.
Conclusion
In conclusion, the only sensible cause for the observed result seems to be stencil testing. If it's not, I'm out of options :)
I solved it, or at least came up with a workaround.
First off, the whiteness stems from the fact that glClearColor had been set to glClearColor(1.0f, 1.0f, 1.0f, 1000.0f), so everything but the planet wasn't even written to in the end. I now copy the contents of the back buffer (which is the planet, the atmosphere, and the space around it) to the texture before rendering the sphere, and I render the atmosphere and space before that copy/blit operation, so they are included in it. Previously, everything but the planet itself was rendered after my quad, which - when using depth testing - apparently placed everything behind the quad, making it invisible.
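For reference, one way to do such a back-buffer-to-texture copy (just a sketch, not necessarily how the reference implementation does it; sceneTexture, windowWidth and windowHeight are placeholders):
glBindFramebuffer(GL_FRAMEBUFFER, 0); // read from the window's back buffer
glReadBuffer(GL_BACK);
glBindTexture(GL_TEXTURE_2D, sceneTexture);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, windowWidth, windowHeight);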
The reference implementation of the effect I'm trying to achieve has always used this kind of blit operation in its code but I didn't think it was necessary for the effect. Now I feel like there might be no other way...
How exactly can I do a Z-buffer pre-pass with OpenGL?
I've tried this:
glColorMask(0, 0, 0, 0); // disable color buffer writes
// draw scene
glColorMask(1, 1, 1, 1); // re-enable color buffer writes
// draw scene
// flip buffers
But it doesn't work; after doing this I do not see anything. What is the better way to do this?
Thanks
// clear everything
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// z-prepass
glEnable(GL_DEPTH_TEST);  // We want depth testing!
glDepthFunc(GL_LESS);     // We want to get the nearest pixels
glColorMask(0, 0, 0, 0);  // Disable color writes; they're useless here, we only want depth
glDepthMask(GL_TRUE);     // Enable depth writes
draw();

// real render
glEnable(GL_DEPTH_TEST);  // We still want depth testing
glDepthFunc(GL_LEQUAL);   // EQUAL should work, too. (Only draw pixels if they are the closest ones)
glColorMask(1, 1, 1, 1);  // We want color this time
glDepthMask(GL_FALSE);    // Writing the z component is useless now, we already have it
draw();
You're doing the right thing with glColorMask.
However, if you're not seeing anything, it's likely because you're using the wrong depth test function.
You need GL_LEQUAL, not GL_LESS (which happens to be the default).
glDepthFunc(GL_LEQUAL);
If I get you right, you are trying to disable the depth test that OpenGL performs for hidden-surface removal. You are using color functions here, which does not make sense to me. I think you are trying to do the following:
glDisable(GL_DEPTH_TEST); // disable z-buffer
// draw scene
glEnable(GL_DEPTH_TEST); // enable z-buffer
// draw scene
// flip buffers
Do not forget to clear the depth buffer at the beginning of each pass.