I'm having problems with drawing in OpenGL and I need to see exactly what values are being placed in the depth buffer. Can anyone tell me how to retrieve these values?
Thanks
Chris
Use glReadPixels with format = GL_DEPTH_COMPONENT, for example:
float depth;
glReadPixels(0, 0, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
This will read the depth of the pixel at (0, 0).
Following this tutorial, I am performing shadow mapping on a 3D scene. Now I want
to manipulate the raw texel data of shadowMapTexture (see the excerpt below) before
applying it, using ARB extensions.
//Textures
GLuint shadowMapTexture;
...
...
**CopyTexSubImage2D** is used to copy the contents of the frame buffer into a
texture. First we bind the shadow map texture, then copy the viewport into the
texture. Since we have bound a **DEPTH_COMPONENT** texture, the data read will
automatically come from the depth buffer.
//Read the depth buffer into the shadow map texture
glBindTexture(GL_TEXTURE_2D, shadowMapTexture);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, shadowMapSize, shadowMapSize);
N.B. I am using OpenGL 2.1 only.
You can do it in two ways:
float* texels = ...;
glBindTexture(GL_TEXTURE_2D, shadowMapTexture);
glTexSubImage2D(GL_TEXTURE_2D, 0, x,y,w,h, GL_DEPTH_COMPONENT, GL_FLOAT, texels);
or
Attach your shadowMapTexture to a (write) framebuffer and call:
float* pixels = ...;
glRasterPos2i(x, y);
glDrawPixels(w, h, GL_DEPTH_COMPONENT, GL_FLOAT, pixels);
Don't forget to disable GL_DEPTH_TEST first when using this method.
I mapped texture coordinates like:
static float texCoord[] = {
    0, 1,
    1, 1,
    1, 0,
    0, 0
};
And by drawing it:
void Rectangle::Draw()
{
    const float vertices[] = {
        x, y,
        x + width, y,
        x, y - height,
        x + width, y - height
    };

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glColor3ub(255, 255, 255);
    glVertexPointer(2, GL_FLOAT, 0, vertices);
    glTexCoordPointer(2, GL_FLOAT, 2, texCoord);

    if (IsTypeHorizontal()) glBindTexture(GL_TEXTURE_2D, texture_H);
    else /* (IsTypeVertical()) */ glBindTexture(GL_TEXTURE_2D, texture_V);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
A texture drawn vertically (height > width) looks correct, but drawn horizontally (height < width) the texture appears inverted. Even if I use separate texture coordinate arrays texCoord_H and texCoord_V, the image is still drawn inverted.
What do I still need to know? What is the problem in my code?
P.S. I upload the texture to OpenGL using SOIL.
Try:
static float texCoord[] = {
    0, 1,
    1, 1,
    0, 0,
    1, 0
};
Most OpenGL newbies get confused by the fact that, with the usual set of projection, transform and viewport mapping parameters, OpenGL considers the image origin to be in the lower left, with coordinates increasing toward the right and up. This is in contrast to most computer graphics systems, which assume the origin to be in the upper left with the vertical dimension increasing downwards.
That is probably your issue.
After many rounds of changing and swapping texture coordinates, I realized I had wasted my time. The real problem is here:
glTexCoordPointer(2, GL_FLOAT, 2, texCoord);
The third parameter (which is the stride) causes the bug; it should be 0, meaning the array is tightly packed.
I have tried to read the front and back buffers into different destination buffers: the back buffer before the swap, and the front buffer after the swap.
glReadBuffer(GL_BACK);
glReadPixels(0, 0, 1, 1, GL_BGRA, GL_UNSIGNED_BYTE, buffer_back);
SimpleGLContext::instance().swapBuffers();
glReadBuffer(GL_FRONT);
glReadPixels(0, 0, 1, 1, GL_BGRA, GL_UNSIGNED_BYTE, buffer_front);
Here buffer_back holds the BGRA values correctly, but buffer_front still gives only null values. Please advise. Thanks in advance.
I am trying to read pixels produced by a fragment shader, and I have some questions.
I know that gl_FragColor takes a vec4, meaning RGBA, i.e. 4 channels.
After that, I use glReadPixels to read the FBO into a buffer:
GLubyte *pixels = new GLubyte[640*480*4];
glReadPixels(0, 0, 640,480, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
This works fine, but it really has a speed issue. Instead, I want to read just RGB and ignore the alpha channel. I tried:
GLubyte *pixels = new GLubyte[640*480*3];
glReadPixels(0, 0, 640,480, GL_RGB, GL_UNSIGNED_BYTE, pixels);
instead, but this didn't work. I guess it's because gl_FragColor outputs 4 channels, and maybe I need to do something before the read? Actually, since my output image (gl_FragColor) is grayscale, I do something like:
float gray = 0.5; // or some other value
gl_FragColor = vec4(gray,gray,gray,1.0);
So is there a more efficient way to use glReadPixels than the 4-channel approach above? Any suggestions? By the way, this is OpenGL ES 2.0 code.
The OpenGL ES 2.0 spec says that there are two valid forms of the call:
glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
or
GLint format, type;
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &format);
glGetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &type);
glReadPixels(x, y, w, h, format, type, pixels);
The spec lists the possible combinations of format and type in a table, and the implementation decides which combination is available to you.
However, it's likely that if you create a rendering surface of an appropriate format, that will be the format you obtain here. See if you can modify your code to obtain an RGB framebuffer (i.e. with 0 bits for the alpha channel), or perhaps create an offscreen framebuffer object for that purpose.
I am getting wrong texture values from glCopyTexImage2D(). I attached a depth texture to an FBO and read its values in the rendering pass. I expected a result like the one below (x: background, y: correct pixel):
---yyyyyyyyyyyyyyyyyyyy-----
-----------yyyyyyyyyyy------
yyyyyy----------y-----------
yyyyyyyyyy-------y----y-yyyy
yyyyyyyyyyyyyyyyyyyyyy------
but my result is like this:
---yyyyyyyyyyyyyyyyyyyy-----
-----------yyyyyyyyyyy------
yyyyyy----------y-----------
----------------------------
----------------------------
From the middle of the texture to the end, only background pixels are present. Of course, from the top-left to the middle I get a correct result.
The texture creation code is below:
glGenTexture(1, &tex);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_TEXTURE, w, h, 0, GL_DEPTH_TEXTURE, GL_FLOAT, 0);
and in rendering code,
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_TEXTURE, 0, 0, w, h, 0);
Is there a mistake in my use of glCopyTexImage2D? w and h are fixed, and I don't know why this happens.
First, I have no idea how what you've posted compiles, as there is no such thing as GL_DEPTH_TEXTURE. There is GL_DEPTH_COMPONENT, however.
Second, you should always use sized depth formats. So instead of GL_DEPTH_COMPONENT, use GL_DEPTH_COMPONENT24 to get a 24-bit depth buffer. Note that the pixel transfer format (the argument third from the end) should still be GL_DEPTH_COMPONENT.
Third, you should use glCopyTexSubImage2D, so that you're not reallocating the texture memory all the time.
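Putting those three points together, the creation and copy code might look like this (untested sketch; assumes w, h and tex as in your question, and a GL version that supports GL_DEPTH_COMPONENT24):

```c
/* Creation: sized internal format, unsized pixel transfer format. */
glGenTextures(1, &tex);  /* note: glGenTextures, not glGenTexture */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, w, h, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, 0);

/* Per frame: update the existing storage instead of reallocating it. */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, w, h);
```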