How to create a 24-bit depth buffer in OpenGL

I am learning about how to use framebuffers in OpenGL. From what I understand so far, if I want to render an image to a framebuffer I would first create a texture and then attach it to the framebuffer's colour attachment. The code might look something like this:
glGenTextures(1, &handle);
glBindTexture(GL_TEXTURE_2D, handle);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glBindTexture(GL_TEXTURE_2D, 0);
This code generates a texture handle, binds it, and creates an empty texture for the target GL_TEXTURE_2D. Because the format is specified as GL_RGB and the data type as GL_UNSIGNED_BYTE, after rendering to this texture the data would look like:
R G B R G B R G B R G B R G B R ...
23 25 40 1 4 67 255 255 255 0 0 1 3 5 55 72 ...
So there are 3 channels, each storing a single byte. Now I think I can attach this texture as a colour attachment of a framebuffer like this:
glGenFramebuffers(1, &fbohandle);
glBindFramebuffer(GL_FRAMEBUFFER, fbohandle);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, handle, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Hopefully my understanding of this is correct. Now consider trying to render depth values to a texture instead. This might look like:
glGenTextures(1, &handle);
glBindTexture(GL_TEXTURE_2D, handle);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glBindTexture(GL_TEXTURE_2D, 0);
glGenFramebuffers(1, &fbohandle);
glBindFramebuffer(GL_FRAMEBUFFER, fbohandle);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, handle, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
In this case I generate an empty texture that this time stores one float value per pixel for the depth (the texture has a single channel). So each value in our depth texture would have 32-bit precision. However, I have often seen it stated that the depth component is stored as 16 or 24 bits. For example, the Unity engine documentation on RenderTextures states that "On OpenGL it is the native "depth component" format (usually 24 or 16 bits), on Direct3D9 it is the 32 bit floating point ("R32F") format."
Indeed, looking at the Khronos wiki I see that there exist GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24, GL_DEPTH_COMPONENT32 and GL_DEPTH_COMPONENT32F. If I were to use GL_DEPTH_COMPONENT24 for the format of my texture (to create a depth texture with 24 bits of precision), what should I use as my data type? As far as I know there isn't a 24-bit float. Is my understanding of framebuffers and textures (as described here) correct in terms of how the data is stored? How would one create a 24 (or 16) bit precision depth buffer?

If I were to use GL_DEPTH_COMPONENT24 for the format of my texture (to create a depth texture with 24 bits of precision), what should I use as my data type?
GL_UNSIGNED_INT. But according to this answer, it's not important unless you want to access the depth buffer data from the CPU: https://stackoverflow.com/a/19307871/126995
How would one create a 24 (or 16) bit precision depth buffer?
16-bit depth buffers contain integers just like 24-bit buffers; the correct data type for them is GL_UNSIGNED_SHORT.
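Putting that together, a minimal sketch of a 24-bit (and, commented out, a 16-bit) depth texture attached to an FBO might look like the following; it reuses the handle, fbohandle, width and height variables from the question, and since no pixel data is uploaded the type parameter only needs to be a legal combination:
// 24-bit sized internal format; GL_UNSIGNED_INT as the (unused) transfer type
glGenTextures(1, &handle);
glBindTexture(GL_TEXTURE_2D, handle);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
// 16-bit variant: glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glBindTexture(GL_TEXTURE_2D, 0);
glGenFramebuffers(1, &fbohandle);
glBindFramebuffer(GL_FRAMEBUFFER, fbohandle);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, handle, 0);
glDrawBuffer(GL_NONE); // depth-only FBO: no colour output
glReadBuffer(GL_NONE);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the incomplete framebuffer here
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);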

Related

OpenGL: what is the BEST texture format for depth information?

I am new here, and I have a question about the texture format for depth information in OpenGL. Here is part of my code:
glGenTextures(1,&tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16UI_EXT, width, height, 0, GL_LUMINANCE_INTEGER_EXT, GL_UNSIGNED_SHORT, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
The question: on my Intel HD Graphics 5500 the glTexImage2D call fails when I upload data from the depth camera (unsigned short), but it works fine on NVIDIA (GeForce 940M). The GL error is 0x0502 (GL_INVALID_OPERATION).
Is the internal format GL_LUMINANCE16UI_EXT not supported on the Intel HD, or am I missing something? Is there a better format I could use?
BTW, I tried the internal format GL_DEPTH_COMPONENT16 with the format GL_DEPTH_COMPONENT, which makes that error go away, but then another problem appears in the code that runs after the snippet above:
glBindTexture(GL_TEXTURE_2D, tex);
frameBuffer.Bind();
glPushAttrib(GL_VIEWPORT_BIT);
glViewport(0, 0, renderBuffer.width, renderBuffer.height);
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// GlSlProgram bind (pseudo-code in the original post)
// ...
glGetUniformLocation(...);
glUniform3f(...);
// ...
glDrawArrays(GL_POINTS, 0, 1);
frameBuffer.Unbind();
// GlSlProgram unbind
glPopAttrib();
glFinish();
The GL error 0x0506 (GL_INVALID_FRAMEBUFFER_OPERATION) occurs at glClear and glDrawArrays when this format is used. I don't know how to fix that...
GL_LUMINANCE16UI_EXT is not a depth buffer format and will most likely not work. A list of available depth buffer formats is here.
Also, you probably shouldn't just bind the texture itself but instead attach it to the framebuffer with glFramebufferTexture2D and GL_DEPTH_ATTACHMENT.
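For illustration, a minimal sketch of that suggestion; it keeps the tex, width, height and data variables from the question, and fbo is a hypothetical framebuffer handle standing in for whatever frameBuffer.Bind() wraps:
// store the camera depth in a real depth format so it can serve as GL_DEPTH_ATTACHMENT
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, data);
glBindTexture(GL_TEXTURE_2D, 0);
// attach it to the framebuffer rather than merely leaving it bound as a texture
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, tex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // the attachment failed or is unsupported on this driver
}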

OpenGL ES Depth Buffer rendering (iOS simulator VS real device)

When rendering the depth buffer in the iOS simulator (see Getting the true z value from the depth buffer), all is fine.
But rendering on a real device gives a bad result: the depth is rendered with only a few distinct values (there is no gradient), as if the image were quantised to a 256-value colour range.
Here is the code for the fbo generation:
glGenFramebuffers(1, &sceneFBO);
glGenTextures(2, textures_scene);
glBindTexture(GL_TEXTURE_2D, textures_scene[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, widthResolution, heightResolution, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, textures_scene[1]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, widthResolution, heightResolution, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
//TODO: GL_DEPTH_COMPONENT cannot be GL_UNSIGNED_BYTE ?
glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textures_scene[0], 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, textures_scene[1], 0);
It seems that the only type that works for the GL_DEPTH_ATTACHMENT texture is GL_UNSIGNED_INT, but that is not enough to render the depth correctly...
PS: I don't want to render the scene again into GL_COLOR_ATTACHMENT0 with the z-values in order to get a correct depth (for performance reasons), nor use OpenGL ES 3.0 or the new Metal (I need iPhone 4S support).
Any idea?
There is no support for reading the depth buffer in ES 1 or 2. See the answer to this question: glReadPixels doesn't read depth buffer values on iOS
It seems that it's not possible to use GL_UNSIGNED_BYTE as the type parameter together with GL_DEPTH_COMPONENT as the internal format; only GL_UNSIGNED_SHORT and GL_UNSIGNED_INT are allowed according to the OES_depth_texture extension spec: https://www.khronos.org/registry/gles/extensions/OES/OES_depth_texture.txt
I finally put the z value in the alpha channel of the scene (I don't use transparency for this FBO):
color.w = (gl_Position.z - near) / (far - near);
A little MacGyver, but it does the job.

Creating and reading 1D textures in OpenGL 4.x

I have problems to use 1D textures in OpenGL 4.x.
I create my 1D texture this way (BTW: I removed my error checks to make the code clearer and shorter; usually after each GL call a BLUE_ASSERTEx(glGetError() == GL_NO_ERROR, "glGetError failed."); follows):
glGenTextures(1, &textureId_);
// bind texture
glBindTexture(GL_TEXTURE_1D, textureId_);
// tells OpenGL how the data that is going to be uploaded is aligned
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
BLUE_ASSERTEx(description.data, "Invalid data provided");
glTexImage1D(
GL_TEXTURE_1D, // Specifies the target texture. Must be GL_TEXTURE_1D or GL_PROXY_TEXTURE_1D.
0, // Specifies the level-of-detail number. Level 0 is the base image level. Level n is the nth mipmap reduction image.
GL_RGBA32F,
description.width,
0, // border: This value must be 0.
GL_RGBA,
GL_FLOAT,
description.data);
BLUE_ASSERTEx(glGetError() == GL_NO_ERROR, "glGetError failed.");
// texture sampling/filtering operation.
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri (GL_TEXTURE_1D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri (GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri (GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_1D, 0);
After the creation I try to read the pixel data of the created texture this way:
const int width = width_;
const int height = 1;
// Allocate memory for depth buffer screenshot
float* pixels = new float[width*height*sizeof(buw::vector4f)];
// bind texture
glBindTexture(GL_TEXTURE_1D, textureId_);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, pixels);
glBindTexture(GL_TEXTURE_1D, 0);
buw::Image_4f::Ptr img(new buw::Image_4f(width, height, pixels));
buw::storeImageAsFile(filename.toCString(), img.get());
delete[] pixels;
But the returned pixel data is different from the input pixel data (input: color ramp, output: black image).
Any ideas how to solve the issue? Maybe I am using a wrong API call.
Replacing glReadPixels with glGetTexImage fixes the issue.
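For clarity, glReadPixels reads from the current read framebuffer, not from whichever texture happens to be bound, which is why the output was black. A minimal sketch of the readback with glGetTexImage, keeping the variables from the snippet above:
// fetch the texel data of mip level 0 directly from the texture object
glBindTexture(GL_TEXTURE_1D, textureId_);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glGetTexImage(GL_TEXTURE_1D, 0, GL_RGBA, GL_FLOAT, pixels);
glBindTexture(GL_TEXTURE_1D, 0);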

How is glDrawBuffers related to drawing to a depth texture?

You can specify which buffers to draw to using glDrawBuffers(), for example:
GLenum buffers[] = { GL_COLOR_ATTACHMENT0 };
glDrawBuffers( 1, buffers );
OK, so that makes sense.
How about the depth texture, though?
So here is our creation of a framebuffer:
glGenFramebuffers(1, &m_uifboHandle);
// Initialize FBO
glBindFramebuffer(GL_FRAMEBUFFER, m_uifboHandle);
unsigned int m_uiTextureHandle[2];
glGenTextures( 2, m_uiTextureHandle );
glBindTexture( GL_TEXTURE_2D, m_uiTextureHandle[0]); // the color texture first
// Reserve space for our 2D render target
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, iInternalFormat, uiWidth, uiHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_uiTextureHandle[0], 0);
//now the depth
glBindTexture(GL_TEXTURE_2D, m_uiTextureHandle[1]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, uiWidth, uiHeight, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_uiTextureHandle[1], 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
So these steps basically:
1) make a framebuffer to render to
2) give us a location for attachment 0 to write to
3) create our depth attachment
Now how can I write to it? I have seen examples where rendering automatically writes to it, but I want to understand how binding in order to write to it works.
glDrawBuffers is not related to the depth buffer at all. It allows selecting the color buffers to render into, and there can be more than one color buffer per framebuffer. But there can be at most one depth buffer. So either there is no depth buffer at all, or there is one and the depth value is written to it (assuming the depth test is enabled and the depth mask is set to GL_TRUE).
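In other words, there is nothing to select for depth: if the bound framebuffer has a depth attachment and depth writes are enabled, every draw call updates it. A minimal sketch of the relevant state, reusing m_uifboHandle from the question (this is standard GL state, not code from the original answer):
glBindFramebuffer(GL_FRAMEBUFFER, m_uifboHandle);
// glDrawBuffers only routes fragment shader outputs to color attachments
GLenum buffers[] = { GL_COLOR_ATTACHMENT0 };
glDrawBuffers(1, buffers);
// the depth attachment is written implicitly, controlled by this state
glEnable(GL_DEPTH_TEST); // depth values are tested...
glDepthMask(GL_TRUE);    // ...and written to the depth attachment
// for a depth-only pass (e.g. a shadow map) color output can be disabled entirely
// glDrawBuffer(GL_NONE);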

Using the depth buffer for floating point render targets in OpenGL 2.1

I would like to use the depth buffer to store the depth values of particles in eye space in a 2D texture by using OpenGL 2.1 / GLSL 1.2.
So far I have found a way to use the color buffer:
// create texture
glGenTextures(1, &g_hDepthTexture);
glBindTexture(GL_TEXTURE_2D, g_hDepthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, g_windowWidth, g_windowHeight, 0, GL_RGBA, GL_FLOAT, 0);
// create framebuffer
glGenFramebuffersEXT(1, &g_hFBO);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, g_hFBO);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, g_hDepthTexture, 0);
However, I don't need the GBA components. Hence, I have tried to use the depth buffer, but the following code clamps each value in the texture to 0...1:
// create texture
glGenTextures(1, &g_hDepthTexture);
glBindTexture(GL_TEXTURE_2D, g_hDepthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, g_windowWidth, g_windowHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
// create framebuffer
glGenFramebuffersEXT(1, &g_hFBO);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, g_hFBO);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, g_hDepthTexture, 0);
I would like to know how to use the depth buffer (probably how to choose the correct internal format / format) so that the texture values are not clamped.
Normalized integer image formats are always clamped. That's why they're normalized integers. If you want an unclamped format, then you need floating point values.
I would suggest using an actual 1-channel floating-point image format, such as GL_R32F, or maybe GL_R16F, depending on how much precision you need. If you don't have GL 3.x hardware, you may be able to use GL_LUMINANCE32F_EXT, depending on what extensions are available.
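For illustration, a minimal sketch of such a single-channel float render target, reusing the variable names from the question; note that GL_R32F with GL_RED requires GL 3.0 or ARB_texture_rg, so on strict 2.1 hardware the luminance-float route mentioned above may be the only option:
// one-channel, unclamped 32-bit float color attachment
glGenTextures(1, &g_hDepthTexture);
glBindTexture(GL_TEXTURE_2D, g_hDepthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, g_windowWidth, g_windowHeight, 0, GL_RED, GL_FLOAT, 0);
// attach it as a color target, exactly like the RGBA32F version above
glGenFramebuffersEXT(1, &g_hFBO);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, g_hFBO);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, g_hDepthTexture, 0);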
BTW, if you're doing this for deferred rendering, don't bother. You can actually calculate the eye-space point directly from the regular depth buffer. Yes, really.