OpenGL FBO depth format and type confusion

I'm a bit confused about the internal format, format and type parameters. What about the depth attachment point?
If I'm using a RenderBuffer, I think this is the valid code if I don't want to use stencil:
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32F, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRB);
However if I want to be able to read the depth values, I have to attach a texture to the depth attachment point. So I have to call a glTexImage2D function with parameters "internal format", "format" and "type".
In this case which internal format, format and type should I choose? Can I use the following combinations for a depth attachment? (in the order of: internal format, format and type)
GL_R32F, GL_RED, GL_FLOAT
GL_DEPTH_COMPONENT32, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT
GL_DEPTH_COMPONENT32F, GL_DEPTH_COMPONENT, GL_FLOAT
Is the GL_UNSIGNED_INT type valid for the 2nd case? What does that really mean? Will it allocate 4 bytes per fragment? In some tutorials they are using GL_UNSIGNED_BYTE for the type parameter. Which is the correct one?
Thanks
Edit:
Clarified my question about which parameters I'm interested in.

Depth values are not color values. As such, if you want to store depth values in a texture, the texture must use an internal format that contains depth information.
The pixel transfer format/type parameters, even if you're not actually passing data, must still be reasonable with respect to the internal format. Since the internal format contains depth information, your pixel transfer format must specify depth information: GL_DEPTH_COMPONENT.
As for the pixel transfer type, you should read 32F back as GL_FLOAT.
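A minimal sketch of that setup, assuming placeholder names (depthTex, width, height) and a framebuffer already bound as GL_FRAMEBUFFER:
GLuint depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
// Depth-renderable internal format; the transfer format/type still have to describe
// depth data (GL_DEPTH_COMPONENT, GL_FLOAT) even though no pixels are uploaded here.
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, width, height, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Attach the texture instead of the renderbuffer, so the depth values can be sampled later.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);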

Related

OpenGL compressed texture views valid formats

What internal format combinations would work for the following code example, if my intention is to have raw storage allocated as a non-compressed texture and a texture view interpreting it as BC5 / RGTC?
GLuint texId;
glGenTextures(1, &texId);
glBindTexture(GL_TEXTURE_3D, texId);
glTexStorage3D(GL_TEXTURE_3D, 1, GL_RGBA32UI, 4, 4, 16);
glBindTexture(GL_TEXTURE_3D, 0);
assertNoError();
GLuint viewId;
glGenTextures(1, &viewId);
glTextureView(viewId, GL_TEXTURE_3D, texId, GL_COMPRESSED_RG_RGTC2, 0, 1, 0, 1);
assertNoError();
glDeleteTextures(1, &viewId);
glDeleteTextures(1, &texId);
assertNoError();
This example failed with INVALID_OPERATION and the GL debug output message says:
Internal formats neither compatible nor identical.
To narrow my question by exclusion:
glCompressed* with pixel unpack buffer is not an option.
TexStorage cannot have the compressed internal format. This is GL 4.5 and that has been removed.
The OpenGL spec says the following pair is compatible: GL_RGTC2_RG, GL_COMPRESSED_RG_RGTC2. However, GL_RGTC2_RG is not a GL define or defined value in any header or in the spec.
You cannot allocate storage of a non-compressed format and view it with a compressed format. Or vice-versa. You can copy between compressed and uncompressed formats via glCopyImageSubData. But you can't do the kind of "casting" that you're trying to do.
Furthermore:
TexStorage cannot have the compressed internal format. This is GL 4.5 and that has been removed.
You cannot use generic compressed image formats, but specific formats (like GL_COMPRESSED_RG_RGTC2) are still available. Just not for 3D textures (BPTC can work with 3D textures, but not RGTC).
Vulkan has a mechanism for creating a VkImage of a compressed format from which you can then create a VkImageView with an appropriate uncompressed format (the reverse isn't allowed, but that doesn't really matter all that much). To do this, the image has to be created with the VK_IMAGE_CREATE_BLOCK_TEXEL_VIEW_COMPATIBLE_BIT creation flag, and the view must use a 32-bit unsigned int format, with sufficient components for each pixel of the view to correspond to the block byte size for the format.
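For the glCopyImageSubData route mentioned above, here is a rough 2D sketch (texture names and sizes are mine, and my reading of the copy rules is that the region dimensions are given in source texels, with each 128-bit GL_RGBA32UI texel matching one 4x4 RGTC2 block):
GLuint rawTex, bc5Tex;
// Uncompressed staging texture: one RGBA32UI texel per 128-bit BC5/RGTC2 block.
glGenTextures(1, &rawTex);
glBindTexture(GL_TEXTURE_2D, rawTex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA32UI, 4, 4); // 4x4 texels = 16 blocks
// ...upload the raw block data with glTexSubImage2D(..., GL_RGBA_INTEGER, GL_UNSIGNED_INT, blocks)...
// Compressed destination covering the same 16 blocks (16x16 texels).
glGenTextures(1, &bc5Tex);
glBindTexture(GL_TEXTURE_2D, bc5Tex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_COMPRESSED_RG_RGTC2, 16, 16);
// Copy the blocks across; the formats are "compatible" for copying even though
// they cannot be aliased through glTextureView.
glCopyImageSubData(rawTex, GL_TEXTURE_2D, 0, 0, 0, 0,
                   bc5Tex, GL_TEXTURE_2D, 0, 0, 0, 0,
                   4, 4, 1);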

How to only read R and G components from the fragment shader output?

I am using offscreen rendering to texture for a simple GPU calculation. I am using
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, texSize, texSize, 0, GL_RGBA, GL_FLOAT, nullptr);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
to allocate storage for the texture and
glReadPixels(0, 0, texSize, texSize, GL_RGBA, GL_FLOAT, data);
to read out the computed data. The problem is that the output from the fragment shader I am interested in is only vec2, so the first two slots of the color attachment are populated and the other two are garbage. I then need to post-process data to only take two out of each four floats, which takes needless cycles and storage.
If it was one value, I'd use GL_RED, if it was three, I'd use GL_RGB in my glReadPixels. But I couldn't find a format that would read two values. I'm only using GL_RGBA for convenience as it seems more natural to take 2 floats out of 2×2 than out of 3.
Is there another way which would read all the resulting vec2 tightly packed? I thought of reading RED only, somehow convincing OpenGL to skip four bytes after each value, and then reading GREEN only into the same array to fill in the gaps. To this end I tried to study glPixelStore, but it does not seem to be for this purpose. Is this, or any other way, even possible?
If you only want to read the RG components of the image, you use a transfer format of GL_RG in your glReadPixels command.
However, that's going to be a slow read unless your image also only stores 2 channels. So your image's internal format should be GL_RG32F.
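A sketch of that change, reusing texSize and texture from the question (the fragment shader would declare a vec2 output):
// Two-channel storage matches the vec2 the shader writes.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RG32F, texSize, texSize, 0, GL_RG, GL_FLOAT, nullptr);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
// ...render...
// Read back tightly packed float pairs: 2 * texSize * texSize values.
std::vector<GLfloat> data(static_cast<size_t>(texSize) * texSize * 2); // needs <vector>
glReadPixels(0, 0, texSize, texSize, GL_RG, GL_FLOAT, data.data());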

OpenGL: Passing a texture that has the external format GL_RED but the internal format GL_RGBA

So I have a texture that has the external format GL_RED, and the internal format GL_RGBA.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bitmap->width, bitmap->height, 0, layout, GL_UNSIGNED_BYTE, bitmap->data);
I would like to have the texture stored as (1,1,1,r) instead of (r,0,0,0).
I wouldn't like to recompute the entire bitmap as an RGBA one, and I don't want to create a new shader. Is it possible to tell OpenGL how to interpret the uploaded data?
You should avoid such divergences between internal format and the data you pass. If you want your texture to have a single color channel that is a normalized, unsigned byte, the correct way to spell that is with GL_R8 as the internal format. The texture will be stored as a single value of red, with the other channels getting filled in at texture access time with 0, 0, 1 in that order.
You can modify how texture data is accessed with the texture swizzle setting. This is a per-texture setting. If you want to receive the data in the shader as (1, 1, 1, r), you can do that with this swizzle setting:
GLint swizzleMask[] = {GL_ONE, GL_ONE, GL_ONE, GL_RED};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
Note that this doesn't change how the data is "stored"; the texture will always be a single-channel, 8-bit unsigned normalized texture. It affects how the shader accesses the texture's data.
Note that you could do this within the shader itself, but really, it's easier to employ a swizzle mask.
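Put together, a sketch using the bitmap fields from the question:
// Single-channel, 8-bit normalized storage; the upload data is just the red channel.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, bitmap->width, bitmap->height, 0,
             GL_RED, GL_UNSIGNED_BYTE, bitmap->data);
// Make samples come back as (1, 1, 1, r) without touching the data or the shader.
GLint swizzleMask[] = {GL_ONE, GL_ONE, GL_ONE, GL_RED};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);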
Just use GL_RED for the internal format.
When you sample the texture in the shader, fill in the rest of the components (GBA, not R) with the values you wish.

What is the difference between glGenTextures and glGenSamplers?

I am following a tutorial to handle loading in textures, it has this method in it :
void CTexture::CreateEmptyTexture(int a_iWidth, int a_iHeight, GLenum format)
{
    glGenTextures(1, &uiTexture);
    glBindTexture(GL_TEXTURE_2D, uiTexture);
    // We must handle this because of the internal format parameter
    if (format == GL_RGBA || format == GL_BGRA)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, a_iWidth, a_iHeight, 0, format, GL_UNSIGNED_BYTE, NULL);
    else if (format == GL_RGB || format == GL_BGR)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, a_iWidth, a_iHeight, 0, format, GL_UNSIGNED_BYTE, NULL);
    else
        glTexImage2D(GL_TEXTURE_2D, 0, format, a_iWidth, a_iHeight, 0, format, GL_UNSIGNED_BYTE, NULL);
    glGenSamplers(1, &uiSampler);
}
glGenSamplers is undefined; I assume that's because it needs GL 3.3 or higher, and the labs at my university have GL 3.2, so I can't use it.
I am struggling to work out the difference between glGenTextures and glGenSamplers. Are they interchangeable?
They can't be used interchangeably. Texture objects and sampler objects are different things, but they are somewhat related in the GL.
A texture object contains the image data, so it represents what we typically call just "texture". However, traditionally, the texture object in the GL also contains the sampler state. This controls parameters influencing the actual sampling operation of the texture, like filtering, texture coordinate wrap modes, border color, LOD bias and so on. This is not part of what one usually thinks of when the term "texture" is mentioned.
This combination of texture data and sampler state in a single object is also not how GPUs work. The sampler state is totally independent of the texture image data: a texture can be sampled with GL_NEAREST filtering in one situation and with GL_LINEAR in another. To reflect this, the GL_ARB_sampler_objects extension was created.
A sampler object contains only the state for sampling a texture; it does not contain the image data itself. If a sampler object is currently bound, the sampler state stored in the texture itself is completely overridden, so only the sampler object defines these parameters. If no sampler object is bound (sampler name 0), the old behavior applies and the per-texture sampling parameters are used.
Using sampler objects is not strictly necessary. In many use cases, defining the sampling parameters in the texture object itself is quite suitable, and you can always switch the state in the texture object between draw calls. However, it can be more efficient to use samplers: binding a new texture then does not require the GL to update any sampler state. Also, with samplers you can do tricks like binding the same texture to different units while using different sampling modes.
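As a rough illustration of that last point (names are placeholders, and this needs GL 3.3 / ARB_sampler_objects, which is exactly what the 3.2 lab machines lack), the same texture can be bound to two units with different sampling state:
GLuint sampNearest, sampLinear;
glGenSamplers(1, &sampNearest);
glSamplerParameteri(sampNearest, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glSamplerParameteri(sampNearest, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glGenSamplers(1, &sampLinear);
glSamplerParameteri(sampLinear, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glSamplerParameteri(sampLinear, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Same image data on both units, sampled two different ways.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, uiTexture);
glBindSampler(0, sampNearest); // unit 0: nearest filtering
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, uiTexture);
glBindSampler(1, sampLinear);  // unit 1: linear filtering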

Can I use a grayscale image with the OpenGL glTexImage2D function?

I have a texture which has only 1 channel as it's a grayscale image. When I pass the pixels in to glTexImage2D, it comes out red (obviously because channel 1 is red; RGB).
glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RGBA,
    dicomImage->GetColumns(), dicomImage->GetRows(),
    0, GL_RGBA, GL_UNSIGNED_BYTE, pixelArrayPtr);
Do I change GL_RGBA? If so, what to?
Change it to GL_LUMINANCE. See https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml
In the fragment shader, you can write:
uniform sampler2D A;
vec3 result = vec3(texture(A, TexCoord).r);
In the cpp file, you can write:
glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RED,
    dicomImage->GetColumns(), dicomImage->GetRows(),
    0, GL_RED, GL_UNSIGNED_BYTE, pixelArrayPtr);
It appears that I should use GL_LUMINANCE instead of GL_RGBA for the 3rd argument.
Edit (in reply to comments):
When I set the 7th argument to GL_LUMINANCE (as well as the 3rd), the picture goes completely distorted. With the DICOM pixel format, it appears that the 7th argument must be GL_RGBA for some reason.
The strange behavior is because I'm using the DICOM standard. The particular DICOM reader I am using outputs integer pixel values (as pixel values may exceed the normal maximum of 255). For some strange reason the combination of telling OpenGL that I am using an RGBA format, but passing in integer values rendered a perfect image.
Because I was truncating the DICOM > 255 pixel values anyway, it seemed logical to copy the values in to a GLbyte array. However, after doing so, a SIGSEGV (segmentation fault) occurred when calling glTexImage2D. Changing the 7th parameter to GL_LUMINANCE (as is normally required) returned the functionality to normal.
Weird eh?
So, a note to all developers using the DICOM image format: you need to convert the integer array to a char array before passing it to glTexImage2D, or just set the 7th argument to GL_RGBA (the latter is probably not recommended).
You would use the GL_LUMINANCE format in old versions of OpenGL, but GL_LUMINANCE is deprecated in modern (3.0+) OpenGL, so the new way is to use the GL_RED format. That alone would result in a red texture, so to get around it you create a custom shader, as the answers above have shown, which grabs the red component of the texture (the only channel with data) and sets the green and blue channels to the red channel's value. That converts it to grayscale, because a grayscale texture has all three RGB channels the same and the alpha/transparency channel set to 1.
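Alternatively, if GL 3.3 (or ARB_texture_swizzle) is available, the texture swizzle shown in an earlier answer avoids the custom shader entirely; a sketch using the call from the question:
// Upload stays single-channel.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED,
             dicomImage->GetColumns(), dicomImage->GetRows(),
             0, GL_RED, GL_UNSIGNED_BYTE, pixelArrayPtr);
// Broadcast red into green and blue so samples come back grayscale, with alpha = 1.
GLint swizzle[] = {GL_RED, GL_RED, GL_RED, GL_ONE};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzle);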