NPOT support in OpenGL for R8G8B8 texture - c++

I have created a sample application using GLEW and GLUT which reads a dds file and displays it. I manually read the dds file (an NPOT (886 x 317) file in R8G8B8) and create the data pointer (unsigned char*).
Then I prepared the texture using:
void prepareTexture(int w, int h, unsigned char* data) {
    /* Create and load texture to OpenGL */
    glGenTextures(1, &textureID); /* Texture name generation */
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
                 w, h,
                 0, GL_RGB, GL_UNSIGNED_BYTE,
                 data);
    glGenerateMipmap(GL_TEXTURE_2D);
}
In the figure above, the first image shows the original dds file and the second one is the rendering result of my application, which is obviously wrong. If I resize the image to 1024 x 512, both images look the same.
From the OpenGL Specification
I.3 Non-Power-Of-Two Textures
The restriction of textures to power-of-two dimensions has been relaxed for all texture targets, so that non-power-of-two textures may be specified without generating errors. Non-power-of-two textures were promoted from the ARB_texture_non_power_of_two extension.
From this, what I understand is that from OpenGL 2.0 onward we can use NPOT textures and OpenGL will handle them.
I tried using the DevIL image library to load the dds file but ended up with the same result. If I convert the image to RGBA and change the internal format and format of glTexImage2D to GL_RGBA, I get the correct result even if the dds file is NPOT.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
             w, h,
             0, GL_RGBA, GL_UNSIGNED_BYTE,
             data);
I tried the application on PCs with an NVIDIA card and a Radeon card, and both give the same result.
My sample source code can be downloaded from the link.
Can anybody tell me what is wrong with my application? Or does OpenGL not allow NPOT textures if the image is in R8G8B8?

This looks like an alignment issue. Add this before the glTexImage2D() call:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
This value specifies the row alignment of your data in bytes. The default value is 4.
With your texture width of 886 and 3 bytes per pixel for GL_RGB, each row is 886 * 3 = 2658 bytes, which is not a multiple of 4.
With the UNPACK_ALIGNMENT value at the default, the row size is rounded up to the next multiple of 4, which is 2660. So 2660 bytes are read for each row, which explains the increasing shift from row to row: the first row is read correctly, the second row is 2 bytes off, the third row 4 bytes off, the fourth row 6 bytes off, and so on.
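For completeness, a minimal sketch of the upload with the alignment fix applied (assuming the same textureID variable as in the question):
void prepareTexture(int w, int h, unsigned char* data) {
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
    /* Rows of 886 * 3 bytes are not 4-byte aligned, so request byte alignment */
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h,
                 0, GL_RGB, GL_UNSIGNED_BYTE, data);
    glGenerateMipmap(GL_TEXTURE_2D);
}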

Related

OpenGL compressed texture views valid formats

What internal format combinations would work for the following code example, if my intention is to have raw storage allocated as a non-compressed texture and the texture view interpreting it as BC5 / RGTC?
GLuint texId;
glGenTextures(1, &texId);
glBindTexture(GL_TEXTURE_3D, texId);
glTexStorage3D(GL_TEXTURE_3D, 1, GL_RGBA32UI, 4, 4, 16);
glBindTexture(GL_TEXTURE_3D, 0);
assertNoError();
GLuint viewId;
glGenTextures(1, &viewId);
glTextureView(viewId, GL_TEXTURE_3D, texId, GL_COMPRESSED_RG_RGTC2, 0, 1, 0, 1);
assertNoError();
glDeleteTextures(1, &viewId);
glDeleteTextures(1, &texId);
assertNoError();
This example failed with INVALID_OPERATION and the GL debug output message says:
Internal formats neither compatible nor identical.
To narrow my question by exclusion:
glCompressed* with pixel unpack buffer is not an option.
TexStorage cannot have the compressed internal format. This is GL 4.5 and that has been removed.
The OpenGL spec says the following pair is compatible: GL_RGTC2_RG, GL_COMPRESSED_RG_RGTC2. However, GL_RGTC2_RG is not a GL define or defined value in any header or the spec.
You cannot allocate storage of a non-compressed format and view it with a compressed format. Or vice-versa. You can copy between compressed and uncompressed formats via glCopyImageSubData. But you can't do the kind of "casting" that you're trying to do.
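A rough sketch of the glCopyImageSubData route, with illustrative sizes (a 4 x 4 GL_RGBA32UI texture holds the raw 16-byte blocks for a 16 x 16 GL_COMPRESSED_RG_RGTC2 texture; the copy region below is assumed to be in source texels):
GLuint rawTex, bcTex;

/* 4 x 4 uint4 texels, 16 bytes each = one RGTC2 block each */
glGenTextures(1, &rawTex);
glBindTexture(GL_TEXTURE_2D, rawTex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA32UI, 4, 4);
/* ... fill rawTex with the packed block data via glTexSubImage2D ... */

/* 16 x 16 RGTC2 texels = 4 x 4 blocks */
glGenTextures(1, &bcTex);
glBindTexture(GL_TEXTURE_2D, bcTex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_COMPRESSED_RG_RGTC2, 16, 16);

/* Copy the raw blocks into the compressed texture */
glCopyImageSubData(rawTex, GL_TEXTURE_2D, 0, 0, 0, 0,
                   bcTex,  GL_TEXTURE_2D, 0, 0, 0, 0,
                   4, 4, 1);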
Furthermore:
TexStorage cannot have the compressed internal format. This is GL 4.5 and that has been removed.
You cannot use generic compressed image formats, but specific formats (like GL_COMPRESSED_RG_RGTC2) are still available. Just not for 3D textures (BPTC can work with 3D textures, but not RGTC).
Vulkan has a mechanism for creating a VkImage of a compressed format from which you can then create a VkImageView with an appropriate uncompressed format (the reverse isn't allowed, but that doesn't really matter all that much). To do this, the image has to be created with the VK_IMAGE_CREATE_BLOCK_TEXEL_VIEW_COMPATIBLE_BIT creation flag, and the view must use a 32-bit unsigned int format, with sufficient components for each pixel of the view to correspond to the block byte size for the format.
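A rough sketch of that Vulkan path (assuming a valid VkDevice named device; memory allocation, error handling and cleanup are omitted):
#include <vulkan/vulkan.h>

VkImageView createBc5BlockView(VkDevice device)
{
    /* BC5 (RGTC2-equivalent) image; MUTABLE_FORMAT is required alongside
       BLOCK_TEXEL_VIEW_COMPATIBLE */
    VkImageCreateInfo imgInfo = {};
    imgInfo.sType         = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
    imgInfo.flags         = VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT |
                            VK_IMAGE_CREATE_BLOCK_TEXEL_VIEW_COMPATIBLE_BIT;
    imgInfo.imageType     = VK_IMAGE_TYPE_2D;
    imgInfo.format        = VK_FORMAT_BC5_UNORM_BLOCK;
    imgInfo.extent        = { 64, 64, 1 };
    imgInfo.mipLevels     = 1;
    imgInfo.arrayLayers   = 1;
    imgInfo.samples       = VK_SAMPLE_COUNT_1_BIT;
    imgInfo.tiling        = VK_IMAGE_TILING_OPTIMAL;
    imgInfo.usage         = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT;
    imgInfo.sharingMode   = VK_SHARING_MODE_EXCLUSIVE;
    imgInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;

    VkImage image = VK_NULL_HANDLE;
    vkCreateImage(device, &imgInfo, nullptr, &image);

    /* ... allocate and bind device memory for 'image' here (required before
       creating the view); omitted for brevity ... */

    /* One R32G32B32A32_UINT texel of the view covers one 16-byte BC5 block,
       so the view is 16 x 16 texels for a 64 x 64 BC5 image */
    VkImageViewCreateInfo viewInfo = {};
    viewInfo.sType            = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
    viewInfo.image            = image;
    viewInfo.viewType         = VK_IMAGE_VIEW_TYPE_2D;
    viewInfo.format           = VK_FORMAT_R32G32B32A32_UINT;
    viewInfo.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };

    VkImageView view = VK_NULL_HANDLE;
    vkCreateImageView(device, &viewInfo, nullptr, &view);
    return view;
}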

How to only read R and G components from the fragment shader output?

I am using offscreen rendering to texture for a simple GPU calculation. I am using
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, texSize, texSize, 0, GL_RGBA, GL_FLOAT, nullptr);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
to allocate storage for the texture and
glReadPixels(0, 0, texSize, texSize, GL_RGBA, GL_FLOAT, data);
to read out the computed data. The problem is that the output from the fragment shader I am interested in is only vec2, so the first two slots of the color attachment are populated and the other two are garbage. I then need to post-process data to only take two out of each four floats, which takes needless cycles and storage.
If it was one value, I'd use GL_RED; if it was three, I'd use GL_RGB in my glReadPixels. But I couldn't find a format that would read two values. I'm only using GL_RGBA for convenience, as it seems more natural to take 2 floats out of 4 than out of 3.
Is there another way which would read all the resulting vec2 values tightly packed? I thought of reading RED only, somehow convincing OpenGL to skip four bytes after each value, and then reading GREEN only into the same array to fill in the gaps. To this end I tried to study glPixelStore, but it does not seem to be intended for this purpose. Is this, or any other way, even possible?
If you only want to read the RG components of the image, you use a transfer format of GL_RG in your glReadPixels command.
However, that's going to be a slow read unless your image also only stores 2 channels. So your image's internal format should be GL_RG32F.
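A minimal sketch of that setup, assuming texSize and texture as in the question and a std::vector (from <vector>) for the readback buffer:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RG32F, texSize, texSize, 0, GL_RG, GL_FLOAT, nullptr);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);

/* ... render ... */

std::vector<float> data(size_t(texSize) * texSize * 2);   /* two floats per pixel */
glReadPixels(0, 0, texSize, texSize, GL_RG, GL_FLOAT, data.data());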

OpenGL FBO colour attachment shrinking

I have an FBO with 4 frame buffer textures. These textures are 4 different sizes.
texture 1 = 512 * 360
texture 2 = 256 * 180
texture 3 = 128 * 90
texture 4 = 64 * 45
The problem is that if texture 4 and texture 1 for example are attached using:
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, bl_64, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, bl_128, 0);
When I draw to texture 1, the image on the texture only takes up the size of texture 4.
m_blurrFBO.DrawingBind();
glUniformSubroutinesuiv(GL_FRAGMENT_SHADER, 1, &BlurrPass);
glViewport(0, 0, 512, 360);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
DrawQuad();
FBO::UnbindDrawing();
The reason I'm using 4 different textures with different sizes is that I'm downsampling the same image 4 times, each half the size of the last.
The confusing part is that the code has been tested on 5 different computers, all with either AMD or NVIDIA cards, and on those it works as expected. I have up-to-date drivers for my NVIDIA GTX 550; is this a known problem?
This problem is called "That's how OpenGL works."
The total size of a framebuffer is based on the smallest size, in each dimension, of all of the attached images. You are not allowed to render outside of any attached image. Even if you use write masking or draw buffers state so that you don't actually render to it, the available viewport size is always limited to the smallest size of the attached images.
As a general rule, do not attach an image to a framebuffer unless you are serious about rendering to it. If you want to do downsampling, swap FBOs or change attached images between each sampling pass.
Oh and BTW: it is undefined behavior (unless you're using GL 4.5 or ARB/NV_texture_barrier) to read from any texture object that is currently attached to the FBO. Again, write masks and draw buffers state is irrelevant; what matters is that the image is attached. So again, don't attach something unless you are writing to it.
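A sketch of the re-attach approach, with hypothetical names (blurFBO, sourceTex, downsampleTex[], DrawQuad) standing in for the question's objects:
const int widths[4]  = { 512, 256, 128, 64 };
const int heights[4] = { 360, 180,  90, 45 };

glBindFramebuffer(GL_FRAMEBUFFER, blurFBO);
for (int i = 0; i < 4; ++i) {
    /* Attach only the texture being rendered to in this pass */
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, downsampleTex[i], 0);
    glDrawBuffer(GL_COLOR_ATTACHMENT0);
    glViewport(0, 0, widths[i], heights[i]);
    /* Sample from the previous level, which is no longer attached */
    glBindTexture(GL_TEXTURE_2D, (i == 0) ? sourceTex : downsampleTex[i - 1]);
    DrawQuad();
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);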


Can I use a grayscale image with the OpenGL glTexImage2D function?

I have a texture which has only 1 channel, as it's a grayscale image. When I pass the pixels into glTexImage2D, it comes out red (obviously, because channel 1 is red; RGB).
glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RGBA,
    dicomImage->GetColumns(), dicomImage->GetRows(),
    0, GL_RGBA, GL_UNSIGNED_BYTE, pixelArrayPtr);
Do I change GL_RGBA? If so, what to?
Change it to GL_LUMINANCE. See https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml
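For example, a minimal sketch of the call with both the internal format and the pixel format set to GL_LUMINANCE (legacy/compatibility contexts only, since GL_LUMINANCE is deprecated in core profiles):
glTexImage2D(
    GL_TEXTURE_2D, 0, GL_LUMINANCE,
    dicomImage->GetColumns(), dicomImage->GetRows(),
    0, GL_LUMINANCE, GL_UNSIGNED_BYTE, pixelArrayPtr);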
In the fragment shader, you can write:
uniform sampler2D A;
vec3 result = vec3(texture(A, TexCoord).r);
In the cpp file, you can write:
glTexImage2D(
GL_TEXTURE_2D, 0, GL_RED,
dicomImage->GetColumns(), dicomImage->GetRows(),
0, GL_RED, GL_UNSIGNED_BYTE, pixelArrayPtr);
It appears that I should use GL_LUMINANCE instead of GL_RGBA for the 3rd argument.
Edit (in reply to comments):
When I set the 7th argument to GL_LUMINANCE (as well as the 3rd), the picture comes out completely distorted. With the DICOM pixel format, it appears that the 7th argument must be GL_RGBA for some reason.
The strange behavior is because I'm using the DICOM standard. The particular DICOM reader I am using outputs integer pixel values (as pixel values may exceed the normal maximum of 255). For some strange reason, the combination of telling OpenGL that I am using an RGBA format while passing in integer values rendered a perfect image.
Because I was truncating the DICOM > 255 pixel values anyway, it seemed logical to copy the values in to a GLbyte array. However, after doing so, a SIGSEGV (segmentation fault) occurred when calling glTexImage2D. Changing the 7th parameter to GL_LUMINANCE (as is normally required) returned the functionality to normal.
Weird eh?
So, a note to all developers using the DICOM image format: you need to convert the integer array to a char array before passing it to glTexImage2D, or just set the 7th argument to GL_RGBA (the latter is probably not recommended).
You would use the GL_LUMINANCE format in old versions of OpenGL, but in modern (3.0+) OpenGL versions GL_LUMINANCE is deprecated, so the new way of doing it is to use the GL_RED format. That would give you a red texture, so to get around it you create a custom shader, as the answers above have shown. In that shader you grab the red component of the texture (the only one that holds data) and set the green and blue channels to the red channel's value. That converts it to grayscale, because grayscale textures have all three RGB channels the same and the alpha channel set to 1.
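A minimal sketch of that modern path, under the assumption of a single-channel 8-bit source (names such as grayTex, width, height and pixelArrayPtr are illustrative):
GLuint grayTex;
glGenTextures(1, &grayTex);
glBindTexture(GL_TEXTURE_2D, grayTex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   /* rows of 1-byte texels */
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height,
             0, GL_RED, GL_UNSIGNED_BYTE, pixelArrayPtr);

and in the fragment shader:

uniform sampler2D grayTex;
in vec2 TexCoord;
out vec4 FragColor;

void main() {
    float g = texture(grayTex, TexCoord).r;   /* only the red channel holds data */
    FragColor = vec4(g, g, g, 1.0);
}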