OpenGL - RGB vs sRGB texture created in GIMP

GIMP uses the sRGB color space by default. I have created two simple PNG textures in GIMP, each filled with the single RGB color [0, 0, 128]. The first texture was exported without the "save gamma" option, the second with it. In my OpenGL application I load the textures using the following parameters:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, img.getWidth(), img.getHeight(), 0, GL_BGR, GL_UNSIGNED_BYTE, img.accessPixels());
As you can see, I don't use an sRGB internal format. Then I read the texture back this way:
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
The B component contains the same value, 128, for both textures. I expected the B component to equal 128 for the texture exported without the "save gamma" option and to be greater than 128 for the texture exported with it, due to the gamma curve. Is my assumption correct? Maybe GIMP doesn't export the textures as I assumed.

When I create a texture in GIMP I choose a color from the sRGB palette, so the stored value [0, 0, 128] is already defined in the sRGB color space. The "save gamma" option only adds metadata to the file; it does not change the stored pixel values, which is why both textures read back 128.
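If the goal is to have OpenGL treat the data as sRGB and decode it to linear values when sampling, the internal format has to say so. A minimal sketch, not from the original post, reusing the same BGR upload as above (img is a placeholder for the image loader object):

glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8,             /* sRGB internal format: texels are decoded to linear when sampled */
             img.getWidth(), img.getHeight(), 0,
             GL_BGR, GL_UNSIGNED_BYTE, img.accessPixels());

Note that glGetTexImage should still return the stored sRGB-encoded bytes; the decode is applied when the texture is sampled in a shader, not when it is read back.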

Related

OpenGL: Passing a texture that has the external format GL_RED but the internal format GL_RGBA

So I have a texture that has the external format GL_RED, and the internal format GL_RGBA.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bitmap->width, bitmap->height, 0, layout, GL_UNSIGNED_BYTE, bitmap->data);
I would like to have the texture stored as (1, 1, 1, r) instead of (r, 0, 0, 0).
I wouldn't like to recompute the entire bitmap as an RGBA one, and I don't want to create a new shader. Is it possible to tell OpenGL how to interpret the uploaded data?
You should avoid such divergences between internal format and the data you pass. If you want your texture to have a single color channel that is a normalized, unsigned byte, the correct way to spell that is with GL_R8 as the internal format. The texture will be stored as a single value of red, with the other channels getting filled in at texture access time with 0, 0, 1 in that order.
You can modify how texture data is accessed with the texture swizzle setting. This is a per-texture setting. If you want to receive the data in the shader as (1, 1, 1, r), you can do that with this swizzle setting:
GLint swizzleMask[] = {GL_ONE, GL_ONE, GL_ONE, GL_RED};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
Note that this doesn't change how the data is "stored"; the texture remains a single-channel, 8-bit unsigned normalized texture. It only affects how the shader accesses the texture's data.
Note that you could do this within the shader itself, but really, it's easier to employ a swizzle mask.
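Putting both pieces together, a minimal upload path might look like the following sketch (bitmap->width, bitmap->height and bitmap->data are taken from the question; textureID is a placeholder for a name created with glGenTextures):

glBindTexture(GL_TEXTURE_2D, textureID);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);                  /* single-channel rows may not be 4-byte aligned */
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8,                   /* one normalized unsigned byte per texel */
             bitmap->width, bitmap->height, 0,
             GL_RED, GL_UNSIGNED_BYTE, bitmap->data);
GLint swizzleMask[] = {GL_ONE, GL_ONE, GL_ONE, GL_RED}; /* shader then sees (1, 1, 1, r) */
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);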
Just use GL_RED for the internal format.
When you sample the texture in the shader, fill in the remaining components (G, B, and A; not R) with whatever values you wish.

OpenGL / Qt - image distorted when displaying video [duplicate]

I have created a sample application using GLEW and GLUT which reads a DDS file and displays it. I manually read the DDS file (an NPOT (886 x 317) file in R8G8B8) and create the data pointer (unsigned char*).
Then I prepared the texture using
void prepareTexture(int w, int h, unsigned char* data) {
    /* Create and load texture to OpenGL */
    glGenTextures(1, &textureID);            /* Texture name generation */
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
                 w, h,
                 0, GL_RGB, GL_UNSIGNED_BYTE,
                 data);
    glGenerateMipmap(GL_TEXTURE_2D);
}
In the figure above, the first image shows the original DDS file and the second one is the rendering result of my application, which is obviously wrong. If I resize the image to 1024 x 512, both images look the same.
From the OpenGL Specification
I.3 Non-Power-Of-Two Textures
The restriction of textures to power-of-two dimensions has been relaxed for all texture targets, so that non-power-of-two textures may be specified without generating errors. Non-power-of-two textures were promoted from the ARB_texture_non_power_of_two extension.
From this, my understanding is that starting with OpenGL 2.0 we can use NPOT textures and OpenGL will handle them.
I tried using the DevIL image library to load the DDS file but ended up with the same result. If I convert the image to RGBA and change both the internal format and the format of glTexImage2D to GL_RGBA, I get the correct result even though the DDS file is NPOT.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
             w, h,
             0, GL_RGBA, GL_UNSIGNED_BYTE,
             data);
I tried the application on PCs with an NVIDIA card and a Radeon card, and both give the same result.
My sample source code can be downloaded from the link
Can anybody tell me what is wrong with my application? Or does OpenGL not allow NPOT textures if the image is in R8G8B8?
This looks like an alignment issue. Add this before the glTexImage2D() call:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
This value specifies the row alignment of your data in bytes. The default value is 4.
With your texture width of 886 and 3 bytes per pixel for GL_RGB, each row is 886 * 3 = 2658 bytes, which is not a multiple of 4.
With UNPACK_ALIGNMENT at its default value, the row size is rounded up to the next multiple of 4, which is 2660. So 2660 bytes are read for each row, which explains the increasing shift from row to row: the first row is correct, the second row is 2 bytes off, the third row 4 bytes off, the fourth row 6 bytes off, and so on.
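For illustration, this is roughly how the row stride falls out of the default alignment rule (a sketch with made-up variable names, using the dimensions from the question):

int bytesPerPixel = 3;                                   /* GL_RGB, GL_UNSIGNED_BYTE */
int rowBytes      = 886 * bytesPerPixel;                 /* 2658 bytes of actual data per row */
int alignment     = 4;                                   /* default GL_UNPACK_ALIGNMENT */
int stride        = ((rowBytes + alignment - 1) / alignment) * alignment;  /* 2660: what GL reads per row */

With glPixelStorei(GL_UNPACK_ALIGNMENT, 1), stride equals rowBytes and the rows line up again.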

OpenGL RGBA to grayscale from 1 component

Let's say I have a 32 bpp pixel array, but I am only using the blue channel/component of the pixels. I need to upload this pixel array to a texture in a grayscale/luminance format. For example, if I have a color (a:0, r:0, g:0, b:x), it needs to become (0, x, x, x) in the texture.
I am using OpenGL v1.5.
OpenGL up to version 2 had the texture internal format GL_LUMINANCE, which does exactly what you want.
In OpenGL 3 this was replaced with the internal format GL_RED (sized form GL_R8), which is a single-component texture. In a shader you can use a swizzle like
gl_FrontColor.rgb = texture(tex, texCoord).rrr; /* tex and texCoord are placeholder names for the sampler and texture coordinate */
But there's also the option to set what you might call a "static" swizzle in the texture parameters:
glTexParameteri(GL_TEXTURE_…, GL_TEXTURE_SWIZZLE_R, GL_RED);  /* GL_TEXTURE_… stands for your texture target, e.g. GL_TEXTURE_2D */
glTexParameteri(GL_TEXTURE_…, GL_TEXTURE_SWIZZLE_G, GL_RED);
glTexParameteri(GL_TEXTURE_…, GL_TEXTURE_SWIZZLE_B, GL_RED);
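Since the question targets OpenGL 1.5, the GL_LUMINANCE route is the one that applies. A minimal sketch, assuming the 32 bpp data is laid out B, G, R, A in memory (pixels, width, height and textureID are illustrative names, not from the question):

/* Copy the blue channel into a tightly packed single-channel buffer. */
unsigned char* blue = new unsigned char[width * height];
for (int i = 0; i < width * height; ++i)
    blue[i] = pixels[i * 4];                 /* first byte of each pixel, assuming B,G,R,A memory order */

glPixelStorei(GL_UNPACK_ALIGNMENT, 1);       /* tightly packed rows may not be 4-byte aligned */
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height,
             0, GL_LUMINANCE, GL_UNSIGNED_BYTE, blue);
delete[] blue;                               /* GL has copied the data by the time glTexImage2D returns */

A GL_LUMINANCE texture is sampled as (L, L, L, 1), i.e. all three color channels carry the blue value.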

Can I use a grayscale image with the OpenGL glTexImage2D function?

I have a texture which has only one channel, as it's a grayscale image. When I pass the pixels into glTexImage2D, it comes out red (obviously, because channel 1 is red in RGB).
glTexImage2D(
GL_TEXTURE_2D, 0, GL_RGBA,
dicomImage->GetColumns(), dicomImage->GetRows(),
0, GL_RGBA, GL_UNSIGNED_BYTE, pixelArrayPtr);
Do I change GL_RGBA? If so, what to?
Change it to GL_LUMINANCE. See https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml
In the fragment shader, you can write:
uniform sampler2D A;
vec3 result = vec3(texture(A, TexCoord).r);
In the .cpp file, you can write:
glTexImage2D(
GL_TEXTURE_2D, 0, GL_RED,
dicomImage->GetColumns(), dicomImage->GetRows(),
0, GL_RED, GL_UNSIGNED_BYTE, pixelArrayPtr);
It appears that I should use GL_LUMINANCE instead of GL_RGBA for the 3rd argument.
Edit (in reply to comments):
When I set the 7th argument to GL_LUMINANCE (as well as the 3rd), the picture becomes completely distorted. With the DICOM pixel format, it appears that the 7th argument must be GL_RGBA for some reason.
The strange behavior is because I'm using the DICOM standard. The particular DICOM reader I am using outputs integer pixel values (as pixel values may exceed the normal maximum of 255). For some strange reason, the combination of telling OpenGL that I am using an RGBA format while passing in integer values rendered a perfect image.
Because I was truncating the DICOM pixel values above 255 anyway, it seemed logical to copy the values into a GLbyte array. However, after doing so, a SIGSEGV (segmentation fault) occurred when calling glTexImage2D. Changing the 7th parameter to GL_LUMINANCE (as is normally required) returned the functionality to normal.
Weird eh?
So, a note to all developers using the DICOM image format: You need to convert the integer array to a char array before passing it to glTexImage2D, or just set the 7th argument to GL_RGBA (the latter is probably not recommended).
You would use the GL_LUMINANCE format in old versions of OpenGL, but in modern (3.0+) OpenGL versions GL_LUMINANCE is deprecated, so the new way of doing it is to use the GL_RED format. That alone would result in a red texture, so to get around it you should create a custom shader, as the answers above have shown. In that shader you grab the red component of the texture (as it's the only one with data) and set the green and blue channels to the red channel's value. That converts it to grayscale, because a grayscale texture has all three RGB channels the same and the alpha/transparency channel set to 1.
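If a custom shader is not an option, the texture-swizzle approach from the earlier answers works here as well (OpenGL 3.3+ / ARB_texture_swizzle; a sketch, with textureID standing in for your texture name):

/* Replicate the red channel into green and blue at sampling time, so the
 * single-channel GL_RED texture is seen as grayscale without a custom shader. */
GLint swizzle[] = {GL_RED, GL_RED, GL_RED, GL_ONE};
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzle);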