Why is my OpenGL texture render darkened? (The RGB values are squared) - opengl

I've been using glow to do some OpenGL rendering in Rust. It's going well, but rendering to a texture gives me different results than rendering straight to the screen. I know a naive way to solve it, I just don't understand it.
The image on the right is rendered directly to the screen, while the one on the lower left is created by rendering the same mesh to a texture of the same size using the same draw function (the pixel data is then passed to egui's ColorImage). As you can see, it's too dark, and the same is true when I save the pixel data to a file (using Rust's image crate).
But if a channel's value is full in the render, it's also full in the output, almost as if the RGB values were squared over the range 0.0-1.0 before being converted to RGB8. Sure enough, I tried undoing that in the pixel data (flipped_buffer[i1] = ((buffer[i2] as f32 / 255.).sqrt() * 255.).round() as u8;), and it started looking correct!
So WHY? It's such a specific thing, and I can even imagine it being a useful way of mapping colors (since eyes distinguish darker values better than brighter ones), but why did it happen here?
The code for rendering to a texture is based on the tutorial here, but using glow in Rust instead of C/C++, and RGBA instead of RGB.
Texture Creation Excerpt:
let gl_texture = self.gl.create_texture()?;
self.gl.bind_texture(glow::TEXTURE_2D, Some(gl_texture));
self.gl.tex_image_2d(glow::TEXTURE_2D, 0, glow::RGBA as i32, width as i32, height as i32, 0, glow::RGBA, glow::UNSIGNED_BYTE, None);
self.gl.tex_parameter_i32(glow::TEXTURE_2D, glow::TEXTURE_MAG_FILTER, glow::NEAREST as i32);
self.gl.tex_parameter_i32(glow::TEXTURE_2D, glow::TEXTURE_MIN_FILTER, glow::NEAREST as i32);
Pixel Data Read Excerpt:
let mut buffer = vec![0 as u8; (width * height * 4) as usize];
self.gl.get_tex_image(
    glow::TEXTURE_2D,
    0,
    glow::RGBA,
    glow::UNSIGNED_BYTE,
    glow::PixelPackData::Slice(buffer.as_mut_slice()),
);

You seem to have GL_FRAMEBUFFER_SRGB enabled somewhere. The effect of this is that when you render to the screen, the linear values coming from the shader are gamma-compressed with the sRGB transfer function (which isn't exactly a square root, but you got the idea). When you render to a non-sRGB texture, that conversion doesn't happen at all, so the texture ends up holding linear values. But almost all 8-bit file formats and APIs assume that 8-bit data is already in the sRGB color space, so when you save it to a file and open it with a viewer that expects sRGB, it looks darker. That "egui" library seems to expect sRGB images too.
The solution is to change your internal texture format to GL_SRGB8_ALPHA8:
self.gl.tex_image_2d(glow::TEXTURE_2D, 0, glow::SRGB8_ALPHA8 as i32, width as i32, height as i32, 0, glow::RGBA, glow::UNSIGNED_BYTE, None);
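For reference, the exact sRGB encode that GL_FRAMEBUFFER_SRGB applies is a piecewise curve rather than a plain square root; a minimal C++ sketch of both directions (written from the sRGB definition, not from your code) looks like this:
#include <cmath>

// Encode a linear value in [0, 1] to sRGB (what GL_FRAMEBUFFER_SRGB does on write).
float linear_to_srgb(float x)
{
    return (x <= 0.0031308f) ? 12.92f * x
                             : 1.055f * std::pow(x, 1.0f / 2.4f) - 0.055f;
}

// Decode an sRGB value in [0, 1] back to linear (what sampling an sRGB texture does).
float srgb_to_linear(float x)
{
    return (x <= 0.04045f) ? x / 12.92f
                           : std::pow((x + 0.055f) / 1.055f, 2.4f);
}
With GL_SRGB8_ALPHA8 as the internal format, the GPU performs this encode when the fragment is written, so the bytes you read back with get_tex_image are already sRGB-encoded, which is what egui and common image viewers expect.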

Related

Colors in the range [0, 255] don't correspond to colors in the range [0, 1]

I am trying to implement in my shader a way of reading normals from a normal map. However, I found a problem when reading colors that prevents it.
I thought that one color such as (0, 0, 255) (blue) was equivalent to (0, 0, 1) in the shader. However, recently I found out that, for instance, if I pass a texture with the color (128, 128, 255), it is not equivalent to ~(0.5, 0.5, 1) in the shader.
In a fragment shader I write the following code:
vec3 col = texture(texSampler[0], vec2(1, 1)).rgb; // texture with color (128, 128, 255)
if (inFragPos.x > 0)
    outColor = vec4(0.5, 0.5, 1, 1); // I get (188, 188, 255)
else
    outColor = vec4(col, 1);         // I get (128, 128, 255)
In x < 0 I get the color (128, 128, 255), which is expected. But in x > 0 I get the color (188, 188, 255), which I didn't expect; I expected both colors to be the same. What am I missing?
But in x>0 I get the color (188, 188, 255), which I didn't expect.
Did you render these values to a swapchain image, by chance?
If so, swapchain images are almost always in the sRGB colorspace. Which means that all floats written to them will be expected to be in a linear colorspace and therefore will be converted into sRGB.
If the source image was also in the sRGB colorspace, reading from it will reverse the transformation into a linear RGB colorspace. But since these are inverse transformations, the overall output you get will be the same as the input.
If you want to treat data in a texture as data rather than as colors, you must not use image formats that use the sRGB colorspace. And swapchain images are almost always sRGB, so you'll have to use a user-created image for such outputs.
Also, 128 will never yield exactly 0.5. 128/255 is slightly larger than 0.5.
After some research, I could solve it, so I will explain the solution. Nicol Bolas' answer shed some light on the problem too (thank you!).
In the old days, images were in (linear) RGB. Today, images are expected to be in (non-linear) sRGB. The sRGB color space gives more resolution to darker colors and less to lighter colors, because human eye distinguishes darker colors better.
Internet images (including normal maps) are almost always in sRGB by convention. When I analyze the colors of an image with Paint, I get the sRGB colors. When I pass that image as a texture to the shader, it is automatically converted to (linear) RGB (if you told Vulkan to do so, i.e. the image uses an sRGB format), because the linear RGB color space is more appropriate for doing arithmetic on colors. Then, when the shader outputs the result to an sRGB target, it is automatically converted back to sRGB.
My mistake was to consider the color information I got from the source image (using Paint) to be linear RGB, while it was really sRGB. When the color was converted to linear RGB in the shader, I was confused because I expected the same color I got in Paint. Since I want to use the texture as data rather than as color, I see two ways to solve this:
Save the normals in a (linear) RGB image and tell Vulkan about it (most correct option).
Re-encode the sampled value to sRGB in the shader (my solution). Since the data was saved in the image as sRGB values, converting the sampled (linear) value back to sRGB in the shader recovers the correct data.
Now, talking about Vulkan specifically, we have to specify the color space for the surface format and the swap chain (for instance VK_COLOR_SPACE_SRGB_NONLINEAR_KHR). This way, the swapchain/display interprets the values correctly when the image is presented. We also have to specify the format (and thus color space) of the Vulkan images we create.
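As a rough illustration of that (the helper name and the availableFormats parameter are placeholders, not from the answer), choosing an sRGB surface format for the swapchain could look like this; the chosen pair then goes into VkSwapchainCreateInfoKHR::imageFormat and imageColorSpace:
#include <vulkan/vulkan.h>
#include <vector>

// Prefer a B8G8R8A8 sRGB surface format so that linear values written by the
// shader are automatically encoded to sRGB when stored in the swapchain image.
VkSurfaceFormatKHR pickSrgbSurfaceFormat(const std::vector<VkSurfaceFormatKHR>& availableFormats)
{
    for (const VkSurfaceFormatKHR& f : availableFormats) {
        if (f.format == VK_FORMAT_B8G8R8A8_SRGB &&
            f.colorSpace == VK_COLOR_SPACE_SRGB_NONLINEAR_KHR) {
            return f;
        }
    }
    return availableFormats.front(); // fall back to whatever the surface offers first
}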
References
Linear Vs Non-linear RGB: Great answer from Dan Hulme
Vulkan color space: Vulkan related info
Normal mapping 1 & Normal mapping 2

How can I display an image in OpenGL using the system color profile?

I'm loading a texture using OpenGL like this:
glTexImage2D(
    GL_TEXTURE_2D,
    0,
    GL_RGBA,
    texture.width,
    texture.height,
    0,
    GL_RGBA,
    GL_UNSIGNED_BYTE,
    texture.pixels.data());
The issue is that the color of the image looks different from the one I see when I open the file on the system image viewer.
In the screenshot you can see that the yellow on the face displayed in the system image viewer has the color #FEDE57, but the one displayed in the OpenGL window is #FEE262.
Is there any flag or format I could use to match the same color calibration?
Displaying this same image as a Vulkan texture looks fine, so I can rule out an issue in how I load the image data.
In the end it seems like the OpenGL framebuffer doesn't get color corrected, so you have to tell the OS to do it for you:
#include <Cocoa/Cocoa.h>
#include <SDL.h>
#include <SDL_syswm.h>

void prepareNativeWindow(SDL_Window *sdlWindow)
{
    SDL_SysWMinfo wmi;
    SDL_VERSION(&wmi.version);
    SDL_GetWindowWMInfo(sdlWindow, &wmi);
    NSWindow *win = wmi.info.cocoa.window;
    // Tell macOS the window contents are sRGB so the OS color-manages them.
    [win setColorSpace:[NSColorSpace sRGBColorSpace]];
}
I found this solution here https://github.com/google/filament/blob/main/libs/filamentapp/src/NativeWindowHelperCocoa.mm
@Tokenyet and @t.niese are pretty much correct.
You need to raise your final colour's RGB values to the power of approximately 1.0/2.2. Something along the lines of this:
FragColor.rgb = pow(FragColor.rgb, vec3(1.0/gamma)); // gamma --> float = 2.2
Note: this should be the final/last statement in the fragment shader. Do all your lighting and colour calculations before this, or else the result will be weird because you will be mixing linear and non-linear lighting (calculations).
The reason you need gamma correction is that the human eye does not perceive brightness the same way the display outputs it.
If the physical light intensity (lux) doubles, your eye does not see it as twice as bright; perceived brightness follows roughly a power-law (often approximated as logarithmic) relationship. The conventional exponent relating the two spaces is 2.2 (or 1.0/2.2 if you want to go the inverse way, which is what you are looking for).
For more info: Look at this great tutorial on gamma correction!
Note 2: This is an approximation. Each computer, program, and API has its own automatic gamma correction method. Your system image viewer may use different gamma correction (or none at all) compared to OpenGL.
Note 3: By the way, if this does not work, there are manual methods to adjust the colour in the fragment shader, if you know the colour values you are after. For example:
#FEDE57 = RGB(254, 222, 87)
which, converted into OpenGL colour coordinates, is
(254, 222, 87) / 255 = vec3(0.9961, 0.8706, 0.3412)
Both images and displays have a gamma value.
If GL_FRAMEBUFFER_SRGB is not enabled then:
the system assumes that the color written by the fragment shader is in whatever colorspace the image it is being written to is. Therefore, no colorspace correction is performed.
( khronos: Framebuffer - Colorspace )
So in that case you need to figure out the gamma value of the image you read in and the gamma of the output medium, and do the corresponding conversion between them.
Getting the gamma of the output medium, however, is not always easy.
Therefore it is preferable to enable GL_FRAMEBUFFER_SRGB.
If GL_FRAMEBUFFER_SRGB is enabled however, then if the destination image is in the sRGB colorspace […], then it will assume the shader's output is in the linear RGB colorspace. It will therefore convert the output from linear RGB to sRGB.
( khronos: Framebuffer - Colorspace )
So in that case you only need to ensure that the colors you set in the fragment shader don't have gamma correction applied but are linear.
So what you normally do is to get the gamma information of the image, which is done with a certain function of the library you use to read the image.
If the gamma of the image you read is gamma, you can calculate the exponent needed to invert it with inverseGamma = 1. / gamma, and then apply pixelColor.channel = std::pow(pixelColor.channel, inverseGamma) to each color channel of each pixel to make the color space linear.
You then use these values in the linear color space as your texture data, as sketched below.
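A small CPU-side sketch of that recipe (the 8-bit buffer layout and the function name are my own assumptions, not from the answer):
#include <cmath>
#include <cstdint>
#include <vector>

// Convert 8-bit gamma-encoded color data to linear floats suitable for a
// floating-point (or 16-bit) texture. 'gamma' follows the convention used
// above: it is the file's encoding gamma, so 1/gamma undoes it.
std::vector<float> linearizePixels(const std::vector<std::uint8_t>& pixels, float gamma)
{
    const float inverseGamma = 1.0f / gamma;
    std::vector<float> linear(pixels.size());
    for (std::size_t i = 0; i < pixels.size(); ++i) {
        float v = pixels[i] / 255.0f;           // normalize to [0, 1]
        linear[i] = std::pow(v, inverseGamma);  // remove the encoding gamma
    }
    return linear;
}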
You could also use something like GL_SRGB8 for the texture, but then you would need to convert the pixel values you read from the image into the sRGB colorspace, which is roughly done by first linearizing them and then applying the sRGB gamma encoding (approximately 2.2).

Why is RGBA making my PNG image blackish?

I am trying to capture a PNG image with a transparent background. I have set GL_RGBA as the format in glReadPixels, but the output PNG image looks a little blackish, or with saturated color. If the background is not transparent, that is, if I use the GL_RGB format in glReadPixels, the expected image is captured.
Note: In both cases I am capturing a translucent (partially transparent) shaded cube. If the cube is completely opaque, the RGBA format works fine.
Any ideas as to why this is happening for transparent background?
Blackish image with RGBA format
Image with RGB format
The cube looks darker because it's semitransparent, and whatever you use to view the image blends the semitransparent parts of it with black background.
You might argue that the cube in the picture shouldn't be semitransparent since it's drawn on top of a completely opaque background, but the problem is that the widely-used
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
blending mode (which you seem to use as well) is known to produce incorrect alpha values when used to draw semitransparent objects.
Usually you can't see it because alpha is discarded when drawing to the default framebuffer, but it becomes prominent when inspecting outputs of glReadPixels.
As you noticed, to solve it you can simply discard the alpha component.
But if you for some reason need to have a proper blending without those problems, you need
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
Note that for this function to work, both source and destination images have to be in premultiplied-alpha format. That is, you need to do color.rgb *= color.a
as a last step in your shaders.
The inverse operation (color.rgb /= color.a) can be used to convert an image back to the regular format, but if your images are completely opaque, this step is unnecessary.
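If you do go the premultiplied route and still want a straight-alpha PNG out of glReadPixels, a rough sketch of that inverse step on the CPU (RGBA8 byte layout assumed; the function name is illustrative):
#include <cstdint>
#include <vector>

// Convert premultiplied RGBA8 pixels (as read back with glReadPixels) to
// straight (non-premultiplied) alpha, which is what PNG encoders expect.
void unpremultiply(std::vector<std::uint8_t>& rgba)
{
    for (std::size_t i = 0; i + 3 < rgba.size(); i += 4) {
        const std::uint8_t a = rgba[i + 3];
        if (a == 0)
            continue;                              // fully transparent: RGB is undefined anyway
        for (int c = 0; c < 3; ++c) {
            unsigned v = rgba[i + c] * 255u / a;   // rgb / alpha, rescaled to 0..255
            rgba[i + c] = static_cast<std::uint8_t>(v > 255u ? 255u : v);
        }
    }
}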

Good way to deal with alpha channels in 8-bit bitmap? - OpenGL - C++

I am loading bitmaps with OpenGL to texture a 3D mesh. Some of these bitmaps have alpha channels (transparency) for some of the pixels, and I need to figure out the best way to (1) obtain the transparency value for each pixel, and (2) render the mesh with the transparency applied.
Does anyone have a good example of this? Does OpenGL support this?
First of all, it's generally best to convert your bitmap data to 32-bit so that each channel (R, G, B, A) gets 8 bits. When you upload your texture, specify a 32-bit format.
Then when rendering, you'll need to glEnable(GL_BLEND); and set the blend function, e.g. glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);. This tells OpenGL to mix the RGB of the texture with that of the background, using the alpha of your texture.
If you're doing this to 3D objects, you might also want to turn off back-face culling (so that you see the back of the object through the front) and sort your triangles back-to-front (so that the blends happen in the correct order).
If your source bitmap is 8-bit (ie: using a palette with one colour specified as the transparency mask), then it's probably easiest to convert that to RGBA, setting the alpha value to 0 when the colour matches your transparency mask.
Some hints to make things (maybe) look better:
Your alpha channel is going to be an all-or-nothing affair (either 0x00 or 0xff), so apply some blur algorithm to get softer edges, if that's what you're after.
For texels (texture pixels) with an alpha of zero (fully transparent), replace the RGB colour with that of the closest non-transparent texel. That way, when texel colours are interpolated during filtering, they won't be blended towards the original transparency colour from your BMP.
If your pixmap is 8-bit single channel, it is either grayscale or uses a palette. The first thing you need to do is convert the pixmap data into RGBA format. For this, allocate a buffer large enough to hold a 4-channel pixmap with the dimensions of the original file. Then, for each pixel of the pixmap, use that pixel's value as an index into the palette (lookup table) and put that color value into the corresponding pixel of the RGBA buffer. Once finished, upload it to OpenGL using glTexImage2D, as sketched below.
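A rough sketch of that CPU-side expansion (the RGBA struct, the 256-entry palette parameter, and the function name are illustrative, not from the answer):
#include <cstdint>
#include <vector>
// Assumes an OpenGL header / loader (e.g. glad or GLEW) is already included.

struct RGBA { std::uint8_t r, g, b, a; };

// Expand an 8-bit indexed pixmap into an RGBA buffer via its palette,
// then upload the result with glTexImage2D.
void uploadIndexedTexture(const std::vector<std::uint8_t>& indices,
                          const RGBA palette[256],
                          int width, int height)
{
    std::vector<std::uint8_t> rgba(static_cast<std::size_t>(width) * height * 4);
    for (std::size_t i = 0; i < indices.size(); ++i) {
        const RGBA& c = palette[indices[i]];   // palette lookup per pixel
        rgba[i * 4 + 0] = c.r;
        rgba[i * 4 + 1] = c.g;
        rgba[i * 4 + 2] = c.b;
        rgba[i * 4 + 3] = c.a;                 // 0 for the transparency-mask colour
    }
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
}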
If your GPU supports fragment shaders (very likely), you can do that LUT transformation in the shader instead: upload the 8-bit pixmap as a GL_RED or GL_LUMINANCE 2D texture, and upload the palette as a 1D GL_RGBA texture. Then in the fragment shader:
uniform sampler2D texture;
uniform sampler1D palette_lut;

void main()
{
    float palette_index = texture2D(texture, gl_TexCoord[0].st).r;
    vec4 color = texture1D(palette_lut, palette_index);
    gl_FragColor = color;
}
Blended rendering conflicts with the Z-buffer algorithm, so you must sort your geometry back-to-front for things to look right. As long as this applies to objects as a whole it is rather simple, but it becomes tedious if you need to sort the faces of a mesh for every frame you render. A way to avoid this is to break meshes down into convex submeshes (a mesh that is already convex cannot, of course, be broken down further). Then use the following method:
Enable face culling
for convex_submesh in sorted(meshes, far to near):
    set face culling to front faces (i.e. the back side gets rendered)
    render convex_submesh
    set face culling to back faces (i.e. the front side gets rendered)
    render convex_submesh again
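Expressed as plain OpenGL calls, the loop above might look roughly like this (Submesh and drawSubmesh are hypothetical placeholders for your own mesh type and draw function, not part of OpenGL):
#include <vector>
// Assumes an OpenGL header / loader is already included.

// Render each convex submesh twice: back faces first, then front faces,
// so blending within the submesh happens in the correct order.
void drawSortedBlended(const std::vector<Submesh>& backToFront)
{
    glEnable(GL_CULL_FACE);
    for (const Submesh& sub : backToFront) {   // already sorted far to near
        glCullFace(GL_FRONT);                  // cull front faces -> back side is drawn
        drawSubmesh(sub);
        glCullFace(GL_BACK);                   // cull back faces -> front side is drawn
        drawSubmesh(sub);
    }
}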

How to load OpenGL texture from ARGB NSImage without swizzling?

I'm writing an app for Mac OS >= 10.6 that creates OpenGL textures from images loaded from disk.
First, I load the image into an NSImage. Then I get the NSBitmapImageRep from the image and load the pixel data into a texture using glTexImage2D.
For RGB or RGBA images, it works perfectly. I can pass in either 3 bytes/pixel of RGB, or 4 bytes of RGBA, and create a 4-byte/pixel RGBA texture.
However, I just had a tester send me a JPEG image (shot on a Canon EOS 50D, not sure how it was imported) that seems to have ARGB byte ordering.
I found a post in this thread (http://www.cocoabuilder.com/archive/cocoa/12782-coregraphics-over-opengl.html) that suggests I specify a format parameter of GL_BGRA to glTexImage2D, and a type of GL_UNSIGNED_INT_8_8_8_8_REV.
That seems logical, and seems like it should work, but it doesn't. I get different, but still wrong, color values.
I wrote "swizzling" (manual byte-swapping) code that shuffles the ARGB image data into a new RGBA buffer, but this byte-by-byte swizzling is going to be slow for large images.
I would also like to understand how to make this work "the right way".
What is the trick to loading ARGB data into an RGBA OpenGL texture?
My current call to glTexImage2D looks like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0, format, GL_UNSIGNED_BYTE, pixelBuffer);
where format is either GL_RGB or GL_RGBA.
I tried using:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixelBuffer);
when my image rep reports that it is in "alpha first" order.
As a second question, I've also read that most graphics cards' "native" format is GL_BGRA, so creating a texture in that format results in faster texture drawing. The speed of texture drawing is more important than the speed of loading the texture, so "swizzling" the data to BGRA format up-front would be worth it. I tried asking OpenGL to create a BGRA texture by specifying an "internalformat" of GL_BGRA, but that results in a completely black image. My interpretation of the docs makes me expect that glTexImage2D would byte-swap the data as it reads it if the source and internal formats are different, but instead I get an OpenGL error 0x500 (GL_INVALID_ENUM) when I try to specify an "internalformat" of GL_BGRA. What am I missing?
I'm not aware of a way to load ARGB data directly into a texture, but there is a better workaround than doing the swizzle on the CPU. You can do it very efficiently on the GPU instead:
Load the ARGB data into the temporary RGBA texture.
Draw a full-screen quad with this texture, while rendering into the target texture, using a simple pixel shader.
Continue to load other resources, no need to stall the GPU pipeline.
Example pixel shader:
#version 130
uniform sampler2DRect unit_in;
void main() {
    gl_FragColor = texture(unit_in, gl_FragCoord.xy).gbar;
}
You're rendering it with OpenGL, right?
If you want to do it the easy way, you can have your pixel shader swizzle the colors in realtime. This is no problem at all for the graphics card; they're made to do far more complicated stuff :).
You can use a shader like this:
uniform sampler2D image;
void main()
{
    // Swizzle ARGB (read as RGBA) into RGBA order.
    gl_FragColor = texture2D(image, gl_TexCoord[0].st).gbar;
}
If you don't know about shaders, read this tutorial: http://www.lighthouse3d.com/opengl/glsl/
This question is old, but in case anyone else is looking for this, I found a not-strictly-safe but effective solution. The problem is that each 32-bit pixel has A as the first byte rather than the last.
NSBitmapImageRep.bitmapData gives you a pointer to that first byte, which you give to OpenGL as the pointer to its pixels. Simply add 1 to that pointer and you point at the RGB values in the right order, with the A of the next pixel at the end.
The problems with this are that the last pixel will take its A value from one byte beyond the end of the image, and that the A values are all off by one pixel. But like the asker, I get this while loading a JPG, so alpha is irrelevant anyway. This doesn't appear to cause a problem, but I wouldn't claim that it's "safe".
The name of a texture whose data is in ARGB format.
GLuint argb_texture;
An array of tokens to set ARGB swizzle in one function call.
static const GLenum argb_swizzle[] =
{
GL_GREEN, GL_BLUE, GL_ALPHA, GL_RED
};
Bind the ARGB texture
glBindTexture(GL_TEXTURE_2D, argb_texture);
Set all four swizzle parameters in one call to glTexParameteriv
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, argb_swizzle);
I know this works, but I am not sure if argb_swizzle is in the right order. Please correct me if this is not right. I am not very clear on how GL_GREEN, GL_BLUE, GL_ALPHA, and GL_RED are determined in argb_swizzle.
As The OpenGL Programming Guide suggested:
...which is a mechanism that allows you to rearrange the component order of texture data on the fly as it is read by the graphics hardware.
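To answer the ordering question: after uploading ARGB bytes with format GL_RGBA, the texture's stored R component actually holds A, G holds R, B holds G, and A holds B. GL_TEXTURE_SWIZZLE_RGBA lists, for each output channel in R, G, B, A order, which stored component it should read from, which is exactly what your array encodes. A commented sketch of the same setup (requires GL 3.3 or ARB_texture_swizzle; the function name is illustrative):
// Assumes a GL 3.3+ header / loader (e.g. glad or GLEW) is already included.

// Stored components after uploading ARGB bytes as GL_RGBA:
//   R slot = A, G slot = R, B slot = G, A slot = B.
static const GLint argb_swizzle[] = {
    GL_GREEN,  // output R <- stored G (the real R)
    GL_BLUE,   // output G <- stored B (the real G)
    GL_ALPHA,  // output B <- stored A (the real B)
    GL_RED     // output A <- stored R (the real A)
};

void setArgbSwizzle(GLuint argb_texture)
{
    glBindTexture(GL_TEXTURE_2D, argb_texture);
    glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, argb_swizzle);
}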