How can I display an image in OpenGL using the system color profile? - c++

I'm loading a texture using OpenGL like this:
glTexImage2D(
    GL_TEXTURE_2D,
    0,
    GL_RGBA,              // internal format
    texture.width,
    texture.height,
    0,
    GL_RGBA,              // source format
    GL_UNSIGNED_BYTE,     // source type
    texture.pixels.data());
The issue is that the colors of the image look different from what I see when I open the file in the system image viewer.
In the screenshot you can see that the yellow on the face shown in the system image viewer is #FEDE57, while the one displayed in the OpenGL window is #FEE262.
Is there any flag or format I could use to match the same color calibration?
Displaying this same image as a Vulkan texture looks fine, so I can rule out an issue in how I load the image data.

In the end it seems the OpenGL framebuffer doesn't get color corrected, so you have to tell the OS to do it for you:
#include <Cocoa/Cocoa.h>
#include <SDL_syswm.h>

// Compiled as Objective-C++ (.mm).
void prepareNativeWindow(SDL_Window *sdlWindow)
{
    // Grab the native Cocoa window that backs the SDL window.
    SDL_SysWMinfo wmi;
    SDL_VERSION(&wmi.version);
    SDL_GetWindowWMInfo(sdlWindow, &wmi);
    NSWindow *win = wmi.info.cocoa.window;
    // Declare the window contents as sRGB so macOS color-matches them
    // to the display's profile.
    [win setColorSpace:[NSColorSpace sRGBColorSpace]];
}
I found this solution here https://github.com/google/filament/blob/main/libs/filamentapp/src/NativeWindowHelperCocoa.mm
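For context, here is a hedged sketch of how such a helper might be called from an SDL 2 program; the window title, size, and flags are assumptions and not part of the original answer.

#include <SDL.h>

// Defined in the Objective-C++ helper above.
void prepareNativeWindow(SDL_Window *sdlWindow);

int main(int argc, char **argv)
{
    (void)argc; (void)argv;
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *window = SDL_CreateWindow(
        "sRGB window", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        800, 600, SDL_WINDOW_OPENGL);
    // Attach the sRGB color space to the native NSWindow before rendering,
    // so macOS color-matches the GL framebuffer to the display profile.
    prepareNativeWindow(window);
    SDL_GLContext context = SDL_GL_CreateContext(window);
    // ... create GL resources and run the render loop ...
    SDL_GL_DeleteContext(context);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}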

@Tokenyet and @t.niese are pretty much correct.
You need to raise your final colour's RGB values to a power of approximately 1.0/2.2. Something along the lines of this:
FragColor.rgb = pow(FragColor.rgb, vec3(1.0/gamma)); // gamma --> float = 2.2
Note: this should be the final/last statement in the fragment shader. Do all your lighting and colour calculations before this, or else the result will look wrong because you will be mixing linear and non-linear lighting calculations.
The reason you need to do gamma correction is that the human eye perceives colour differently from what the computer outputs.
If the physical light intensity (lux) doubles, your eye does not see it as twice as bright; the relationship between physical intensity and perceived brightness is non-linear, roughly a power law, and displays have a similarly non-linear response. The conventional exponent relating the two spaces is 2.2 (or 1.0/2.2 if you want the inverse, which is what you are looking for).
For more info: Look at this great tutorial on gamma correction!
Note 2: This is an approximation. Each computer, program, and API has its own gamma correction method. Your system image viewer may use a different gamma correction method (or none at all) compared to OpenGL.
Note 3: By the way, if this does not work, there are manual ways to adjust the colour in the fragment shader.
#FEDE57 = RGB(254, 222, 87)
which, converted into OpenGL colour coordinates, is
(254, 222, 87) / 255 = vec3(0.9961, 0.8706, 0.3412)
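As a rough illustration of that conversion and of the pow(..., 1.0/2.2) correction above, here is a small standalone C++ sketch; the 2.2 exponent is the usual approximation, not the exact sRGB transfer function.

#include <cmath>
#include <cstdio>

int main()
{
    // #FEDE57 as 8-bit channels.
    const double rgb8[3] = {254.0, 222.0, 87.0};
    const double gamma = 2.2;
    for (int i = 0; i < 3; ++i)
    {
        double c = rgb8[i] / 255.0;                   // normalized: 0.9961, 0.8706, 0.3412
        double corrected = std::pow(c, 1.0 / gamma);  // same operation as the shader line above
        std::printf("channel %d: %.4f -> %.4f\n", i, c, corrected);
    }
    return 0;
}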

Both images and displays have a gamma value.
If GL_FRAMEBUFFER_SRGB is not enabled then:
the system assumes that the color written by the fragment shader is in whatever colorspace the image it is being written to is. Therefore, no colorspace correction is performed.
( khronos: Framebuffer - Colorspace )
So in that case you need to figure out the gamma value of the image you read in and the gamma of the output medium, and do the corresponding conversion between them.
Getting the gamma of the output medium, however, is not always easy.
Therefore it is preferable to enable GL_FRAMEBUFFER_SRGB.
If GL_FRAMEBUFFER_SRGB is enabled however, then if the destination image is in the sRGB colorspace […], then it will assume the shader's output is in the linear RGB colorspace. It will therefore convert the output from linear RGB to sRGB.
( khronos: Framebuffer - Colorspace )
So in that case you only need to ensure that the colors you set in the fragment shader don't have gamma correction applied but are linear.
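As a minimal sketch of the GL_FRAMEBUFFER_SRGB route, assuming an SDL 2 window (the attribute is SDL's way of requesting an sRGB-capable default framebuffer; other windowing libraries have their own equivalents):

// Request an sRGB-capable default framebuffer before creating the GL context.
SDL_GL_SetAttribute(SDL_GL_FRAMEBUFFER_SRGB_CAPABLE, 1);
SDL_Window *window = SDL_CreateWindow("demo", SDL_WINDOWPOS_CENTERED,
                                      SDL_WINDOWPOS_CENTERED, 800, 600,
                                      SDL_WINDOW_OPENGL);
SDL_GLContext context = SDL_GL_CreateContext(window);

// With an sRGB destination, the linear colors written by the fragment shader
// are encoded to sRGB on write.
glEnable(GL_FRAMEBUFFER_SRGB);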
So what you normally do is get the gamma information of the image, which is usually exposed by the library you use to read the image.
If the gamma of the image you read is gamma, you can compute the value to invert it with inverseGamma = 1. / gamma, and then use pixelColor.channel = std::pow(pixelColor.channel, inverseGamma) for each color channel of each pixel to make the color values linear.
You then use these values in linear color space as the texture data.
You could also use something like GL_SRGB8 for the texture, but then you would need to convert the pixel values you read from the image to the sRGB colorspace, which is roughly done by first linearizing them (with the image's gamma) and then applying a gamma of 2.2 (the sRGB encoding).
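For the GL_SRGB8 route, a sketch of the upload, assuming the 8-bit RGBA pixels are already sRGB-encoded (as most image files are); the texture fields reuse the struct from the question:

// GL_SRGB8_ALPHA8 tells OpenGL the texels are sRGB-encoded; the GPU converts
// them to linear RGB automatically whenever the shader samples the texture.
glTexImage2D(GL_TEXTURE_2D,
             0,
             GL_SRGB8_ALPHA8,
             texture.width,
             texture.height,
             0,
             GL_RGBA,
             GL_UNSIGNED_BYTE,
             texture.pixels.data());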

Related

Colors in range [0, 255] don't correspond to colors in range [0, 1]

I am trying to implement in my shader a way of reading normals from a normal map. However, I found a problem when reading colors that prevents it.
I thought that one color such as (0, 0, 255) (blue) was equivalent to (0, 0, 1) in the shader. However, recently I found out that, for instance, if I pass a texture with the color (128, 128, 255), it is not equivalent to ~(0.5, 0.5, 1) in the shader.
In a fragment shader I write the following code:
vec3 col = texture(texSampler[0], vec2(1, 1)).rgb; // texture with color (128, 128, 255)
if (inFragPos.x > 0)
    outColor = vec4(0.5, 0.5, 1, 1); // I get (188, 188, 255)
else
    outColor = vec4(col, 1);         // I get (128, 128, 255)
In x<0 I get the color (128, 128, 255), which is expected. But in x>0 I get the color (188, 188, 255), which I didn't expect. I expected both colors to be the same. What do I not know? What am I missing?
But in x>0 I get the color (188, 188, 255), which I didn't expect.
Did you render these values to a swapchain image, by chance?
If so, swapchain images are almost always in the sRGB colorspace. Which means that all floats written to them will be expected to be in a linear colorspace and therefore will be converted into sRGB.
If the source image was also in the sRGB colorspace, reading from it will reverse the transformation into a linear RGB colorspace. But since these are inverse transformations, the overall output you get will be the same as the input.
If you want to treat data in a texture as data rather than as colors, you must not use image formats that use the sRGB colorspace. And swapchain images are almost always sRGB, so you'll have to use a user-created image for such outputs.
Also, 128 will never yield exactly 0.5. 128/255 is slightly larger than 0.5.
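To illustrate the point about formats, here is a hedged Vulkan sketch; only the fields relevant to the format choice are shown, the rest of the image setup is assumed:

VkImageCreateInfo info{};
info.sType     = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
info.imageType = VK_IMAGE_TYPE_2D;
// Color textures meant to be viewed: let the hardware decode sRGB on sampling.
// info.format = VK_FORMAT_R8G8B8A8_SRGB;
// Data textures (normal maps, lookup tables): keep the bytes untouched.
info.format    = VK_FORMAT_R8G8B8A8_UNORM;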
After some research, I could solve it, so I will explain the solution. Nicol Bolas' answer shed some light on the problem too (thank you!).
In the old days, images were in (linear) RGB. Today, images are expected to be in (non-linear) sRGB. The sRGB color space gives more resolution to darker colors and less to lighter colors, because human eye distinguishes darker colors better.
Internet images (including normal maps) are almost always in sRGB by convention. When I analyze the colors of an image with Paint, I get the sRGB colors. When I pass that image as a texture to the shader, it is automatically converted to linear RGB (if you told Vulkan to do so), because linear RGB is more appropriate for doing operations on colors. Then, when the shader outputs the result, it is automatically converted back to sRGB.
My mistake was to consider the color information I got from the source image (using Paint) to be linear RGB, while it was really sRGB. When the color was converted to linear RGB in the shader, I was confused because I expected the same color I got in Paint. Since I want to use the texture as data rather than as color, I see a few ways to solve this:
Save normals in a linear RGB image and tell Vulkan about this (most correct option).
Transform the image back to sRGB in the shader (my solution). Since the data was saved in the image as sRGB colors, it should be read in the shader as sRGB in order to get the correct data.
Now, talking about Vulkan, we have to specify the color space for the surface format and the swapchain (for instance: VK_COLOR_SPACE_SRGB_NONLINEAR_KHR). This way, the swapchain/display knows how to interpret the values when the image is presented. Also, we have to specify the color space of the Vulkan images we create.
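For instance, a typical surface-format pick when creating the swapchain could look like the sketch below; the `available` vector is assumed to come from vkGetPhysicalDeviceSurfaceFormatsKHR.

#include <vector>
#include <vulkan/vulkan.h>

// Prefer an sRGB swapchain format so the presentation engine interprets the
// written values as sRGB-encoded.
VkSurfaceFormatKHR pickSurfaceFormat(const std::vector<VkSurfaceFormatKHR> &available)
{
    for (const VkSurfaceFormatKHR &f : available)
    {
        if (f.format == VK_FORMAT_B8G8R8A8_SRGB &&
            f.colorSpace == VK_COLOR_SPACE_SRGB_NONLINEAR_KHR)
            return f;
    }
    return available[0]; // fall back to whatever the driver lists first
}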
References
Linear Vs Non-linear RGB: Great answer from Dan Hulme
Vulkan color space: Vulkan related info
Normal mapping 1 & Normal mapping 2

Convert SRGB texture to linear in OpenGL

I am currently trying to properly implement gamma correction in my renderer. I have set my framebuffer to be SRGB with glEnable(GL_FRAMEBUFFER_SRGB) and now I am left with importing the SRGB textures properly. I know three approaches to do this:
Convert the value in the shader: vec3 realColor = pow(sampledColor, vec3(2.2));
Make OpenGL do it for me: glTexImage2D(..., ...,GL_SRGB, ..., ..., ..., GL_RGB, ..., ...);
Convert the values directly:
for (GLubyte* pixel = image; pixel < image + size; ++pixel)
    *pixel = GLubyte(pow(*pixel, 2.2f) + 0.5f);
Now I'm trying to use the third approach, but it doesn't work.
It is super slow (I know it has to loop through all the pixels but still).
It makes everything look completely wrong (see image below).
Here are some images.
No gamma correction:
Method 2 (correction when sampling in the fragment shader)
Something weird when trying method 3
So now my question is: what's wrong with method 3? It looks completely different from the correct result (assuming that method 2 is correct, which I think it is).
I have set my framebuffer to be SRGB with glEnable(GL_FRAMEBUFFER_SRGB);
That doesn't set your framebuffer to an sRGB format - it only enables sRGB conversion if the framebuffer is already using an sRGB format. The only effect of the GL_FRAMEBUFFER_SRGB enable state is to turn sRGB conversion on or off for framebuffers which already have an sRGB format. You still have to specifically request your window's default framebuffer to be sRGB-capable (or you might be lucky and get one without asking for it, but that differs greatly between implementations and platforms), or you have to create an sRGB texture or render target if you render to an FBO.
Convert the values directly:
for (GLubyte* pixel = image; pixel < image + size; ++pixel)
*pixel = GLubyte(pow(*pixel, 2.2f) + 0.5f);
First of all pow(x,2.2) is not the correct formula for sRGB - the real one uses a small linear segment near 0 and the power of 2.4 for the rest - using a power of 2.2 is just some further approximation.
However, the bigger problem with this approach is that GLubyte is an 8-bit unsigned integer type with the range [0,255], and doing a pow(...,2.2) on that yields a value in [0,196964.7]; when converted back to GLubyte, the higher bits are ignored (basically a modulo 256), so you will get really useless results. Conceptually, you need 255.0 * pow(x/255.0, 2.2), which could of course be further simplified.
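To make the scaling explicit, here is a hedged sketch of that conceptual fix (unsigned char stands in for GLubyte); note that writing the result back into 8 bits still crushes the dark end of the range, which is the precision problem described next.

#include <cmath>
#include <cstddef>

// Normalize to [0,1] before the pow, then rescale back to [0,255].
void linearizeInPlace(unsigned char *image, std::size_t size)
{
    for (unsigned char *pixel = image; pixel != image + size; ++pixel)
    {
        float linear = std::pow(*pixel / 255.0f, 2.2f);              // linear value in [0,1]
        *pixel = static_cast<unsigned char>(linear * 255.0f + 0.5f); // lossy 8-bit round-trip
    }
}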
The big problem here is that by doing this conversion, you basically lose a lot of precision due to the non-linear distortion of your value range.
If you do such a conversion beforehand, you would have to use higher-precision textures to store the linearized color values (like 16-bit half float per channel); keeping the data as 8-bit UNORM is a complete disaster - and that is also why GPUs do the conversion directly when accessing the texture, so that you don't have to blow up the memory footprint of your textures by a factor of 2.
So I really doubt that your approach 3 would be "importing the SRGB textures properly". It will just destroy any fidelity even if done right. Approaches 1 and 2 do not have that problem, but approach 1 is just silly considering that the hardware will do that for you for free, so I really wonder why you consider 1 and 3 at all.

Wrong blending in OpenGL on small alpha value

I draw a lot of white triangles from a texture. But when they are drawn over a yellow circle, the pixels which contain a small alpha value (but not equal to 0) are blended wrongly, and I get some darker pixels on screen (see the zoomed-in screenshot). What could be the problem?
On a blue background everything is OK.
As @tklausi pointed out in the comments, this problem is related to texture interpolation in combination with traditional alpha blending. At the transition from values with high alpha to "background" with alpha = 0, you will get some interpolation results where alpha is > 0 and RGB is mixed with your "background" color.
@tklausi's solution was to change the RGB values of the background to white. But this will result in the same issue as before: if your actual image has dark colors, you will then see bright artifacts around them.
The correct solution would be to repeat the RGB color of the actual border pixels, so that the interpolation will always result in the same color, just with a lower alpha value.
However, there is a much better solution: premultiplied alpha.
Instead of storing (R,G,B,a) in the texture per pixel, you store (aR,aG,aB,a). When blending, you don't use a*source + (1-a) * background, but just source + (1-a)*background. The difference is that you now have a "neutral element" (0,0,0,0) and interpolation towards that will not pose any issue. It works nicely with filtering, and is also good for mipmapping and other techniques.
In general, I would recommend always using premultiplied alpha instead of the "traditional" kind. The premultiplication can be applied directly in the image file, or you can do it at texture upload; either way it incurs no runtime cost at all.
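A sketch of the upload-time variant, assuming 8-bit RGBA pixel data; the blend function shown is the standard one for premultiplied alpha.

#include <cstddef>

// Premultiply each pixel once, before handing the data to glTexImage2D.
void premultiplyAlpha(unsigned char *rgba, std::size_t pixelCount)
{
    for (std::size_t i = 0; i < pixelCount; ++i)
    {
        const unsigned int a = rgba[4 * i + 3];
        rgba[4 * i + 0] = static_cast<unsigned char>(rgba[4 * i + 0] * a / 255);
        rgba[4 * i + 1] = static_cast<unsigned char>(rgba[4 * i + 1] * a / 255);
        rgba[4 * i + 2] = static_cast<unsigned char>(rgba[4 * i + 2] * a / 255);
    }
}

// At draw time, blend "source + (1 - a) * background":
// glEnable(GL_BLEND);
// glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);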
More information about premultiplied alpha can be found in this MSDN blog article or over here at NVIDIA.

GL_FRAMEBUFFER_SRGB_EXT banding problems (gamma correction)

Consider the following code. imageDataf is a float*. In fact, as the code shows, it consists of float4 values created by a ray tracer. Of course, the color values are in linear space and I need them gamma corrected for output on screen.
So what I can do is a simple for loop with a gamma correction of 2.2 (see the for loop). Also, I can use GL_FRAMEBUFFER_SRGB_EXT, which works almost correctly but has "banding" problems.
Left is using GL_FRAMEBUFFER_SRGB_EXT, right is manual gamma correction. The right picture looks perfect. It may be hard to spot the difference on some monitors. Does anyone have a clue how to fix this problem? I would like to do the gamma correction for "free", as the CPU version makes the GUI a bit laggy. Note that the actual ray tracing is done in another thread using the GPU (OptiX), so in fact it is about as fast in terms of rendering performance.
GLboolean sRGB = GL_FALSE;
glGetBooleanv(GL_FRAMEBUFFER_SRGB_CAPABLE_EXT, &sRGB);
if (sRGB) {
    //glEnable(GL_FRAMEBUFFER_SRGB_EXT);
}

for (int i = 0; i < 768*768*4; i++)
{
    imageDataf[i] = (float)powf(imageDataf[i], 1.0f/2.2f);
}

glPixelStorei(GL_UNPACK_ALIGNMENT, 8);
glDrawPixels(static_cast<GLsizei>(buffer_width), static_cast<GLsizei>(buffer_height),
             GL_RGBA, GL_FLOAT, (GLvoid*)imageDataf);
//glDisable(GL_FRAMEBUFFER_SRGB_EXT);
When GL_FRAMEBUFFER_SRGB is enabled, this means that OpenGL will assume that the colors for a fragment are in a linear colorspace. Therefore, when it writes them to an sRGB-format image, it will convert them internally from linear to sRGB. Except... your pixels are not linear. You already converted them to a non-linear colorspace.
However, I'll assume that you simply forgot an if statement in there. I'll assume that if the framebuffer is sRGB capable, you skip the loop and upload the data directly. So instead, I'll explain why you're getting banding.
You're getting banding because the OpenGL operation you asked for does the following. For each color you specify:
1. Clamp the floats to the [0, 1] range.
2. Convert the floats to unsigned, normalized, 8-bit integers.
3. Generate a fragment with that unsigned, normalized, 8-bit color.
4. Convert the unsigned, normalized, 8-bit fragment color from linear RGB space to sRGB space and store it.
Steps 1-3 all come from the use of glDrawPixels. Your problem is step 2. You want to keep your floating-point values as floats. Yet you insist on using the fixed-function pipeline (ie: glDrawPixels), which forces a conversion from float to unsigned normalized integers.
If you uploaded your data to a float texture and used a proper fragment shader to render this texture (even just a simple gl_FragColor = texture(tex, texCoord); shader), you'd be fine. The shader pipeline uses floating-point math, not integer math. So no such conversion would occur.
In short: stop using glDrawPixels.
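A hedged sketch of the texture path, keeping the ray-traced buffer in floats all the way to the sRGB conversion; the fullscreen-quad draw and the trivial shader are left out, and the variable names come from the question.

GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Full float precision; no quantization to 8 bits happens here.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F,
             static_cast<GLsizei>(buffer_width), static_cast<GLsizei>(buffer_height),
             0, GL_RGBA, GL_FLOAT, imageDataf);
// Then draw a fullscreen quad sampling this texture with GL_FRAMEBUFFER_SRGB
// enabled; the linear-to-sRGB conversion happens after the shader, in floats.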

How to render grayscale texture without using fragment shaders? (OpenGL)

Is it possible to draw an RGB texture as grayscale without using fragment shaders, using only fixed-pipeline OpenGL?
Otherwise I'd have to create two versions of texture, one in color and one in black and white.
I don't know how to do this with an RGB texture and the fixed function pipeline.
If you create the texture from RGB source data but specify the internal format as GL_LUMINANCE, OpenGL will convert the color data into greyscale for you. Use the standard white material and MODULATE mode.
Hope this helps.
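A sketch of that upload, assuming 8-bit RGB source data in `rgbPixels` with dimensions `width` x `height` (these names are placeholders); GL_LUMINANCE is a fixed-function-era format, so this needs a compatibility context.

// RGB source data with a GL_LUMINANCE internal format: the pixel transfer
// reduces the color channels to a single luminance channel, as described above.
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE,
             width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, rgbPixels);
// MODULATE with a white base color leaves the luminance texture unmodified.
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glColor3f(1.0f, 1.0f, 1.0f);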
No. Texture environment combiners are not capable of performing a dot product without doing the scale/bias operation. That is, it always pretends that [0, 1] values are encoded as [-1, 1] values. Since you can't turn that off, you can't do a proper dot product.