GL_FRAMEBUFFER_SRGB_EXT banding problems (gamma correction)

Consider the following code. imageDataf is a float*; as the code shows, it consists of float4 values produced by a ray tracer. The color values are in linear space, and I need them gamma corrected for output on screen.
So what I can do is a simple for loop with a gamma correction of 2.2 (see the for loop below). Alternatively, I can use GL_FRAMEBUFFER_SRGB_EXT, which works almost correctly but has "banding" problems.
The left image uses GL_FRAMEBUFFER_SRGB_EXT; the right uses manual gamma correction. The right picture looks perfect, though the difference may be hard to spot on some monitors. Does anyone have a clue how to fix this problem? I would like to get gamma correction for "free", as the CPU version makes the GUI a bit laggy. Note that the actual ray tracing is done in another thread on the GPU (OptiX), so rendering performance is about the same either way.
GLboolean sRGB = GL_FALSE;
glGetBooleanv(GL_FRAMEBUFFER_SRGB_CAPABLE_EXT, &sRGB);
if (sRGB) {
    //glEnable(GL_FRAMEBUFFER_SRGB_EXT); // GPU path: shows banding
}

// CPU path: manual gamma correction of the linear float data
for (int i = 0; i < 768 * 768 * 4; i++)
{
    imageDataf[i] = powf(imageDataf[i], 1.0f / 2.2f);
}

glPixelStorei(GL_UNPACK_ALIGNMENT, 8);
glDrawPixels(static_cast<GLsizei>(buffer_width), static_cast<GLsizei>(buffer_height),
             GL_RGBA, GL_FLOAT, (GLvoid*)imageDataf);
//glDisable(GL_FRAMEBUFFER_SRGB_EXT);

When GL_FRAMEBUFFER_SRGB is enabled, this means that OpenGL will assume that the colors for a fragment are in a linear colorspace. Therefore, when it writes them to an sRGB-format image, it will convert them internally from linear to sRGB. Except... your pixels are not linear. You already converted them to a non-linear colorspace.
However, I'll assume that you simply forgot an if statement in there. I'll assume that if the framebuffer is sRGB capable, you skip the loop and upload the data directly. So instead, I'll explain why you're getting banding.
You're getting banding because the OpenGL operation you asked for does the following. For each color you specify:
Clamp the floats to the [0, 1] range.
Convert the floats to unsigned, normalized, 8-bit integers.
Generate a fragment with that unsigned, normalized, 8-bit color.
Convert the unsigned, normalized, 8-bit fragment color from linear RGB space to sRGB space and store it.
Steps 1-3 all come from the use of glDrawPixels. Your problem is step 2. You want to keep your floating-point values as floats, yet you insist on using the fixed-function pipeline (i.e. glDrawPixels), which forces a conversion from float to unsigned normalized integers.
If you uploaded your data to a float texture and used a proper fragment shader to render this texture (even just a simple gl_FragColor = texture(tex, texCoord); shader), you'd be fine. The shader pipeline uses floating-point math, not integer math. So no such conversion would occur.
In short: stop using glDrawPixels.
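A minimal sketch of that float-texture path (the setup below is illustrative, not from the question; the standard fullscreen-quad boilerplate is omitted):
// Upload the ray tracer's float output as a real float texture --
// no conversion to normalized 8-bit integers happens here.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, buffer_width, buffer_height,
             0, GL_RGBA, GL_FLOAT, imageDataf);

glEnable(GL_FRAMEBUFFER_SRGB_EXT);
// ... draw a fullscreen quad sampling 'tex' with a trivial fragment shader:
//     uniform sampler2D tex;
//     in vec2 texCoord;
//     out vec4 fragColor;
//     void main() { fragColor = texture(tex, texCoord); }
// The color stays floating-point all the way to the framebuffer, so the
// linear-to-sRGB encoding is applied to the full-precision value.
glDisable(GL_FRAMEBUFFER_SRGB_EXT);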

How can I display an image in OpenGL using the system color profile?

I'm loading a texture using OpenGL like this:
glTexImage2D(GL_TEXTURE_2D,
             0,
             GL_RGBA,
             texture.width,
             texture.height,
             0,
             GL_RGBA,
             GL_UNSIGNED_BYTE,
             texture.pixels.data());
The issue is that the color of the image looks different from the one I see when I open the file on the system image viewer.
In the screenshot you can see that the yellow on the face displayed in the system image viewer has the color #FEDE57, but the one displayed in the OpenGL window is #FEE262.
Is there any flag or format I could use to match the same color calibration?
Displaying this same image as a Vulkan texture looks fine, so I can rule out an issue in how I load the image data.
In the end it seems like the framebuffer in OpenGL doesn't get color corrected, so you have to tell the OS to do it for you:
#include <Cocoa/Cocoa.h>
#include <SDL_syswm.h> // for SDL_SysWMinfo / SDL_GetWindowWMInfo

void prepareNativeWindow(SDL_Window *sdlWindow)
{
    SDL_SysWMinfo wmi;
    SDL_VERSION(&wmi.version);
    SDL_GetWindowWMInfo(sdlWindow, &wmi);

    NSWindow *win = wmi.info.cocoa.window;
    // Tell macOS the window's contents are sRGB so the OS color-matches them.
    [win setColorSpace:[NSColorSpace sRGBColorSpace]];
}
I found this solution here https://github.com/google/filament/blob/main/libs/filamentapp/src/NativeWindowHelperCocoa.mm
@Tokenyet and @t.niese are pretty much correct.
You need to raise your final colour's RGB values to the power of approximately 1.0/2.2, something along the lines of:
fragColor.rgb = pow(fragColor.rgb, vec3(1.0/gamma)); // gamma: float = 2.2
Note: this should be the final/last statement in the fragment shader. Do all your lighting and colour calculations before this, or else the result will be weird because you will be mixing linear and non-linear lighting (calculations).
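For instance, a minimal fragment shader sketch (the texture and variable names here are illustrative):
#version 330 core
in vec2 texCoord;
out vec4 FragColor;
uniform sampler2D albedoTex; // assumed to hold linear color data

void main()
{
    // All lighting / colour calculations happen in linear space first.
    vec3 color = texture(albedoTex, texCoord).rgb;
    // ... lighting calculations on 'color' ...

    // Gamma correction is the very last statement.
    float gamma = 2.2;
    FragColor = vec4(pow(color, vec3(1.0 / gamma)), 1.0);
}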
The reason you need to do gamma correction is that the human eye perceives brightness differently from what the computer outputs.
If the light intensity (lux) doubles, your eye does not see it as twice as bright: perception follows roughly a power-law relationship. Displays account for this by applying a gamma of about 2.2 when converting values to light, so to present linear colour values correctly you apply the inverse, ^(1.0/2.2) (which is what you are looking for here).
For more info: Look at this great tutorial on gamma correction!
Note 2: This is an approximation. Each computer, program and API has its own gamma correction method (or none at all), so your system image viewer may apply different gamma correction than OpenGL does.
Note 3: By the way, if this does not work, there are manual ways to adjust the colour in the fragment shader. For reference:
#FEDE57 = RGB(254, 222, 87)
which converted into OpenGL colour coordinates is,
(254, 222, 87) / 255 = vec3(0.9961, 0.8706, 0.3412)
Both images and displays have a gamma value.
If GL_FRAMEBUFFER_SRGB is not enabled then:
the system assumes that the color written by the fragment shader is in whatever colorspace the image it is being written to is. Therefore, no colorspace correction is performed.
( khronos: Framebuffer - Colorspace )
So in that case you need to figure out the gamma of the image you read in and the gamma of the output medium, and do the corresponding conversion between the two.
Determining the gamma of the output medium, however, is not always easy.
Therefore it is preferable to enable GL_FRAMEBUFFER_SRGB.
If GL_FRAMEBUFFER_SRGB is enabled however, then if the destination image is in the sRGB colorspace […], then it will assume the shader's output is in the linear RGB colorspace. It will therefore convert the output from linear RGB to sRGB.
( khronos: Framebuffer - Colorspace )
So in that case you only need to ensure that the colors you set in the fragment shader don't have gamma correction applied but are linear.
So what you normally do is get the gamma information of the image, usually via the library you use to read the image.
If the gamma of the image you read is gamma, you can calculate the value to invert it with inverseGamma = 1.0 / gamma, and then apply pixelColor.channel = std::pow(pixelColor.channel, inverseGamma) to each color channel of each pixel to make the color data linear.
You then use these values in linear color space as your texture data.
You could also use something like GL_SRGB8 for the texture, but then you would need to ensure the pixel values you read from the image are in the sRGB colorspace, which roughly means first linearizing them (with the image's own gamma) and then applying the sRGB encoding (approximately a power of 1/2.2).
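For illustration, a hedged sketch of that sRGB-texture route, reusing the question's identifiers and assuming the image file is already sRGB-encoded (as most 8-bit images are):
// Tag the 8-bit pixels as sRGB; the GPU linearizes each texel when the
// shader samples it, so no CPU-side conversion loop is needed.
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8,
             texture.width, texture.height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, texture.pixels.data());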

Convert SRGB texture to linear in OpenGL

I am currently trying to properly implement gamma correction in my renderer. I have set my framebuffer to be SRGB with glEnable(GL_FRAMEBUFFER_SRGB) and now I am left with importing the SRGB textures properly. I know three approaches to do this:
Convert value in shader: vec3 realColor = pow(sampledColor, vec3(2.2));
Make OpenGL do it for me: glTexImage2D(..., ...,GL_SRGB, ..., ..., ..., GL_RGB, ..., ...);
Convert the values directly:
for (GLubyte* pixel = image; pixel < image + size; ++pixel)
*pixel = GLubyte(pow(*pixel, 2.2f) + 0.5f);
Now I'm trying to use the third approach, but it doesn't work:
It is super slow (I know it has to loop through all the pixels, but still).
It makes everything look completely wrong (see the images below).
Here are some images.
No gamma correction:
Method 2 (correction when sampling in the fragment shader):
Something weird when trying method 3:
So now my question is: what's wrong with method 3? It looks completely different from the correct result (assuming method 2 is correct, which I think it is).
I have set my framebuffer to be SRGB with glEnable(GL_FRAMEBUFFER_SRGB);
That doesn't set your framebuffer to an sRGB format - it only enables sRGB conversion if the framebuffer is using an sRGB format already; the only use of the GL_FRAMEBUFFER_SRGB enable state is to allow actually disabling sRGB conversion on framebuffers which have an sRGB format. You still have to specifically request your window's default framebuffer to be sRGB-capable (or you might be lucky and get one without asking, but that will differ greatly between implementations and platforms), or you have to create an sRGB texture or render target if you render to an FBO.
Convert the values directly:
for (GLubyte* pixel = image; pixel < image + size; ++pixel)
*pixel = GLubyte(pow(*pixel, 2.2f) + 0.5f);
First of all pow(x,2.2) is not the correct formula for sRGB - the real one uses a small linear segment near 0 and the power of 2.4 for the rest - using a power of 2.2 is just some further approximation.
However, the bigger problem with this approach is that GLubyte is an 8 bit unsigned integer type with the range [0,255], and doing a pow(...,2.2) on that yields a value in [0,196964.7], which when converted back to GLubyte will ignore the higher bits and basically compute the value modulo 256, so you will get really useless results. Conceptually, you need 255.0 * pow(x/255.0, 2.2), which could of course be further simplified.
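For illustration, here is a corrected sketch of that loop, doing the math in floats and using the exact sRGB curve mentioned above (it still suffers from the precision loss discussed next):
#include <cmath>

// Exact sRGB-to-linear transfer: linear segment near 0, 2.4 power above.
static float srgbToLinear(float c) // c in [0, 1]
{
    return (c <= 0.04045f) ? c / 12.92f
                           : std::pow((c + 0.055f) / 1.055f, 2.4f);
}

// ...
for (GLubyte* pixel = image; pixel < image + size; ++pixel)
{
    float linear = srgbToLinear(*pixel / 255.0f); // work in [0,1] floats
    *pixel = GLubyte(linear * 255.0f + 0.5f);     // back to 8 bit (lossy!)
}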
The big problem here is that by doing this conversion you basically lose a lot of precision due to the non-linear distortion of your value range.
If you do such a conversion beforehand, you would have to use higher-precision textures to store the linearized color values (like 16 bit half float per channel); just keeping the data as 8 bit UNORM is a complete disaster - and that is also why GPUs do the conversion directly when accessing the texture, so that you don't have to blow up the memory footprint of your textures by a factor of 2.
So I really doubt that your approach 3 would be "importing the SRGB textures properly". It will just destroy fidelity even if done right. Approaches 1 and 2 do not have that problem, but approach 1 is just silly considering that with approach 2 the hardware will do it for you for free, so I really wonder why you even consider 1 and 3 at all.

Forward rendering multiple rendering passes

I'm trying to implement PBR in my simple OpenGL renderer using multiple lighting passes, one pass per light, as follows:
1- First pass = depth
2- Second pass = ambient
3- Passes [3 .. n] = one pass for each light in the scene.
I'm using the blending function glBlendFunc(GL_ONE, GL_ONE) for passes [3..n], and I'm doing gamma correction at the end of each fragment shader.
But I still have a problem with the output image: it just looks noisy, especially when I'm using texture maps.
Is there anything wrong with those steps or is there any improvement to this process?
So basically, what you're calculating is
f(x) = a^gamma + b^gamma + ...
However, what you actually want (as noted by @NicolBolas in the comments already) is
g(x) = (a + b + ...)^gamma
Now f(x) and g(x) will only equal each other in rather useless cases like gamma = 1. You simply cannot additively decompose a nonlinear function like a power that way. (For example, with gamma = 2.2 and a = b = 0.5: f = 2 * 0.5^2.2 ≈ 0.44, while g = (0.5 + 0.5)^2.2 = 1.0.)
The correct solution is to blend everything together in linear space and do the gamma correction afterwards, on the total sum of the linear contributions of each light source.
However, implementing this leads to a couple of technical issues. First and foremost, the standard 8 bits per channel are just not precise enough to store linear color values. Using such a format for the accumulation step will result in strongly visible color banding artifacts. There are two approaches to solve this:
Use a higher bit-per-channel format for the accumulation framebuffer. You will need a separate gamma correction pass, so you need to set up render-to-texture via an FBO. GL_RGBA16F appears to be a particularly good format for this. Since you use a PBR lighting model, you can then also work with color values outside [0,1] and, instead of a simple gamma correction, apply proper tone mapping in the final pass. Note that while you may not need an alpha channel, you should still use an RGBA format here: the RGB formats are simply not required color buffer formats by the GL spec, so they may not be supported universally.
Store the data still in 8 bit-per-component format, gamma corrected. The key here is that the blending must still be done in linear space, so the destination framebuffer color values must be re-linearized prior to blending. This can be achieved by using a framebuffer with GL_SRGB8_ALPHA8 format and enabling GL_FRAMEBUFFER_SRGB. In that case, the GPU will automatically apply the standard sRGB gamma correction when writing the fragment color to the framebuffer (which currently your fragment shader does), but it will also apply the sRGB linearization when accessing those values, including for blending. The OpenGL 4.6 core profile spec states in section "17.3.6.1 Blend equation":
If FRAMEBUFFER_SRGB is enabled and the value of FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING for the framebuffer attachment corresponding to the destination buffer is SRGB (see section 9.2.3), the R, G, and B destination color values (after conversion from fixed-point to floating-point) are considered to be encoded for the sRGB color space and hence must be linearized prior to their use in blending. Each R, G, and B component is converted in the same fashion described for sRGB texture components in section 8.24.
Approach 1 will be the much more general approach, while approach 2 has a couple of drawbacks:
the linearization/delinearization is done multiple times, potentially wasting some GPU processing power
due to still using only 8 bit integer, the overall quality will be lower. After each blending step, the results are rounded to the next representable number, so you will get much more quantization noise.
you are still limited to color values in [0,1] and cannot (easily) do more interesting tone mapping and HDR rendering effects
However, approach 2 also has advantages:
you do not need a separate final gamma correction pass
if your platform / window system supports sRGB framebuffers, you can directly create an sRGB pixel format/visual for your window and don't need any render-to-texture step at all. Basically, requesting an sRGB framebuffer and enabling GL_FRAMEBUFFER_SRGB will be enough to make this work.
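As a minimal sketch of the approach-1 setup (sizes, names and the tone-mapping choice are placeholders, not from the question):
// HDR accumulation target: RGBA16F color attachment on an FBO.
GLuint fbo, colorTex;
glGenFramebuffers(1, &fbo);
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
             GL_RGBA, GL_HALF_FLOAT, nullptr);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
// (attach a depth buffer for the depth pass as usual)

// 1. Render the ambient pass and all light passes into 'fbo' with
//    glBlendFunc(GL_ONE, GL_ONE), with NO gamma correction in those shaders.
// 2. Bind the default framebuffer and draw a fullscreen quad that samples
//    colorTex, tone-maps, and applies pow(c, vec3(1.0/2.2)) as the last step.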

Comparing two textures in OpenGL

I'm new to OpenGL and I'm looking for a way to compare two textures to understand how similar they are to each other. I know how to do this with two bitmap images, but I really need a method that works on two textures.
The question is: is there any way to compare two textures the way we compare two images, like comparing two images pixel by pixel?
Actually, what you seem to be asking for is not possible, or at least not as easy as it would seem, to accomplish on the GPU. The problem is that the GPU is designed to perform as many small tasks as possible in the shortest amount of time; serially iterating through an array of data such as pixels does not fit that model, so reducing a texture to a single integer or floating-point value is a bit hard.
There is one very interesting procedure you may try, but I cannot say whether the result will be appropriate for you:
You may first create a new texture that is the difference between the two input textures and then keep downsampling the result until you reach a 1x1 pixel texture, then read the value of that pixel to see how different the images are.
To achieve this it would be best to use a fixed-size target buffer which is POT (power of two), for instance 256x256. If you didn't use a fixed size, the result could vary a lot depending on the image sizes.
So in the first pass you would render the two textures into a third one (using an FBO - framebuffer object). The shader you would use is simply:
vec4 a = texture2D(iChannel0,uv);
vec4 b = texture2D(iChannel1,uv);
fragColor = abs(a-b);
So now you have a texture which represents the difference between the two images per pixel, per color component. If the two images are the same, the result will be a totally black picture.
Now you need to create a new FBO which is scaled by half in every dimension, which comes to 128x128 in this example. To draw into this buffer you need to use GL_NEAREST as the texture parameter so no interpolation is done on the texel fetching. Then for each new pixel sum the 4 nearest pixels of the source image:
vec2 originalTextCoord = varyingTextCoord;
vec2 textCoordRight = vec2(varyingTextCoord.x + 1.0/256.0, varyingTextCoord.y);
vec2 textCoordBottom = vec2(varyingTextCoord.x, varyingTextCoord.y + 1.0/256.0);
vec2 textCoordBottomRight = vec2(varyingTextCoord.x + 1.0/256.0, varyingTextCoord.y + 1.0/256.0);

fragColor = texture2D(iChannel0, originalTextCoord) +
            texture2D(iChannel0, textCoordRight) +
            texture2D(iChannel0, textCoordBottom) +
            texture2D(iChannel0, textCoordBottomRight);
The 256 value is the size of the source texture, so it should come in as a uniform so you can reuse the same shader at every level.
After this is drawn you need to drop down to 64, 32, 16... Then read the pixel back to the CPU and see the result.
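Reading that final 1x1 pixel back might look like this (a sketch, assuming the last 1x1 FBO is still bound):
float diff[4]; // per-channel sum/average of the image differential
glReadPixels(0, 0, 1, 1, GL_RGBA, GL_FLOAT, diff);
// The closer diff[0..2] are to 0, the more similar the two textures are.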
Now, unfortunately, this procedure may produce very unwanted results. Since the colors are simply summed together, this will overflow for all images which are not similar enough (resulting in a white pixel, or rather (1,1,1,0) for non-transparent output). This may be overcome by applying a scale in the first shader pass, dividing the output by a large enough value. Still, this might not be enough, and an average might need to be done in the second shader as well (multiply all the texture2D calls by .25).
In the end the result might still be a bit strange: you get 4 color components on the CPU which represent the sum or the average of an image differential. I guess you could sum them up and decide what you consider "much alike" or not. But if you want the result to be more meaningful, you might want to treat the whole pixel as a single 32-bit floating-point value (this is a bit tricky, but you may find answers around SO). That way you can compute the values without overflow and get quite exact results: write the floating value as if it were a color, starting with the first shader output and continuing for every other draw call (get the texel, convert it to float, sum it, convert it back to a vec4 and assign it as output). GL_NEAREST is essential here.
If not, then you may optimize the procedure: use GL_LINEAR instead of GL_NEAREST and simply keep redrawing the differential texture until it gets down to a single pixel (no need for the 4 coordinates). This should produce a nice pixel which represents an average of all the pixels in the differential texture, i.e. the average difference between pixels in the two images. This procedure should also be quite fast.
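Incidentally, this repeated GL_LINEAR halving is essentially what mipmap generation does, so as a shortcut (a sketch; diffTex stands in for the 256x256 difference texture) you could let the driver build the chain and read back the top level:
glBindTexture(GL_TEXTURE_2D, diffTex);
glGenerateMipmap(GL_TEXTURE_2D);   // averages the chain down to 1x1
float avg[4];
int topLevel = 8;                  // log2(256): the 1x1 mip level
glGetTexImage(GL_TEXTURE_2D, topLevel, GL_RGBA, GL_FLOAT, avg);
// avg now holds the mean per-channel difference over the whole image.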
Then, if you want a smarter algorithm, you may do some wonders when creating the differential texture. Simply subtracting the colors may not be the best approach. It would make more sense to blur one of the images and then compare it to the other image. This loses precision for very similar images, but for everything else it gives a much better result. For instance, you could say you are only interested when a pixel differs by more than 30% from the other (blurred) image, so you would discard and rescale that 30% for every component, such as: result.r = clamp(abs(a.r - b.r) - 30.0/100.0, 0.0, 1.0) / ((100.0 - 30.0)/100.0);
You can bind both textures to a shader and visit each pixel by drawing a quad or something like this.
// Equal pixels are marked green. Different pixels are shown in red.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    vec4 a = texture2D(iChannel0, uv);
    vec4 b = texture2D(iChannel1, uv);
    if (a != b)
        fragColor = vec4(1, 0, 0, 1);
    else
        fragColor = vec4(0, 1, 0, 1);
}
You can test the shader on Shadertoy.
Or you can also bind both textures to a compute shader and visit every pixel by iteration.
Strictly speaking, GLSL's != on vectors returns a single bool (true if any component differs), so a != b is already equivalent to the component-wise form
if (any(notEqual(a, b)))
Check the GLSL language spec for the details.

What exactly is a floating point texture?

I tried reading the OpenGL ARB_texture_float spec, but I still cannot get it into my head...
And how is floating point data related to just normal 8-bit per channel RGBA or RGB data from an image that I am loading into a texture?
I read a little bit about it; here is the short version.
Basically a floating point texture is a texture in which the data is of floating point type :)
That is, it is not clamped. So if you have 3.14f in your texture, you will read the same value in the shader.
You may create them with different numbers of channels. You can also create 16 or 32 bit textures depending on the format, e.g.
// create 32bit 4 component texture, each component has type float
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 16, 16, 0, GL_RGBA, GL_FLOAT, data);
where data could be like this:
float data[16*16];
for (int i = 0; i < 16*16; ++i)
    data[i] = sin(i*M_PI/180.0f); // whatever
Then in the shader you can read back exactly the same value (if you use a FLOAT32 texture), e.g.
uniform sampler2D myFloatTex;
float value = texture2D(myFloatTex, texcoord.xy).r; // take one channel of the vec4
If you were using a 16 bit format, say GL_RGBA16F, then whenever you read it in the shader there will be a conversion. To avoid this you may use the half4 type (note this is a Cg/HLSL type; in plain GLSL you simply receive floats):
half4 value = texture2D(my16BitTex, texcoord.xy);
So, basically, the difference between a normalized 8-bit texture and a floating point texture is that in the first case your values are brought into the [0..1] range and clamped, whereas in the latter you receive your values as-is (except for the 16<->32 bit conversion, see my example above).
Note that you'd probably want to use them with an FBO as a render target; in that case you need to know that not all of the formats may be attached as a render target - e.g. you cannot attach luminance and intensity formats.
Also, not all hardware supports filtering of floating point textures, so if you need filtering, check first that it is supported for your case.
Hope this helps.
FP textures have their own designated range of internal formats (GL_RGBA16F, GL_RGBA32F, etc.).
Regular textures store fixed-point data, so reading from them gives you values in the [0,1] range. FP textures, by contrast, give you values in the [-inf,+inf] range (not necessarily with higher precision).
In many cases (like HDR rendering) you can easily get by without FP textures, just by transforming the values to fit the [0,1] range. But there are cases, like deferred rendering, where you may want to store, for example, world-space coordinates without caring about their range.
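For example, a deferred-rendering position buffer might be allocated like this (a sketch; the names and the attachment slot are illustrative):
// World-space positions are unbounded, so store them in a float texture.
GLuint posTex;
glGenTextures(1, &posTex);
glBindTexture(GL_TEXTURE_2D, posTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
             GL_RGBA, GL_FLOAT, nullptr);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, posTex, 0); // assumes an FBO is bound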