WGL - ChoosePixelFormat and sRGB framebuffer - opengl

I have a legacy OpenGL application which sets a pixel format using ChoosePixelFormat instead of wglChoosePixelFormatARB or wglGetPixelFormatAttribivARB/wglGetPixelFormatAttribfvARB. ChoosePixelFormat doesn't allow setting the framebuffer color space explicitly via WGL_EXT_colorspace. Despite that, I have noticed that calling glEnable(GL_FRAMEBUFFER_SRGB) performs a color space conversion when rendering a test texture. How does this work? Does ChoosePixelFormat select an sRGB-capable pixel format by default?

You already answered your own question:
ChoosePixelFormat doesn't allow setting the framebuffer color space explicitly.
There is nothing in the rules for ChoosePixelFormat that forbids an implementation from returning a pixel format which supports sRGB encoding, but nothing requires it either. You can't rely on anything here. On a modern Windows system with a compositor ("Aero" or whatever Microsoft likes to call it), you will typically get sRGB support, but there is still no guarantee.
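If you want to see what you actually got, one quick check (a minimal sketch, assuming a current GL 3.0+ context on the pixel format that ChoosePixelFormat selected) is to query the color encoding of the default framebuffer:

    GLint enc = GL_LINEAR;
    // Make sure the default framebuffer is bound before querying it.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_BACK_LEFT,
                                          GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING, &enc);
    // GL_SRGB means writes are sRGB-encoded once GL_FRAMEBUFFER_SRGB is enabled;
    // GL_LINEAR means no encoding is applied.
    bool srgbCapable = (enc == GL_SRGB);

As the answers further down show, some drivers report GL_LINEAR for the default framebuffer even when sRGB conversion is clearly being applied, so treat the result with some suspicion.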

Related

sRGB Conversion OpenGL

Let's say I load a texture from a JPG file that has the sRGB color profile, and during loading there is no conversion to linear RGB. When a rect is displayed on screen I don't see any problems with the image; the colors look the same as when opening it in one of the editors.
The question: since there was no conversion, the RGB values remained in the sRGB range. How does the hardware know that no conversion is needed, or why was it not converted by the GPU? Basically, why did the XYZ -> sRGB conversion not happen?
I am very confused, since feeding sRGB data into something that will in the end convert it to sRGB again should change the colors, but it doesn't.
First of all, OpenGL does NOT do any color conversion by default.
Let's start with the smallest configuration, consisting of an Input Image, a Window Buffer (rendered by OpenGL) and a Monitor. Each might be defined in a different color space, but in the simplest case it will look like this:
[RGB image|sRGB] -> [OpenGL Window Buffer|sRGB] -> [Monitor|sRGB]
By default, monitors are configured to use an sRGB preset, even the ones supporting a wider color range, to avoid incorrect color output. Rendering into the OpenGL Window Buffer doesn't perform any color conversion by default, so if the input image was in the sRGB colorspace it will remain the same in the OpenGL Window Buffer. The OS compositor normally just copies the OpenGL Window Buffer onto the screen; no color conversion happens at this step either. So basically every step just passes the input image through, and you see the expected result if it was in the sRGB colorspace.
Now consider another scenario: you are applying the Input Image as a texture map onto a 3D object with lighting enabled. The lighting equations only make sense in a linear RGB colorspace, so without extra configuration OpenGL will take the non-linear sRGB image values, pass them into the shading equations unmodified, and write the result into the OpenGL Window Buffer, which is passed through to a Monitor configured for the sRGB colorspace. The result will look plausible, but it is physically incorrect.
To solve the problem of incorrect lighting, OpenGL introduced sRGB-awareness features, so that the user may state explicitly whether the Input Image is in the sRGB or linear RGB colorspace, and whether the color values output by the GLSL program should be implicitly converted from the linear RGB colorspace into the non-linear sRGB colorspace. So that:
[GLSL Texture Input|sRGB -> linear RGB implicit conversion] ->
-> [GLSL Lighting Equation|linear RGB] ->
-> [GLSL output|linear RGB -> sRGB implicit conversion]
The steps with implicit conversion are performed ONLY if OpenGL has been explicitly configured that way, by using the GL_SRGB8_ALPHA8 texture format and by enabling GL_FRAMEBUFFER_SRGB while rendering into an offscreen buffer or the Window Buffer. The whole concept can be tricky to understand and even trickier to implement.
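A minimal sketch of that configuration (width, height and pixels are assumed to come from your image loader; they are not defined here):

    // Upload the image with an sRGB internal format so that sampling it in
    // GLSL returns linearized values.
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // Ask OpenGL to encode linear shader output back to sRGB on write.
    // This only has an effect if the destination buffer has an sRGB-capable format.
    glEnable(GL_FRAMEBUFFER_SRGB);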
Note that in this context "linear RGB" actually means a linearized sRGB colorspace, since it is often left unstated what "linear RGB" actually means. For the color math itself (lighting and other computations) it only matters that the RGB values are linear on input and output in some linear colorspace, but when speaking about the implicit sRGB -> linear RGB and linear RGB -> sRGB conversions, these clearly rely on the conversion of RGB values defined by the sRGB and OpenGL specifications. In reality there are more linear RGB colorspaces which do not represent the sRGB colorspace.
Now consider that your Input Image is not in the sRGB colorspace but uses some other RGB color profile. OpenGL doesn't offer many texture formats beyond linear RGB and sRGB, so a proper conversion of such an image into the sRGB colorspace should be done by the image reader, or via a special GLSL program performing the colorspace conversion on the fly.
Now consider a Monitor configured with a non-sRGB profile like AdobeRGB. In this case, passing an sRGB image through to the OpenGL window will produce distorted colors. Letting Windows know that your monitor uses another color profile helps applications like Photoshop convert colors properly, but OpenGL knows nothing about the color profiles configured in Windows! It is the application that is responsible for applying the color profile information to perform a proper color transformation (via special GLSL programs or by other means). When working with an Input Image in a non-sRGB colorspace, the application also has the alternative of performing a non-sRGB -> sRGB -> another non-sRGB conversion, or of implementing a GLSL program that converts directly to the target colorspace without an sRGB proxy (or via an XYZ proxy), to avoid losing color precision in the intermediate conversions.
Supporting arbitrary color profiles in an OpenGL application not designed to be an image viewer might involve too much complexity. Some systems, however, define several standard color spaces for which the system compositor performs the implicit conversion - which is much more reliable than supporting arbitrary color profiles with their special lookup tables and formulas.
For instance, macOS defines the NSWindow::setColorSpace property, which allows an application to specify explicitly in which colorspace the Window Buffer is filled, so that the system itself performs the necessary conversion to the actual Monitor color profile. Android defines a similar interface for supporting the newer Display P3 color profile, which has an extended color range compared to the old sRGB color profile. This implies, however, that the OpenGL renderer actually knows how to output its result in this specific colorspace - which is another topic (there is also a set of extra OpenGL extensions helping developers in this direction)... So far I haven't heard about a similar API in Windows, but I might have missed something.

Effect of GL_SRGB8_ALPHA8 on texture interpolation?

OpenGL allows one to declare a texture as being in sRGB (as images typically are) by using GL_SRGB8_ALPHA8, which will cause OpenGL to convert the sRGB colors to linear RGB space when sampling the texture in GLSL. This is also known as "Gamma Correction".
I've also read that linear interpolation of textures will behave differently with GL_SRGB8_ALPHA8, as interpolation will supposedly happen in linear space as well. What effect, if any, does this have? Does this mean that one should always use GL_SRGB8_ALPHA8 for textures, rather than doing one's own sRGB -> linear conversion in GLSL?
As a side note, this is what the OpenGL 4.5 core profile specification has to say about this (quoting from section "8.24 sRGB Texture Color Conversion"):
Ideally, implementations should perform this color conversion on each sample prior to filtering but implementations are allowed to perform this conversion after filtering (though this post-filtering approach is inferior to converting from sRGB prior to filtering).
So the spec won't guarantee you the ideal behavior.
In fact, most images are in sRGB space. If you don't do any special processing when loading the image data into OpenGL, or in the shader while rendering, you'll get a "wrong" image - you're applying linear computations to non-linear data. Such an image appears darker than it should when you render it.
However, if you do convert to linear space, you should also convert the final rendered image back to sRGB space, because monitors usually have a 2.2 gamma curve applied (to stay compatible with CRT output, as it was before LCD screens).
So you either do it manually in the shader, or use the sRGB extensions, which are provided both for textures (to convert from sRGB to linear) and for framebuffers (to automatically convert back from linear to sRGB). To get a correct image you need both conversions applied.
Enabling gamma correction and doing it right gives a more natural and softer image. Check out this article for a more detailed explanation: https://learnopengl.com/#!Advanced-Lighting/Gamma-Correction
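If you choose the manual route, the per-channel sRGB transfer functions look like this; a small C++ sketch of the same math a shader-based conversion would use (the function names here are just illustrative):

    #include <cmath>

    // Decode a non-linear sRGB channel value in [0, 1] to linear.
    float srgbToLinear(float c)
    {
        return (c <= 0.04045f) ? c / 12.92f
                               : std::pow((c + 0.055f) / 1.055f, 2.4f);
    }

    // Encode a linear channel value in [0, 1] back to non-linear sRGB.
    float linearToSrgb(float c)
    {
        return (c <= 0.0031308f) ? c * 12.92f
                                 : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
    }

The sRGB extensions effectively apply these same curves for you: the decode when sampling an sRGB texture, the encode when writing to an sRGB framebuffer with GL_FRAMEBUFFER_SRGB enabled.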

OpenGL sRGB framebuffer oddity

I'm using GLFW3 to create a context, and I've noticed that the GLFW_SRGB_CAPABLE hint seems to have no effect. Regardless of what I set it to, I always get sRGB conversion when GL_FRAMEBUFFER_SRGB is enabled. My understanding is that when GL_FRAMEBUFFER_SRGB is enabled, you get sRGB conversion only if the framebuffer has an sRGB format. To add to the confusion, if I check GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING I get GL_LINEAR regardless of what I set GLFW_SRGB_CAPABLE to. This doesn't appear to be an issue with GLFW: I created a window and context manually and made sure to set WGL_FRAMEBUFFER_SRGB_CAPABLE_ARB to true.
I'm using a Nvidia GTX 760 with the 340.76 drivers. I'm checking the format like this:
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_FRONT_LEFT, GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING, &enc);
This should return GL_SRGB, should it not? If it applies sRGB correction regardless of what WGL_FRAMEBUFFER_SRGB_CAPABLE_ARB is set to, then isn't Nvidia's driver broken? Has nobody noticed this until now?
It seems that this is only an issue with the default framebuffer, therefore it must be a bug in Nvidia's WGL implementation. I've pointed it out to them, so hopefully it will be fixed.
With GLX (Linux), I experience the same behaviour: it reports linear despite clearly rendering as sRGB. One way to verify that it is in fact working is to use an sRGB texture with texel value 1, render this texture to your sRGB framebuffer, and see that it shows a dark-grey square. (For comparison, see how it looks when the texture is not an sRGB texture - still with texel value 1 - which should give a lighter-grey square.)
You can see this example: https://github.com/julienaubert/glsrgb
Interestingly, with an OpenGL ES context, the (almost) same code does not render correctly.
There is a topic on Nvidia's OpenGL developer forum:
https://devtalk.nvidia.com/default/topic/776591/opengl/gl_framebuffer_srgb-functions-incorrectly/
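For reference, a minimal sketch of the dark-grey-square check described above, assuming a context with an sRGB-capable default framebuffer is already current (the sizes and filtering choices here are arbitrary):

    // A 1x1 texture whose single texel has value 1 in every channel.
    unsigned char texel[4] = { 1, 1, 1, 255 };

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // Use GL_SRGB8_ALPHA8 here; switch to GL_RGBA8 for the comparison case.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, 1, 1, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, texel);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glEnable(GL_FRAMEBUFFER_SRGB);
    // ...draw a screen-filling quad sampling this texture...
    // With the sRGB texture the quad comes out almost black (dark grey);
    // with GL_RGBA8 it comes out as a noticeably lighter grey.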

Does Direct3D11 include color space conversion?

I'm investigating Direct3D11 for displaying video output; in particular, I'm trying to figure out if there's a way to give a YUV surface to Direct3D11 and have it automatically (i.e. in hardware) convert it to RGB and present it as such. The documentation of DXGI_MODE_DESC states:
Because of the relaxed render target creation rules that Direct3D 11 has for back buffers, applications can create a DXGI_FORMAT_B8G8R8A8_UNORM_SRGB render target view from a DXGI_FORMAT_B8G8R8A8_UNORM swap chain so they can use automatic color space conversion when they render the swap chain.
What does this "automatic color space conversion" refer to? Is there anything in Direct3D11 that does what I'm looking for or must this be performed either pre-render or through a shader?
When creating a texture in DX11 you can choose from a number of formats that tell shaders how the data is laid out. These formats belong to the DXGI_FORMAT enum. They are basically various configurations of RGBA channels, allowing you to specify, for example, B8G8R8A8 or R16G16B16A16. There is, however, no option for a YUV format.
The best thing you can do is pass your YUV data into the shader pipeline, "pretending" it is RGB, and then perform the conversion to real RGB in the pixel shader. This solution should be sufficiently efficient because the conversion is executed on the GPU, in parallel for every visible pixel of your texture.
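As an illustration of the arithmetic such a pixel shader would perform, here is one pixel of the conversion written out in C++, assuming BT.601 limited-range ("video range") YUV; other standards such as BT.709 or full-range YUV use different coefficients:

    #include <algorithm>

    // The YUV -> RGB arithmetic a pixel shader would evaluate per fragment.
    void yuvToRgb(unsigned char y, unsigned char u, unsigned char v,
                  float& r, float& g, float& b)
    {
        const float yf = 1.164f * (y - 16);
        const float uf = static_cast<float>(u) - 128.0f;
        const float vf = static_cast<float>(v) - 128.0f;

        r = std::clamp((yf + 1.596f * vf) / 255.0f, 0.0f, 1.0f);
        g = std::clamp((yf - 0.392f * uf - 0.813f * vf) / 255.0f, 0.0f, 1.0f);
        b = std::clamp((yf + 2.017f * uf) / 255.0f, 0.0f, 1.0f);
    }

In the real pipeline you would sample the Y and the planar or interleaved UV data from one or more textures and evaluate exactly this expression per fragment.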

sRGB FBO render to texture

In my renderer, I produce an anti-aliased scene on a multisampled FBO, which is blitted to an FBO whose color attachment is a texture. The texture is then read during rendering to the framebuffer.
I'd like to update it so that I get gamma-correct results. The benefit of using an sRGB framebuffer is that it gives me somewhat better color precision by storing nonlinear sRGB values directly in the framebuffer.
What I'm not sure about is what changes should I be making to get this, and what is being changed by the different settings.
It looks like the ARB_framebuffer_sRGB extension only deals with reading and blending operations on sRGB framebuffers. In my situation I'll need a texture with an sRGB internal format, which means I'd be using the EXT_texture_sRGB extension... using a linear texture format would disable the sRGB translation.
Edit: But I just saw this:
3) Should the ability to support sRGB framebuffer update and blending be an attribute of the framebuffer?
RESOLVED: Yes. It should be a capability of some pixel formats (mostly likely just RGB8 and RGBA8) that says sRGB blending can be enabled.
This allows an implementation to simply mark the existing RGB8 and RGBA8 pixel formats as supporting sRGB blending and then just provide the functionality for sRGB update and blending for such formats.
Now I'm not so sure what to specify for my texture's pixel format.
Okay, and what about renderbuffers? The ARB_framebuffer_sRGB doc does not mention anything about renderbuffers. Is it possible to use glRenderbufferStorageMultisample with an sRGB format, so I can get sRGB storage with blending enabled?
Also, what is the difference between GL_SRGB_ALPHA and GL_SRGB8_ALPHA8 when specifying the internal format for glTexImage2D?
What I'm not sure about is what changes should I be making to get this
That's because your question seems unsure about what you're trying to do.
The key to all of this stuff is to at all times know what your input data is and what your output data is.
Your first step is to know what is stored in each of your textures. Does a particular texture store linear data or data in the sRGB colorspace? If it stores linear data, then use one of the linear image formats. If it stores sRGB colorspace data, then use one of the sRGB image formats.
This ensures that you are fetching the data you want in your shaders. When it comes time to write/blend them to the framebuffer, you now need to decide how to handle that.
Your screen expects values that have been pre-gamma corrected to the gamma of the display device. As such, if you provide linear values, you will get incorrect color output.
However, sometimes you want to write intermediate values. For example, if you're doing forward or deferred rendering, you will write accumulated lighting to a floating-point buffer, then use HDR tone mapping to boil it down to a [0, 1] image for display. Post-processing techniques can again be applied on top. Only the final [0, 1] output needs to go to an image in the sRGB colorspace.
When writing linear RGB values that you want converted into sRGB, you must enable GL_FRAMEBUFFER_SRGB. This is a special enable (note that textures don't have a way to turn off sRGB decoding) because sometimes you want to write values that are already in sRGB. This is often the case for GUI widgets, which were designed and built using colors already in the sRGB colorspace.
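A minimal sketch of that toggling, assuming a frame split into a lit 3D pass and a GUI pass (drawScene and drawGui are placeholders, not real API calls):

    // Scene pass: shaders output linear RGB; let OpenGL encode it to sRGB on write.
    glEnable(GL_FRAMEBUFFER_SRGB);
    drawScene();   // placeholder for the lit 3D rendering

    // GUI pass: widget colors were authored directly in sRGB,
    // so write them as-is without a second encoding.
    glDisable(GL_FRAMEBUFFER_SRGB);
    drawGui();     // placeholder for the interface rendering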
I cover issues relating to writing gamma-correct values and reading them from textures in my tutorial series. The first one explains why gamma is important and does explicitly gamma correction in the shader. The second link covers how to use sRGB images, both in textures and framebuffers.
Okay, and what about renderbuffers? the ARB_framebuffer_sRGB doc does not mention anything about renderbuffers.
And why would it? ARB_framebuffer_sRGB is only interested in the framebuffer and the nature of images in it. It neither knows nor cares where those images come from. It doesn't care if it's talking about the default framebuffer, a texture attached to an FBO, a renderbuffer attached to an FBO, or something entirely new someone comes up with tomorrow.
The extension states what happens when the destination image is in the sRGB colorspace and when GL_FRAMEBUFFER_SRGB is enabled. Where that image comes from is up to you.
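As an example, here is a sketch of an FBO whose color storage is in the sRGB colorspace - in this case a multisampled renderbuffer, since that was the case asked about (the size and sample count are arbitrary, and actual support for multisampled sRGB storage is up to the implementation):

    GLuint fbo, rbo;
    glGenFramebuffers(1, &fbo);
    glGenRenderbuffers(1, &rbo);

    glBindRenderbuffer(GL_RENDERBUFFER, rbo);
    // Request sRGB-encoded, multisampled color storage.
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_SRGB8_ALPHA8, 1024, 768);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, rbo);

    // With this image as the destination and GL_FRAMEBUFFER_SRGB enabled,
    // linear values written by the shader are encoded to sRGB, and blending
    // operates on the decoded (linear) values.
    glEnable(GL_FRAMEBUFFER_SRGB);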
Also, what is the difference between GL_SRGB_ALPHA and GL_SRGB8_ALPHA8 when specifying the internal format for glTexImage2D?
One is sized. The other is not. In theory, GL_SRGB_ALPHA could give you any bitdepth the implementation wanted. It could give you 2 bits per component. You're giving the implementation freedom to pick what it wants.
In practice, I doubt you'll find a difference. That being said, always use sized internal formats whenever possible. It's good to be specific about what you want, and it prevents the implementation from doing something stupid. OpenGL even requires some sized formats to be supported exactly as specified.