When should I use GL_SRGB8 instead of GL_RGB8? - opengl

With respect to RGB and sRGB, are the following understandings of mine true (or not)?
1. I read an authored image (a .png) into a GL_SRGB8 format texture.
(q) When sampling the texture from (1), will the hardware convert from sRGB to linear colour space?
2. I read an authored image into a GL_RGB8 texture.
(q) When sampling the texture from (2), the hardware will not convert from sRGB to linear?
3. I set GL_FRAMEBUFFER_SRGB to true at the final stage of my pipeline.
(q) When displaying the back buffer, will the hardware convert from linear to sRGB colour space?
4. I have a pipeline with 5 stages, each writing to a floating-point texture (GL_RGBA16F).
(q) The whole pipeline is linear until the final stage, provided (1) and (3) are true?

Essentially yes to all of them. However, (3) has an additional constraint: for a framebuffer to do the linear RGB to sRGB conversion, the color attachment must be in an sRGB format itself, i.e. be an sRGB texture, renderbuffer, or a main window with an sRGB pixel format.
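To make (1) and (3) concrete, here is a minimal C sketch; width, height and pixels are assumed to come from your image loader, and the default framebuffer is assumed to have an sRGB pixel format:

    /* (1) Upload the authored .png into an sRGB internal format.
       Sampling this texture in a shader returns linearized values. */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8,          /* sRGB storage */
                 width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pixels);   /* bytes stay sRGB */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    /* (3) Only around the final pass: encode linear shader output to
       sRGB. This has an effect only if the target buffer is itself in
       an sRGB format. */
    glEnable(GL_FRAMEBUFFER_SRGB);
    /* ... draw the final pass ... */
    glDisable(GL_FRAMEBUFFER_SRGB);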

Related

sRGB Conversion OpenGL

Let's say I load a texture from a .jpg file that has the sRGB color profile, and while loading there was no conversion to linear RGB. When a rect is displayed on screen I don't see any problems with the image; the color is the same as when opening it in an editor.
The question: since there was no conversion, the RGB values remained in the sRGB range. How does the hardware know that no conversion is needed, or why was it not converted by the GPU? Basically, why didn't an XYZ -> sRGB conversion happen?
I am very confused, since feeding sRGB data into something that will in the end convert it to sRGB again should change the colors, but it doesn't.
First of all, OpenGL does NOT do any color conversion by default.
Let's start from the smallest configuration, consisting of an Input Image, a Window Buffer (rendered by OpenGL) and a Monitor. Each might use a different color space, but in the simplest case it will look like this:
[RGB image|sRGB] -> [OpenGL Window Buffer|sRGB] -> [Monitor|sRGB]
By default, monitors are configured to use an sRGB preset, even the ones supporting a wider color range, to avoid incorrect color output. Rendering into the OpenGL Window Buffer by default doesn't perform any color conversion, so if the input image was in the sRGB colorspace it will remain the same in the OpenGL Window Buffer. The OS composer normally just copies the OpenGL Window Buffer onto the screen - no color conversion is implied at this step either. So basically every step just passes the input image through, and you see the result as expected, since it was in the sRGB colorspace.
Now consider another scenario: you are applying the Input Image as a texture on a 3D object with lighting enabled. The lighting equation makes sense only in linear RGB colorspaces, so without extra configuration OpenGL will take the non-linear sRGB image values, pass them as parameters to the shading equations unmodified, and write the result into the OpenGL Window Buffer, which will be passed through to the Monitor configured for the sRGB colorspace. The result will look plausible, but it will be physically incorrect.
To solve the problem of incorrect lighting, OpenGL introduced sRGB-awareness features, so that the user may state explicitly whether the Input Image is in the sRGB or a linear RGB colorspace, and whether the color values produced by the GLSL program should be implicitly converted from the linear RGB colorspace into the non-linear sRGB colorspace. So that:
[GLSL Texture Input|sRGB -> linear RGB implicit conversion] ->
-> [GLSL Lighting Equation|linear RGB] ->
-> [GLSL output|linear RGB -> sRGB implicit conversion]
The steps with implicit conversion will be done ONLY if OpenGL has been explicitly configured that way, by using the GL_SRGB8_ALPHA8 texture format and by enabling GL_FRAMEBUFFER_SRGB while rendering into an offscreen buffer or the Window Buffer. The whole concept might be tricky to understand and even trickier to implement.
Note that in this context "linear RGB" actually means a linearized sRGB colorspace; what "linear RGB" actually means is often left unstated. For color math (lighting and the rest) it matters only that the RGB values are linear on input and output in some linear colorspace, but the implicit sRGB -> linear RGB and linear RGB -> sRGB conversions clearly rely on the conversion of RGB values defined by the sRGB and OpenGL specifications. In reality there are other linear RGB colorspaces which do not represent the sRGB colorspace.
Now consider that your Input Image is not in the sRGB colorspace but uses some other RGB color profile. OpenGL doesn't offer many texture formats other than linear RGB and sRGB, so a proper conversion of such an image into the sRGB colorspace should be done by the image reader, or via a special GLSL program performing the colorspace conversion on the fly.
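If the image reader does that conversion itself, the piecewise transfer functions defined by the sRGB specification are the place to start; a minimal sketch in C, operating on one normalized channel value in [0, 1]:

    #include <math.h>

    /* Decode: non-linear sRGB channel value -> linear value. */
    float srgb_to_linear(float c)
    {
        return (c <= 0.04045f) ? c / 12.92f
                               : powf((c + 0.055f) / 1.055f, 2.4f);
    }

    /* Encode: linear channel value -> non-linear sRGB value. */
    float linear_to_srgb(float c)
    {
        return (c <= 0.0031308f) ? c * 12.92f
                                 : 1.055f * powf(c, 1.0f / 2.4f) - 0.055f;
    }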
Now consider a Monitor configured to a non-sRGB profile like AdobeRGB. In this case, passing an sRGB image through to the OpenGL window will produce distorted colors. Letting Windows know that your monitor uses another color profile helps applications like Photoshop convert colors properly, but OpenGL knows nothing about the color profiles configured in Windows! It is the application's responsibility to apply the color profile information and perform a proper color transformation (via special GLSL programs or by other means). When working with an Input Image in a non-sRGB colorspace, the application also has the choice of performing a non-sRGB -> sRGB -> another-non-sRGB conversion, or of implementing a GLSL program that converts directly to the target colorspace without the sRGB proxy (or via a proxy XYZ), to avoid losing color precision in transient conversions.
Supporting arbitrary color profiles in an OpenGL application not designed to be an image viewer may involve too much complexity. Some systems, however, define several standard color spaces for which the system composer performs the implicit conversion - which is much more reliable than supporting arbitrary color profiles with their special lookup tables and formulas.
For instance, macOS defines the NSWindow::setColorSpace property, which allows an application to specify explicitly which colorspace the Window Buffer is filled in, so that the system itself performs the necessary conversion to the actual Monitor color profile. Android defines a similar interface to support the new Display P3 color profile, which has an extended color range compared to the old sRGB profile. This implies, however, that the OpenGL renderer actually knows how to output its result in this specific colorspace - which is another topic (there is also a set of extra OpenGL extensions helping developers in this direction)... So far I haven't heard of a similar API on Windows, but I might have missed something.

Do I need output gamma correction in a fragment shader?

When I output a color via fragColor in the main() function of a fragment shader, are the color component intensities interpreted as linear RGB or sRGB? Or to put it differently, do I have to perform gamma correction in my shader or is this already being taken care of?
If there is no general answer but depends on some OpenGL property: How do I set this property?
To avoid misunderstanding: The respective color has been entirely programmatically generated, there are no textures or the like (which may necessitate input gamma correction).
Every fragment shader output is routed to a specific image in the framebuffer (based on glDrawBuffers state). That image has a format. That format defines the default colorspace interpretation of the corresponding output.
Therefore, if you are writing a value to an image in a linear RGB format, then the system will interpret that value as a linear RGB value. If you are writing a value to an image in an sRGB format, then the system will interpret that value as already being in the sRGB colorspace. So no conversion will be performed; it will be written to the texture as-is. And if blending is active, the sRGB value you wrote will be blended against the sRGB value in the image, in the sRGB colorspace.
Now, when it comes to writing to an sRGB colorspace image, you usually don't want that behavior. If you have a linear RGB value, you want the system to convert that value to the sRGB colorspace (rather than pretending it already is). Equally importantly, when blending is active, you want the value in the sRGB image to be converted into linear RGB, then blended with the linear value written by the fragment shader, and finally converted back into sRGB. Otherwise, blending yields results of dubious accuracy.
To make all that happen, you must glEnable(GL_FRAMEBUFFER_SRGB). When this feature is enabled, colorspace conversion happens for any fragment shader output written to any image whose colorspace is sRGB.
This value is disabled by default.
So if you want to write linear RGB values and have them converted to sRGB for display purposes, then you must:
1. Make sure that the framebuffer image you're writing to uses the sRGB colorspace. For FBOs, this is trivial. But if you're writing to the default framebuffer, then that's something you have to work out with your OpenGL initialization code (see the sketch below).
2. Enable GL_FRAMEBUFFER_SRGB when you wish to use colorspace conversion.
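How you get an sRGB-capable default framebuffer depends on your windowing layer. As one example, a sketch assuming GLFW 3 and a GL loader that exposes GL_FRAMEBUFFER_SRGB:

    #include <GLFW/glfw3.h>

    int main(void)
    {
        glfwInit();
        /* Step 1: request an sRGB-capable default framebuffer. */
        glfwWindowHint(GLFW_SRGB_CAPABLE, GLFW_TRUE);
        GLFWwindow *win = glfwCreateWindow(1280, 720, "sRGB", NULL, NULL);
        glfwMakeContextCurrent(win);

        /* Step 2: enable linear -> sRGB conversion on write. */
        glEnable(GL_FRAMEBUFFER_SRGB);

        /* ... render loop ... */
        glfwTerminate();
        return 0;
    }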

8bit sRGB and OpenGL confusion

I would like to ask for clarification about the sRGB color space and its 8-bit-per-channel representation in OpenGL. After digging into the subject, I found out that storing images in sRGB is done just to compensate for the opposite gamma operation performed by the monitor, which "imitates" what old CRTs did by nature, for compatibility reasons. The fact that the human eye has a similar non-linear response is just a coincidence and has nothing to do with gamma correction (as many confusing articles claim), as the output will be linear again anyway.
Assuming that this understanding is right: in OpenGL we have the GL_SRGB8_ALPHA8 format for textures, which has 8 bits per channel. However, since the range of 0~255 is the same as a linear RGB texture, does this mean that to convert a linear 8-bit-per-channel texture to sRGB the color values remain unchanged, and a simple "flag" tells OpenGL: look, this 0~255 range is not linear, so interpret it as a curve?
What about sRGB 16-bit-per-channel images (e.g. 16-bit PNGs)?
The range is the same. The values are not.
Your linear values before storing are floating-point. So they have greater precision than an 8-bit-per-channel format.
If you store them in a linearRGB image, you're taking the input range [0, 1] and evenly mapping them to [0, 255].
But if you do sRGB conversion, then you're taking the [0, 1] range and mapping it to [0, 255] via a non-linear gamma mapping of approximately 2.2. While this non-linear mapping does not magically create more values, what it does do is effectively give you more precision in the lower parts of the range than the higher parts.
In sRGB conversion, values in the input range [0, 0.5] are mapped to [0, 188]. That's over 70% of the output range covered by 50% of the input range. This gives you a better representation of the smaller values in your input.
A linear mapping loses precision evenly. The sRGB mapping loses more precision in the lighter areas than the darker. Or to put it another way, it preserves more precision in the darker areas than the linear mapping does.
For a memory vs. visual quality tradeoff, sRGB comes out better overall than linearRGB 8-bit-per-channel.
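You can verify the 0.5 -> 188 mapping with the sRGB encode formula; a quick sketch:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* Encode linear 0.5 with the sRGB curve, quantize to 8 bits. */
        float srgb = 1.055f * powf(0.5f, 1.0f / 2.4f) - 0.055f;
        printf("linear 0.5 -> sRGB code %d\n", (int)(srgb * 255.0f + 0.5f));
        /* Prints 188: the darker half of the input gets ~74% of the codes. */
        return 0;
    }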
However, since the range of 0~255 is the same as a linear RGB texture, does this mean that to convert a linear 8-bit-per-channel texture to sRGB the color values remain unchanged, and a simple "flag" tells OpenGL: look, this 0~255 range is not linear, so interpret it as a curve?
It depends on what operation you're talking about.
An sRGB texture is a texture that stores its RGB information in the sRGB colorspace. However, shader operations are assumed to want data in the linearRGB colorspace, not sRGB. So using an sRGB format means that texture fetches will convert the pixels they read from sRGB to linearRGB.
Writes from a fragment shader to an FBO-attached image using an sRGB format may or may not perform conversion. Here, conversion has to be explicitly enabled with GL_FRAMEBUFFER_SRGB. The idea being that some operations will generate values in the sRGB colorspace (GUIs, for example. Most images were created in the sRGB colorspace), while others will generate values in linearRGB (normal rendering). So you have an option to turn on or off conversion.
The conversion also allows blending to read sRGB destination pixels, convert them to linear, blend with the incoming linearRGB values, and then convert them back to sRGB for writing.
Uploads to and downloads from an sRGB image will write and read the pixel values in the sRGB colorspace directly.
What about sRGB 16-bit-per-channel images (e.g. 16-bit PNGs)?
What about them? OpenGL has no 16-bit-per-channel sRGB formats.
sRGB conversion is typically done via a 256-entry table lookup. For every sRGB value, there is a pre-computed linear one.
So, just like any other case where an image format offers something that OpenGL doesn't match, you'll have to manually convert them.
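A sketch of that manual route for a 16-bit sRGB PNG: linearize on the CPU (reusing the srgb_to_linear helper from the earlier sketch, a hypothetical name) and upload into a linear floating-point format that OpenGL does have:

    #include <stdint.h>
    #include <stdlib.h>

    extern float srgb_to_linear(float c);  /* from the earlier sketch */

    /* Assumes a GL loader header is included and a context is current. */
    void upload_srgb16_png(const uint16_t *src, int width, int height)
    {
        size_t n = (size_t)width * height * 3;
        float *dst = malloc(n * sizeof *dst);
        for (size_t i = 0; i < n; ++i)
            dst[i] = srgb_to_linear(src[i] / 65535.0f);
        /* GL_RGB16F is linear, so no sRGB flag is needed or wanted. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, width, height, 0,
                     GL_RGB, GL_FLOAT, dst);
        free(dst);
    }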
Gamma is Good
I found out that storing images in sRGB is just made to compensate the opposite gamma operation done by the monitor... the fact that the human eye has a similar non-linear response is just a coincidence and has nothing to do with gamma correction... Assuming that this understanding is right,
That understanding is not correct. Storing images with a gamma curve is necessary to increase the data density in dark areas for perceptual reasons. Without gamma, you would need 12 bits for linear for the same image fidelity you get with 8 bits and a gamma curve.
Old school TVs could have been designed to use a linearized signal, but in fact the design techniques of the day increased gamma from the theoretical 1.5 to the commonly used 2.4. As a function of system design the use of gamma reduced the perception of noise in the broadcast signal.
A gamma-type of transfer curve makes the most of a limited bandwidth by weighting the available bandwidth per human non-linear visual perception.
OPENGL
As for your question: if the internal format parameter is GL_SRGB, GL_SRGB8, GL_SRGB_ALPHA, or GL_SRGB8_ALPHA8, the texture is treated as if the red, green, or blue components are encoded in the sRGB color space.
This means that the values in the image are assumed to be gamma-encoded values as opposed to GL_RGB8 which are assumed to be linear.
However, since the range of 0~255 is the same as a linear RGB texture, does this mean that to convert a linear 8-bit-per-channel texture to sRGB the color values remain unchanged, and a simple "flag" tells OpenGL: look, this 0~255 range is not linear, so interpret it as a curve? What about sRGB 16-bit-per-channel images (e.g. 16-bit PNGs)?
The range or bit depth has nothing at all to do with a gamma curve being used or not.
An image needs gamma for perceptual reasons. A linear texture map or bump map does not. If you use the GL_SRGB8 tag on a linear bump map, then GL will apply the sRGB curve to that linear data, which you do NOT want - that is, it will apply a power curve of approximately 2.2 to linear (gamma 1.0) values, and this is not what you want, unless your bump map IS an image with a gamma curve.
The sRGB tag is there so that when you have an sRGB image which has the color values encoded with a ~1/2.2 curve, those values become linearized.
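In code, that decision is just the internal-format argument at upload time; a sketch (texture binding omitted; w, h and the pixel pointers are assumed to come from your loader):

    /* Albedo/color image authored in sRGB: tag it so GL linearizes
       the values on every fetch. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8, w, h, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, albedo_pixels);

    /* Bump/normal map: the bytes are linear data, not gamma-encoded
       color. Use a plain linear format so no curve is applied. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, w, h, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, bump_pixels);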

Effect of GL_SRGB8_ALPHA8 on texture interpolation?

OpenGL allows one to declare a texture as being in sRGB (as images typically are) by using GL_SRGB8_ALPHA8, which will cause OpenGL to convert the sRGB colors to linear RGB space when sampling the texture in GLSL. This is also known as "Gamma Correction".
I've also read that linear interpolation in textures will behave differently with GL_SRGB8_ALPHA8, as interpolation will supposedly happen in linear space as well. What effect, if any, does this have? Does this mean that one should always use GL_SRGB8_ALPHA8 for textures, rather than doing one's own sRGB -> linear conversion via GLSL?
As a side note, this is what the OpenGL 4.5 core profile specification has to say about this (quoting from section "8.24 sRGB Texture Color Conversion"):
Ideally, implementations should perform this color conversion on each sample prior to filtering, but implementations are allowed to perform this conversion after filtering (though this post-filtering approach is inferior to converting from sRGB prior to filtering).
So the spec won't guarantee you the ideal behavior.
In fact, most images are in sRGB space. If you don't do any special processing when loading image data into OpenGL, or in the shader while rendering, you'll get a "wrong" image - you're applying linear computations to non-linear data, and such an image appears darker than it should when rendered.
However, if you do convert to linear space, you should also convert the final rendered image back to sRGB space, because monitors usually have a 2.2 gamma curve applied (to be compatible with CRT output, as it was before LCD screens).
So you either do it manually in the shader, or use the sRGB extensions, which are provided both for textures (to convert from sRGB to linear) and framebuffers (to automatically convert back from linear to sRGB). To get a correct image you need both conversions applied.
Enabling gamma correction and doing it right gives a more natural and softer image. Check out this article for a more detailed explanation: https://learnopengl.com/#!Advanced-Lighting/Gamma-Correction
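For reference, the manual in-shader variant of the second conversion is a one-liner at the end of the fragment shader. A sketch, shown as a C string, using the common pow(1/2.2) approximation rather than the exact piecewise sRGB curve; only use this when the target framebuffer is NOT sRGB, otherwise you gamma-correct twice:

    static const char *frag_src =
        "#version 330 core\n"
        "in vec2 uv;\n"
        "out vec4 fragColor;\n"
        "uniform sampler2D scene;  /* linear data, e.g. GL_RGBA16F */\n"
        "void main() {\n"
        "    vec3 lin = texture(scene, uv).rgb;\n"
        "    fragColor = vec4(pow(lin, vec3(1.0 / 2.2)), 1.0);\n"
        "}\n";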

sRGB textures. Is this correct?

I've recently been reading a little about sRGB formats and how they allow the hardware to automatically perform colour correction for typical monitors. As part of my reading, I see that you can simulate this step with an ordinary texture and a pow function on the return result.
Anyway I want to ask two questions as I've never used this feature before. Firstly, can anyone confirm from my screenshot that this is what you would expect to see? The left picture is ordinary RGBA and the right picture is with an sRGB target. There is no ambient lighting in the scene and the model is bog standard Phong (the light is a spotlight).
The second question I would like to ask is at what point is the correction actually performed by the hardware? For example I am writing frames to an FBO, then later I'm rendering a screen-sized quad to the back buffer using an FBO colour buffer (I'm intending to switch to deferred shading soon). Should I use sRGB textures attached to the FBO, or do I only need to specify an sRGB texture as the back buffer target? If you're using sRGB, should ALL texture resources be sRGB?
Note: the following discussion assumes you understand what the sRGB colorspace is, what gamma correction is, what a linear RGB colorspace is, and so forth. This focuses primarily on the OpenGL implementation of the technology.
If you want an in-depth discussion of these subjects, I would suggest looking at my tutorials on HDR/Gamma correction (to understand linear colorspaces and gamma), as well as the tutorial on sRGB images and how they handle gamma correction.
Firstly, can anyone confirm from my screenshot that this is what you would expect to see?
I'm not sure I understand what you mean by that question. If you apply proper gamma correction (which is what sRGB does more or less), you will generally get more detail in darker areas of the image and a "brighter" result.
However, the correct way to think about it is that until you do proper gamma correction all of your images have been wrong. Your images have been too dark, and the gamma correction is now making them the appropriate brightness. Every decision you've made about what colors things should be and how bright lights ought to be has been wrong.
The second question I would like to ask is at what point is the correction actually performed by the hardware?
This is a very different question from the one your "for example" goes on to cover.
sRGB images (remember: a texture contains images, but framebuffers can have images too) can be used in the following contexts:
Transferring data from the user directly to the image (for example, with glTexSubImage2D and so forth). OpenGL assumes that you are providing data that is already in the sRGB colorspace. So there is no translation of the data when you upload it. This is done because it makes the most sense: generally, any image you get from an artist will be in the sRGB colorspace unless the artist took great pains to put it in some other colorspace. Virtually every image editor works directly in sRGB.
Reading values in shaders via samplers (ie: accessing a texture). This is quite simple as well. OpenGL knows that the texel data in the image is in the sRGB colorspace. OpenGL assumes that the shader wants linear RGB color data. Therefore, all attempts to sample from a texture with an sRGB image format will result in the sRGB->lRGB conversion. Which is free, btw.
And on the plus side, if you've got GL 3.x+ capable hardware, you'll almost certainly get filtering done in the linear colorspace, where it makes sense. sRGB is a non-linear colorspace, so linear interpolation of sRGB values is always wrong.
Storing values output from the fragment shader to the framebuffer image(s). This is where it gets slightly complicated. Even if the framebuffer image you're rendering to is in the sRGB colorspace, that's not enough to force conversion. You must explicitly glEnable(GL_FRAMEBUFFER_SRGB); this tells OpenGL that the values you're writing from your fragment shader are linear colorspace values. Therefore, OpenGL needs to convert these to sRGB when storing them in the image.
Again, if you've got GL 3.x+ hardware, you'll almost certainly get blending in the linear colorspace. That is, OpenGL will read the sRGB value from the framebuffer, convert it to a linear RGB value, blend it with the incoming linear RGB value (the one you wrote from your shader), convert the blended value into the sRGB colorspace and store it. Again, that's what you want; blending in the sRGB colorspace is always bad.
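Putting those pieces together for the FBO case, a sketch of an sRGB color attachment with conversion enabled (w and h assumed):

    /* sRGB color attachment for an FBO. */
    GLuint color, fbo;
    glGenTextures(1, &color);
    glBindTexture(GL_TEXTURE_2D, color);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, color, 0);

    /* Shader outputs are linear; this makes GL encode them to sRGB on
       store (and decode/re-encode around blending). */
    glEnable(GL_FRAMEBUFFER_SRGB);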
Now that we understand that, let's look at your example.
For example I am writing frames to an FBO, then later I'm rendering a screen-sized quad to the back buffer using an FBO colour buffer (I'm intending to switch to deferred shading soon).
The problem with this is that you're not asking the right questions. What you need to keep in mind, especially as you move into deferred rendering, is this question:
Is this linear RGB or not?
In general, you should hold off on storing any intermediate data in gamma-correct space for as long as possible. So any intermediate buffers (ie: where you accumulate your lights) should not be sRGB.
This isn't about the cost of the conversion; it's really about what you're doing. If you're doing deferred rendering, then you're probably also doing HDR lighting and so forth. So your light accumulation buffer needs to be floating-point. And float buffers are always linear; there's no reason for them to not be linear.
Your final image, the default framebuffer, must be sRGB if you want to take advantage of free gamma correction (and you do). If you do all your work in HDR float buffers, and then tone-map the result down for the final display, you should write that to an sRGB image.
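A sketch of that division of labor (buffer names hypothetical): accumulate in linear float, and only the final tone-mapped pass writes to an sRGB target:

    /* Intermediate passes: linear HDR accumulation, no sRGB involved.
       hdr_fbo has a GL_RGBA16F color attachment. */
    glBindFramebuffer(GL_FRAMEBUFFER, hdr_fbo);
    glDisable(GL_FRAMEBUFFER_SRGB);
    /* ... render scene, accumulate lighting in linear space ... */

    /* Final pass: tone-map into the sRGB default framebuffer. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);   /* sRGB-capable backbuffer */
    glEnable(GL_FRAMEBUFFER_SRGB);
    /* ... draw a full-screen quad with the tone-mapping shader ... */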