When to call glEnable(GL_FRAMEBUFFER_SRGB)?

I have a rendering system where I draw to an FBO with a multisampled renderbuffer, then blit it to a second FBO with a texture attachment to resolve the samples, so I can read the texture for post-processing shading while drawing to the back buffer (FBO index 0).
Now I'd like to get correct sRGB output... The problem is that the program's behavior is rather inconsistent between OS X and Windows, and it also changes depending on the machine: on Windows with an Intel HD 3000 it will not apply the sRGB nonlinearity, but on my other machine with an Nvidia GTX 670 it does. On the same Intel HD 3000 under OS X it also applies it.
So this probably means I'm not setting my GL_FRAMEBUFFER_SRGB enable state at the right points in the program. However, I can't seem to find any tutorials that actually tell me when I ought to enable it; they only ever mention that it's dead easy and comes at no performance cost.
I am currently not loading in any textures so I haven't had a need to deal with linearizing their colors yet.
To force the program not to simply spit the linear color values back out, what I have tried is simply commenting out my glDisable(GL_FRAMEBUFFER_SRGB) line, which effectively leaves the setting enabled for the entire pipeline (I even redundantly force it back on every frame).
I don't know if this is correct or not. It certainly does apply a nonlinearity to the colors, but I can't tell whether it is getting applied twice (which would be bad). It could apply the gamma as I render to my first FBO, or again when I blit the first FBO to the second; why not both?
I've gone so far as to take screenshots of my final frame and compare the raw pixel color values to the colors I set in the program:
I set the input color to RGB(1,2,3) and the output is RGB(13,22,28).
That seems like quite a large remapping at the low end, and it leads me to question whether the gamma is getting applied multiple times.
I have just now gone through the sRGB equation and I can verify that the conversion is only applied once: linear 1/255, 2/255, and 3/255 do indeed map to sRGB 13/255, 22/255, and 28/255 using the encoding 1.055*C^(1/2.4) - 0.055 (for values above the linear segment). Given that the expansion is so large for these low color values, it really should be obvious if the sRGB color transform were getting applied more than once.
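For reference, here is a minimal sketch (mine, not from the original post) that reproduces the check, including the 12.92 linear segment of the sRGB curve below 0.0031308:

    #include <math.h>
    #include <stdio.h>

    /* Standard sRGB encode: linear -> nonlinear, with the linear
       segment for very small values. */
    static double srgb_encode(double c)
    {
        return (c <= 0.0031308) ? 12.92 * c
                                : 1.055 * pow(c, 1.0 / 2.4) - 0.055;
    }

    int main(void)
    {
        for (int v = 1; v <= 3; ++v) {
            /* prints 13, 22 and 28: one application of the curve */
            printf("linear %d/255 -> sRGB %.0f/255\n",
                   v, srgb_encode(v / 255.0) * 255.0);
        }
        return 0;
    }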
So, I still haven't determined what the right thing to do is. Does glEnable(GL_FRAMEBUFFER_SRGB) only apply to the final framebuffer values, in which case I can just set it during my GL init routine and forget about it thereafter?

When GL_FRAMEBUFFER_SRGB is enabled, all writes to an image with an sRGB image format will assume that the input colors (the colors being written) are in a linear colorspace. Therefore, it will convert them to the sRGB colorspace.
Any writes to images that are not in the sRGB format should not be affected. So if you're writing to a floating-point image, nothing should happen. Thus, you should be able to just turn it on and leave it that way; OpenGL will know when you're rendering to an sRGB framebuffer.
In general, you want to work in a linear colorspace for as long as possible. Only your final render, after post-processing, should involve the sRGB colorspace. So your multisampled framebuffer should probably remain linear, though you should give it a higher color depth to preserve accuracy: use GL_RGB10_A2 or GL_R11F_G11F_B10F, or GL_RGBA16F as a last resort.
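A rough sketch of that setup (my illustration; the sample count, sizes, and names are placeholders): enable GL_FRAMEBUFFER_SRGB once at init and keep the intermediate multisampled buffer in a higher-precision linear format:

    GLuint msaaFbo, msaaColor;

    /* Enabled once at init: only writes to sRGB-format images are
       affected, so linear intermediate FBOs are left alone. */
    glEnable(GL_FRAMEBUFFER_SRGB);

    /* Multisampled intermediate buffer stays linear, but with more
       precision than 8 bits per channel. */
    glGenRenderbuffers(1, &msaaColor);
    glBindRenderbuffer(GL_RENDERBUFFER, msaaColor);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_R11F_G11F_B10F,
                                     width, height);

    glGenFramebuffers(1, &msaaFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, msaaColor);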
As for this:
On Windows with the Intel HD 3000 it will not apply the sRGB nonlinearity
That is almost certainly due to Intel sucking at writing OpenGL drivers. If it's not doing the right thing when you enable GL_FRAMEBUFFER_SRGB, that's because of Intel, not your code.
Of course, it may also be that Intel's drivers didn't give you an sRGB image to begin with (if you're rendering to the default framebuffer).
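You can check what the driver actually gave you. A small sketch (assuming a GL 3.0+ context, where the default framebuffer's back buffer is queried as GL_BACK_LEFT):

    /* Ask whether the default framebuffer's back buffer is actually
       sRGB-capable. */
    GLint encoding = GL_NONE;
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_BACK_LEFT,
        GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING, &encoding);
    if (encoding != GL_SRGB) {
        /* GL_FRAMEBUFFER_SRGB will have no effect on this buffer. */
    }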

Related

Forward rendering multiple rendering passes

I'm trying to implement PBR in my simple OpenGL renderer using multiple lighting passes, one pass per light, as follows:
1- First pass = depth
2- Second pass = ambient
3- Passes [3..n] = one pass for each light in the scene.
I'm using the blending function glBlendFunc(GL_ONE, GL_ONE) for passes [3..n], and I'm doing a gamma correction at the end of each fragment shader.
But I still have a problem with the output image: it looks noisy, especially when I'm using texture maps.
Is there anything wrong with these steps, or is there any way to improve this process?
So basically, what you're calculating is
f(x) = a^gamma + b^gamma + ...
However, what you actually want (as noted by @NicolBolas in the comments already) is
g(x) = (a + b + ...)^gamma
Now f(x) and g(x) will only be equal in rather useless cases like gamma = 1. You simply cannot additively decompose a nonlinear function like the power function that way.
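A quick numeric illustration of the difference (my own; using display gamma 2.2, so the exponent the shader applies per pass is 1/2.2):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double a = 0.25, b = 0.25;   /* two equal light contributions */
        double e = 1.0 / 2.2;        /* exponent the shader applies   */
        double f = pow(a, e) + pow(b, e); /* gamma per pass, then sum */
        double g = pow(a + b, e);         /* sum linearly, gamma once */
        /* prints f ~ 1.065 (already clipping!) vs g ~ 0.730 */
        printf("f = %.3f, g = %.3f\n", f, g);
        return 0;
    }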
The correct solution is to blend everything together in linear space and to do the gamma correction afterwards, on the total sum of the linear contributions of each light source.
However, implementing this leads to a couple of technical issues. First and foremost, the standard 8 bits per channel are just not precise enough to store linear color values; using such a format for the accumulation step will result in strongly visible color-banding artifacts. There are two approaches to solving this:
1. Use a higher bit-per-channel format for the accumulation framebuffer. You will need a separate gamma-correction pass, so you need to set up render-to-texture via an FBO. GL_RGBA16F appears to be a particularly good format for this. Since you use a PBR lighting model, you can then also work with color values outside [0,1] and, instead of a simple gamma correction, apply proper tone mapping in the final pass. Note that even if you don't need an alpha channel, you should still use an RGBA format here: the RGB formats are not required color-buffer formats by the GL spec, so they may not be supported universally.
2. Still store the data in an 8-bit-per-component format, gamma corrected. The key here is that the blending must still be done in linear space, so the destination framebuffer color values must be re-linearized prior to blending. This can be achieved by using a framebuffer with the GL_SRGB8_ALPHA8 format and enabling GL_FRAMEBUFFER_SRGB. In that case, the GPU will automatically apply the standard sRGB gamma correction when writing the fragment color to the framebuffer (which your fragment shader currently does manually), but it will also linearize the stored values when accessing them, including for blending (see the sketch at the end of this answer). The OpenGL 4.6 core profile spec states in section "17.3.6.1 Blend Equation":
If FRAMEBUFFER_SRGB is enabled and the value of FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING for the framebuffer attachment corresponding to the destination buffer is SRGB (see section 9.2.3), the R, G, and B destination color values (after conversion from fixed-point to floating-point) are considered to be encoded for the sRGB color space and hence must be linearized prior to their use in blending. Each R, G, and B component is converted in the same fashion described for sRGB texture components in section 8.24.
Approach 1 will be the much more general approach, while approach 2 has a couple of drawbacks:
The linearization/delinearization is done multiple times, potentially wasting some GPU processing power.
Due to still using only 8-bit integers, the overall quality will be lower: after each blending step, the results are rounded to the nearest representable value, so you will get much more quantization noise.
You are still limited to color values in [0,1] and cannot (easily) do more interesting tone mapping and HDR rendering effects.
However, approach 2 also has advantages:
You do not need a separate final gamma-correction pass.
If your platform/window system supports sRGB framebuffers, you can directly create an sRGB pixel format/visual for your window and do not need any render-to-texture step at all. Basically, requesting an sRGB framebuffer and enabling GL_FRAMEBUFFER_SRGB is enough to make this work.
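Here is a rough sketch of approach 2 in the render-to-texture variant (my own illustration; the sizes are placeholders):

    GLuint fbo, color;

    /* 8-bit sRGB accumulation target */
    glGenTextures(1, &color);
    glBindTexture(GL_TEXTURE_2D, color);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, color, 0);

    /* The fragment shader now outputs *linear* values; the hardware
       encodes on write and linearizes the destination before blending. */
    glEnable(GL_FRAMEBUFFER_SRGB);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);   /* additive light accumulation */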

Effect of GL_SRGB8_ALPHA8 on texture interpolation?

OpenGL allows one to declare a texture as being in sRGB (as images typically are) by using GL_SRGB8_ALPHA8, which will cause OpenGL to convert the sRGB colors to linear RGB space when sampling the texture in GLSL. This is also known as "Gamma Correction".
I've also read that linear interpolation of textures behaves differently with GL_SRGB8_ALPHA8, as interpolation supposedly happens in linear space as well. What effect, if any, does this have? Does this mean that one should always use GL_SRGB8_ALPHA8 for textures rather than doing one's own sRGB -> linear conversion in GLSL?
As a side note, this is what the OpenGL 4.5 core profile specification has to say about this (quoting from section "8.24 sRGB Texture Color Conversion"):
Ideally, implementations should perform this color conversion on each sample prior to filtering but implementations are allowed to perform this conversion after filtering (though this post-filtering approach is inferior to converting from sRGB prior to filtering).
So the spec won't guarantee you the ideal behavior.
In fact, most images are in sRGB space. If you don't do any special processing when loading the image data into OpenGL, or in the shader while rendering, you'll get a "wrong" image: you're applying linear computations to nonlinear data, and such an image appears darker than it should when rendered.
However, if you do convert to linear space, you should also convert the final rendered image back to sRGB space, because monitors typically apply a gamma curve of roughly 2.2 (to remain compatible with the CRT output that predated LCD screens).
So, you either do it manually in the shader, or use the sRGB extensions, which are provided both for textures (to convert from sRGB to linear when sampling) and for framebuffers (to automatically convert back from linear to sRGB when writing). To get a correct image you need both conversions applied.
Enabling gamma correction and doing it right gives a more natural and softer image. Check out this article for a more detailed explanation: https://learnopengl.com/#!Advanced-Lighting/Gamma-Correction
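If you take the manual route, both conversions live in your shaders. A sketch (mine, using the common 2.2 power approximation rather than the exact piecewise sRGB curve), written as the GLSL source string you would hand to glShaderSource:

    /* Hypothetical fragment shader for the manual route: decode the
       texture to linear on input, re-encode to ~sRGB on output. */
    static const char *fragSrc =
        "#version 330 core\n"
        "uniform sampler2D tex;  // plain GL_RGBA8 holding sRGB data\n"
        "in vec2 uv;\n"
        "out vec4 fragColor;\n"
        "void main() {\n"
        "    vec3 srgb = texture(tex, uv).rgb;\n"
        "    vec3 lin  = pow(srgb, vec3(2.2));   // sRGB -> linear\n"
        "    // ... lighting and other linear-space math on lin ...\n"
        "    fragColor = vec4(pow(lin, vec3(1.0/2.2)), 1.0); // re-encode\n"
        "}\n";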

OpenGL ES - How to improve performance, render to texture, blending

I am here because I'm working on an OpenGL program and I have some performance issues. I work with OpenGL ES 3.0 on an iMX6 SoC.
Here is my algorithm:
I get an image from the camera, which is directly mapped to a texture.
Using an FBO, I render to a texture to map the image onto a specific shape.
I do the same thing (with a second FBO) for another image, which is sent via shared memory by another application. This step is performed only when that image is updated, which happens only once per second.
I blend these two textures in the default frame buffer to render the result to the screen.
If I perform these three steps separately, it works well and the screen is updated at 30 FPS. But when I combine the three steps in one program, the rendering is very slow and I get only 0.5 FPS.
I am wondering whether the GPU on the iMX6 is powerful enough, but I don't think this is a complex algorithm. I think I am doing something the wrong way, but what?
I use three different framebuffers; is that a good approach, or should I use only one?
Can someone give me answers, clues, anything that can help me? :-)
My images are 1280x1024 RGBA. I am also doing some conversions from floating-point textures to integer and back to float, in order to perform bitwise operations on the pixels.
Thanks to @Columbo, the problem turned out to be all the conversions: I now work with floating-point textures and convert only for the bitwise operations, which improves the performance of the algorithm a lot.
Another thing that hurt performance was the texture format. For the first step the image was 1280x1024 but with only one component (a grayscale image). To keep only the grayscale component and not use too much memory, I worked with a GL_RED texture, but this wasn't a good idea: when I changed it to GL_RGB, the framerate of the render doubled.

sRGB textures. Is this correct?

I've recently been reading a little about sRGB formats and how they allow the hardware to automatically perform color correction for typical monitors. As part of my reading, I see that you can simulate this step with an ordinary texture and a pow function on the returned result.
Anyway I want to ask two questions as I've never used this feature before. Firstly, can anyone confirm from my screenshot that this is what you would expect to see? The left picture is ordinary RGBA and the right picture is with an sRGB target. There is no ambient lighting in the scene and the model is bog standard Phong (the light is a spotlight).
The second question I would like to ask is at what point is the correction actually performed by the hardware? For example I am writing frames to an FBO, then later I'm rendering a screen-sized quad to the back buffer using an FBO colour buffer (I'm intending to switch to deferred shading soon). Should I use sRGB textures attached to the FBO, or do I only need to specify an sRGB texture as the back buffer target? If you're using sRGB, should ALL texture resources be sRGB?
Note: the following discussion assumes you understand what the sRGB colorspace is, what gamma correction is, what a linear RGB colorspace is, and so forth. This focuses primarily on the OpenGL implementation of the technology.
If you want an in-depth discussion of these subjects, I would suggest looking at my tutorials on HDR/gamma correction (to understand linear colorspaces and gamma), as well as the tutorial on sRGB images and how they handle gamma correction.
Firstly, can anyone confirm from my screenshot that this is what you would expect to see?
I'm not sure I understand what you mean by that question. If you apply proper gamma correction (which is what sRGB does more or less), you will generally get more detail in darker areas of the image and a "brighter" result.
However, the correct way to think about it is that until you do proper gamma correction all of your images have been wrong. Your images have been too dark, and the gamma correction is now making them the appropriate brightness. Every decision you've made about what colors things should be and how bright lights ought to be has been wrong.
The second question I would like to ask is at what point is the correction actually performed by the hardware?
This is a very different question from the one your "for example" goes on to cover.
sRGB images (remember: a texture contains images, but framebuffers can have images too) can be used in the following contexts:
Transferring data from the user directly to the image (for example, with glTexSubImage2D and so forth). OpenGL assumes that you are providing data that is already in the sRGB colorspace. So there is no translation of the data when you upload it. This is done because it makes the most sense: generally, any image you get from an artist will be in the sRGB colorspace unless the artist took great pains to put it in some other colorspace. Virtually every image editor works directly in sRGB.
Reading values in shaders via samplers (ie: accessing a texture). This is quite simple as well. OpenGL knows that the texel data in the image is in the sRGB colorspace. OpenGL assumes that the shader wants linear RGB color data. Therefore, all attempts to sample from a texture with an sRGB image format will result in the sRGB->lRGB conversion. Which is free, btw.
And on the plus side, if you've got GL 3.x+ capable hardware, you'll almost certainly get filtering done in the linear colorspace, where it makes sense. sRGB is a non-linear colorspace, so linear interpolation of sRGB values is always wrong.
Storing values output from the fragment shader to the framebuffer image(s). This is where it gets slightly complicated. Even if the framebuffer image you're rendering to is in the sRGB colorspace, that's not enough to force conversion. You must explicitly glEnable(GL_FRAMEBUFFER_SRGB); this tells OpenGL that the values you're writing from your fragment shader are linear colorspace values. Therefore, OpenGL needs to convert them to sRGB when storing them in the image.
Again, if you've got GL 3.x+ hardware, you'll almost certainly get blending in the linear colorspace. That is, OpenGL will read the sRGB value from the framebuffer, convert it to a linear RGB value, blend it with the incoming linear RGB value (the one you wrote from your shader), convert the blended value into the sRGB colorspace and store it. Again, that's what you want; blending in the sRGB colorspace is always bad.
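To make those three contexts concrete, here is a hedged sketch (the variable names and the pixels source are placeholders, not from the answer):

    /* 1. Upload: the data is assumed to already be sRGB-encoded, so
          no conversion happens here. */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* 2. Sampling: any texture() call on this texture in a shader now
          returns linearized values; no shader changes are needed. */

    /* 3. Writing: conversion back to sRGB on store must be requested
          explicitly. */
    glEnable(GL_FRAMEBUFFER_SRGB);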
Now that we understand that, let's look at your example.
For example I am writing frames to an FBO, then later I'm rendering a screen-sized quad to the back buffer using an FBO colour buffer (I'm intending to switch to deferred shading soon).
The problem with this is that you're not asking the right questions. What you need to keep in mind, especially as you move into deferred rendering, is this question:
Is this linear RGB or not?
In general, you should hold off on storing any intermediate data in gamma-correct space for as long as possible. So any intermediate buffers (ie: where you accumulate your lights) should not be sRGB.
This isn't about the cost of the conversion; it's really about what you're doing. If you're doing deferred rendering, then you're probably also doing HDR lighting and so forth. So your light accumulation buffer needs to be floating-point. And float buffers are always linear; there's no reason for them to not be linear.
Your final image, the default framebuffer, must be sRGB if you want to take advantage of free gamma correction (and you do). If you do all your work in HDR float buffers, and then tone-map the result down for the final display, you should write that to an sRGB image.
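A minimal sketch of that last step (hypothetical names: tonemapProg is a tone-mapping shader program and hdrTex the floating-point light accumulation texture, with drawFullscreenQuad a stand-in helper):

    /* Final pass: tone-map the linear HDR result into the sRGB-capable
       default framebuffer; the hardware performs the sRGB encode on
       write. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glEnable(GL_FRAMEBUFFER_SRGB);
    glUseProgram(tonemapProg);            /* outputs linear [0,1] colors */
    glBindTexture(GL_TEXTURE_2D, hdrTex); /* e.g. a GL_RGBA16F buffer    */
    drawFullscreenQuad();                 /* hypothetical helper         */
    glDisable(GL_FRAMEBUFFER_SRGB);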

sRGB FBO render to texture

In my renderer, I produce an anti-aliased scene on a multisampled FBO, which is blitted to an FBO whose color attachment is a texture. The texture is then read during rendering to the framebuffer.
I'd like to update it so that I get gamma-correct results. The benefit of using an sRGB framebuffer is that it allows me to have a somewhat better color precision by storing nonlinear sRGB values directly in the framebuffer.
What I'm not sure about is what changes should I be making to get this, and what is being changed by the different settings.
It looks like the ARB_framebuffer_sRGB extension only deals with reading and blending operations on sRGB framebuffers. In my situation I'll need a texture with an sRGB internal format, which means using the EXT_texture_sRGB extension... using a linear texture format would disable the sRGB translation.
Edit: But I just saw this:
3) Should the ability to support sRGB framebuffer update and blending be an attribute of the framebuffer?

RESOLVED: Yes. It should be a capability of some pixel formats (mostly likely just RGB8 and RGBA8) that says sRGB blending can be enabled.

This allows an implementation to simply mark the existing RGB8 and RGBA8 pixel formats as supporting sRGB blending and then just provide the functionality for sRGB update and blending for such formats.
Now I'm not so sure what to specify for my texture's pixel format.
Okay, and what about renderbuffers? The ARB_framebuffer_sRGB doc does not mention anything about renderbuffers. Is it possible to use glRenderbufferStorageMultisample with an sRGB format, so I can get sRGB storage with blending enabled?
Also, what is the difference between GL_SRGB_ALPHA and GL_SRGB8_ALPHA8 when specifying the internal format for glTexImage2D?
What I'm not sure about is what changes should I be making to get this
That's because your question seems unsure about what you're trying to do.
The key to all of this stuff is to at all times know what your input data is and what your output data is.
Your first step is to know what is stored in each of your textures. Does a particular texture store linear data or data in the sRGB colorspace? If it stores linear data, then use one of the linear image formats. If it stores sRGB colorspace data, then use one of the sRGB image formats.
This ensures that you are fetching the data you want in your shaders. When it comes time to write/blend them to the framebuffer, you now need to decide how to handle that.
Your screen expects values that have been pre-gamma corrected to the gamma of the display device. As such, if you provide linear values, you will get incorrect color output.
However, sometimes you want to write intermediate values. For example, if you're doing forward or deferred rendering, you will write accumulated lighting to a floating-point buffer, then use HDR tone mapping to boil it down to a [0, 1] image for display. Post-processing techniques can likewise work on these intermediate images. Only the final [0, 1] outputs need to go to images in the sRGB colorspace.
When writing linear RGB values that you want converted into sRGB, you must enable GL_FRAMEBUFFER_SRGB. This is a special enable (note that textures don't have a way to turn off sRGB decoding) because sometimes, you want to write values that already are in sRGB. This is often the case for GUI interface widgets, which were designed and built using colors already in the sRGB colorspace.
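For instance, a sketch with hypothetical draw helpers:

    glEnable(GL_FRAMEBUFFER_SRGB);   /* scene shaders write linear values */
    drawScene();                     /* hypothetical helper               */
    glDisable(GL_FRAMEBUFFER_SRGB);  /* GUI colors are already sRGB       */
    drawGUI();                       /* hypothetical helper               */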
I cover issues relating to writing gamma-correct values and reading them from textures in my tutorial series. The first one explains why gamma is important and does explicit gamma correction in the shader. The second covers how to use sRGB images, both in textures and framebuffers.
Okay, and what about renderbuffers? The ARB_framebuffer_sRGB doc does not mention anything about renderbuffers.
And why would it? ARB_framebuffer_sRGB is only interested in the framebuffer and the nature of images in it. It neither knows nor cares where those images come from. It doesn't care if it's talking about the default framebuffer, a texture attached to an FBO, a renderbuffer attached to an FBO, or something entirely new someone comes up with tomorrow.
The extension states what happens when the destination image is in the sRGB colorspace and when GL_FRAMEBUFFER_SRGB is enabled. Where that image comes from is up to you.
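So, to answer the renderbuffer part directly: something like the following sketch (sizes illustrative) should work wherever GL_SRGB8_ALPHA8 is a supported color-renderable format:

    GLuint rbo;
    glGenRenderbuffers(1, &rbo);
    glBindRenderbuffer(GL_RENDERBUFFER, rbo);
    /* sRGB-encoded, multisampled color storage; blending against it is
       linearized while GL_FRAMEBUFFER_SRGB is enabled */
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_SRGB8_ALPHA8,
                                     width, height);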
Also, what is the difference between GL_SRGB_ALPHA and GL_SRGB8_ALPHA8 when specifying the internal format for glTexImage2D?
One is sized. The other is not. In theory, GL_SRGB_ALPHA could give you any bitdepth the implementation wanted. It could give you 2 bits per component. You're giving the implementation freedom to pick what it wants.
In practice, I doubt you'll find a difference. That being said, always use sized internal formats whenever possible. It's good to be specific about what you want, and it prevents the implementation from doing something stupid. OpenGL even has a list of sized formats that implementations are required to support exactly as specified.
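For illustration, the difference is only the internalformat argument (w, h, and pixels are placeholders):

    /* Unsized: the implementation picks the bitdepth */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB_ALPHA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    /* Sized: explicitly request 8 bits per component */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);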