OpenGL sRGB framebuffer oddity - opengl

I'm using GLFW3 to create a context and I've noticed that the GLFW_SRGB_CAPABLE property seems to have no effect. Regardless of what I set it to, I always get sRGB conversion when GL_FRAMEBUFFER_SRGB is enabled. My understanding is that when GL_FRAMEBUFFER_SRGB is enabled, you get sRGB conversion only if the framebuffer is in an sRGB format. To add to the confusion, if I check GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING I get GL_LINEAR regardless of what I set GLFW_SRGB_CAPABLE to. This doesn't appear to be an issue with GLFW: I created a window and context manually and made sure to set WGL_FRAMEBUFFER_SRGB_CAPABLE_ARB to true.
I'm using a Nvidia GTX 760 with the 340.76 drivers. I'm checking the format like this:
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_FRONT_LEFT, GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING, &enc);
This should return GL_SRGB, should it not? If the driver applies sRGB correction regardless of what WGL_FRAMEBUFFER_SRGB_CAPABLE_ARB is set to, then isn't Nvidia's driver broken? Has nobody noticed this until now?
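For reference, here is a minimal sketch of the kind of setup in question, assuming GLFW3; the window parameters are illustrative, not necessarily the asker's exact code:
glfwWindowHint(GLFW_SRGB_CAPABLE, GL_TRUE);   /* request an sRGB-capable default framebuffer */
GLFWwindow *window = glfwCreateWindow(800, 600, "srgb-test", NULL, NULL);
glfwMakeContextCurrent(window);
glEnable(GL_FRAMEBUFFER_SRGB);                /* should only take effect on sRGB-capable buffers */
GLint enc = 0;
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_FRONT_LEFT,
                                      GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING, &enc);
/* expected: GL_SRGB if the hint was honoured, GL_LINEAR otherwise */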

It seems that this is only an issue with the default framebuffer, therefore it must be a bug in Nvidia's WGL implementation. I've pointed it out to them, so hopefully it will be fixed.

With GLX (Linux) I experience the same behaviour: it reports linear despite clearly rendering as sRGB. One way to verify that it is in fact working is to use an sRGB texture with texel value 1, render it to your sRGB framebuffer, and check that it shows a dark-grey square. (For comparison, do the same when the texture is not an sRGB texture, still with texel value 1; that should give a lighter-grey square.)
You can see this example: https://github.com/julienaubert/glsrgb
Interestingly, with an OpenGL ES context, the (almost) same code does not render correctly.
There is a topic on nvidia's developer OpenGL forum:
https://devtalk.nvidia.com/default/topic/776591/opengl/gl_framebuffer_srgb-functions-incorrectly/
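A minimal sketch of that dark-grey-square test, assuming a context where GL_FRAMEBUFFER_SRGB is available and some existing code that draws a textured quad (the quad drawing itself is omitted):
GLubyte texel[3] = { 1, 1, 1 };              /* texel value 1 out of 255 */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8,     /* use GL_RGB8 for the comparison case */
             1, 1, 0, GL_RGB, GL_UNSIGNED_BYTE, texel);
glEnable(GL_FRAMEBUFFER_SRGB);
/* ... draw a quad sampling this texture and compare the two variants ... */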

Related

Why doesn't binding to the default texture with GL_REPLACE actually replace the color drawn?

When GL_TEXTURE_ENV_MODE is set to GL_REPLACE, drawing a vertex uses the color of the currently bound texture at the current texture coordinate.
I found out that binding to the default texture with glBindTexture(GL_TEXTURE_2D, 0) has the same effect as calling glDisable(GL_TEXTURE_2D).
However, a Stack Overflow answer here quotes the OpenGL spec (3.9.2, Texture Access): sampling from an incomplete texture returns (0,0,0,1), and the default texture is initially an incomplete texture.
So I think I should always get black objects drawn in the given case, but in practice it behaves as if texturing is disabled.
Why doesn't binding to the default texture with GL_REPLACE actually replace the color drawn?
First of all, the "default texture object" only exists in compatibility profile (or legacy GL contexts < 3.2).
It then depends on whether you sample from a texture in a shader or use the legacy fixed-function pipeline. Since you care about the texture environment, you are using the fixed-function pipeline, and for that you have to consult the compatibility profile specification. The GL 4.6 compatibility profile spec clearly states in section 8.17.2, "Effects of Completeness on Texture Application":
For fixed-function texture access, if texturing is enabled for a
texture unit at the time a primitive is rasterized, and if the texture
image bound to the enabled texture target is not complete, then it is
as if texture mapping were disabled for that texture unit.
This behavior has been that way since the very beginning of OpenGL, and it can't be changed in compatibility profiles as existing applications may depend on it. However, using the fixed function pipeline in 2022 is something you should do only with a very good reason.
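For illustration, a minimal fixed-function sketch of the situation from the question (compatibility profile assumed; drawQuad is a hypothetical helper that just issues the geometry):
glEnable(GL_TEXTURE_2D);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glBindTexture(GL_TEXTURE_2D, 0);   /* default texture object, incomplete until given data */
/* per the spec passage quoted above, rasterization now acts as if texturing
   were disabled for this unit, so the current color is used rather than the
   (0,0,0,1) a shader would get from sampling an incomplete texture */
drawQuad();                        /* hypothetical helper */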

OpenGL blending not working with DIB section

I'm rendering to a DIB section with blending using OpenGL on Windows XP. I want to multiply the source and destination colour components together, as in:
glEnable(GL_BLEND);
glBlendFunc(GL_DST_COLOR, GL_ZERO);
However, it fails to draw a blended image. By changing the type of blending I ask for, I can make it draw as if without blending, or not draw at all. But it refuses to blend.
Here are details about the OpenGL version I’m using:
Vendor: Microsoft Corporation
Renderer: GDI Generic
Version: 1.1.0
Extensions: GL_WIN_swap_hint GL_EXT_bgra GL_EXT_paletted_texture
I was aware that I’m limited to “generic” (software) rendering with DIB sections, but I did not expect blending to fail. I have searched for confirmation about whether blending is or is not supported in such cases, but to no avail.
glBlendFunc(GL_DST_COLOR, GL_ZERO);
^^^ oh?
Transparency, Translucency, and Blending:
15.060 I want to use blending but can’t get destination alpha to work. Can I blend or create a transparency effect without destination alpha?
Many OpenGL devices don't support destination alpha. In particular, the OpenGL 1.1 software rendering libraries from Microsoft don't support it. The OpenGL specification doesn't require it.
Also:
No Alpha in the Framebuffer:
If you are doing Blending and you need a destination alpha, you need to make sure that your render target has one. This is easy to ensure when rendering to a Framebuffer Object. But with a Default Framebuffer, it depends on how you created your OpenGL Context.
For example, if you are using GLUT, you need to make sure you pass GLUT_ALPHA to the glutInitDisplayMode function.
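For example, a sketch with GLUT (only relevant if your blend function actually reads destination alpha, which GL_DST_COLOR/GL_ZERO does not):
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_ALPHA | GLUT_DEPTH);  /* ask for destination alpha */
/* after the context exists, verify what you actually got: */
GLint alphaBits = 0;
glGetIntegerv(GL_ALPHA_BITS, &alphaBits);
/* alphaBits == 0 means there is no destination alpha to blend against */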
Ok I made a silly mistake: I misread my own script code and ended up applying the texture in the wrong rendering pass. OpenGL wasn’t to blame, and blending DOES work.

OpenGL Framebuffer size limit is 0

I use OpenGL Framebuffer Objects (FBO) to render to textures, using either the GL_ARB_framebuffer_object or the GL_EXT_framebuffer_object extension.
However, there is a significant number of video cards (mostly Intel, with OpenGL 2.0 and even 3.0) that support GL_ARB_framebuffer_object but report GL_MAX_FRAMEBUFFER_WIDTH=0 and GL_MAX_FRAMEBUFFER_HEIGHT=0, so it fails when I try to attach a texture to the FBO.
Does it really mean that FBO can't be used for rendering to textures on these videocards? Is there a workaround?
Render to texture is a very important rendering technique, and it works well with Direct3D everywhere, so there should be a way to use it using OpenGL too.
However, there is a significant number of video cards (mostly Intel, with OpenGL 2.0 and even 3.0) that support GL_ARB_framebuffer_object but report GL_MAX_FRAMEBUFFER_WIDTH=0 and GL_MAX_FRAMEBUFFER_HEIGHT=0, so it fails when I try to attach a texture to the FBO.
Neither GL_ARB_framebuffer_object nor GL_EXT_framebuffer_object defines GL_MAX_FRAMEBUFFER_WIDTH or GL_MAX_FRAMEBUFFER_HEIGHT.
These enums were actually added in GL_ARB_framebuffer_no_attachments (core since OpenGL 4.3), so it is no wonder that some Intel GPUs don't support them. (If you checked for errors, you would notice a GL_INVALID_ENUM from those glGet calls: the query is not returning zero, it is erroring out and leaving the contents of your variable unchanged.) More to the point, these limits are only relevant for a framebuffer without any attachments, so you are querying the wrong property here.
There is no explicit size limit for a framebuffer as a whole, but there are size limits for each attachment type. Renderbuffers can be at most GL_MAX_RENDERBUFFER_SIZE in each dimension, and 2D textures at most GL_MAX_TEXTURE_SIZE. If you want to render to the target in a single pass, you might also care about GL_MAX_VIEWPORT_DIMS.
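A sketch of querying the limits that actually apply, with an error check so an unsupported enum does not go unnoticed:
GLint maxRb = 0, maxTex = 0, maxVp[2] = { 0, 0 };
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE, &maxRb);   /* renderbuffer attachments */
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTex);       /* 2D texture attachments */
glGetIntegerv(GL_MAX_VIEWPORT_DIMS, maxVp);        /* single-pass rendering */
if (glGetError() != GL_NO_ERROR) {
    /* e.g. GL_INVALID_ENUM: the query is unsupported on this context and the
       output variable was left unchanged */
}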

Creating frame buffer object with no color attachment

I know that we can create such an FBO (no color attachment, just a depth attachment); this can be used for shadow mapping, for example.
Also, the FBO completeness check states that
Each draw buffer must either specify color attachment points that have
images attached or must be GL_NONE (GL_FRAMEBUFFER_INCOMPLETE_DRAW_BUFFER
when false). Note that this test is not performed if OpenGL 4.2 or
ARB_ES2_compatibility is available.
My question is, is it necessary to explicitly mention this by using
glDrawBuffer(GL_NONE);
If I don't specify any color attachments, is it not understood by OpenGL that there will not be any color buffers attached?
(My program worked fine on OpenGL 4.0 without calling glDrawBuffer(GL_NONE); so I assume it's okay not to, but the wiki says the FB completeness check should have failed.)
In my application, using a depth buffer for shadow mapping, NOT calling
glDrawBuffer(GL_NONE);
does NOT result in an incomplete framebuffer if this framebuffer has no color attachments.
However, everything does turn into crap, and the depth texture is apparently either not writable or not readable, or both.
Why this is so, and whether it is universally so, I will leave open. I am just reporting my findings, and they indicate that you should be cautious about omitting this call.
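For reference, a sketch of a depth-only FBO that states its intent explicitly (size and formats are illustrative):
GLuint fbo, depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);
glDrawBuffer(GL_NONE);   /* no color buffer will be written */
glReadBuffer(GL_NONE);   /* and none will be read */
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* handle the incomplete framebuffer */
}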

When to call glEnable(GL_FRAMEBUFFER_SRGB)?

I have a rendering system where I draw to an FBO with a multisampled renderbuffer, then blit it to another FBO with a texture attachment to resolve the samples, and then read from that texture to do post-processing shading while drawing to the backbuffer (FBO index 0).
Now I'd like to get correct sRGB output... The problem is that the behavior of the program is inconsistent between OS X and Windows, and it also changes depending on the machine: on Windows with an Intel HD 3000 it will not apply the sRGB nonlinearity, but on my other machine with an Nvidia GTX 670 it does. On the Intel HD 3000 under OS X it also applies it.
So this probably means that I'm not setting my GL_FRAMEBUFFER_SRGB enable state at the right points in the program. However, I can't seem to find any tutorials that actually tell me when I ought to enable it; they only ever mention that it's dead easy and comes at no performance cost.
I am currently not loading in any textures so I haven't had a need to deal with linearizing their colors yet.
To force the program not to simply spit the linear color values back out, what I have tried is to comment out my glDisable(GL_FRAMEBUFFER_SRGB) line, which effectively leaves this setting enabled for the entire pipeline, and I redundantly force it back on every frame.
I don't know if this is correct or not. It certainly does apply a nonlinearity to the colors, but I can't tell if it is getting applied twice (which would be bad): it could apply the gamma as I render to my first FBO, and it could do it again when I blit the first FBO to the second FBO.
I've gone so far as to take screen shots of my final frame and compare raw pixel color values to the colors I set them to in the program:
I set the input color to RGB(1,2,3) and the output is RGB(13,22,28).
That seems like quite a lot of color compression at the low end and leads me to question if the gamma is getting applied multiple times.
I have just now gone through the sRGB equation and I can verify that the conversion is only applied once: linear 1/255, 2/255, and 3/255 do indeed map to sRGB 13/255, 22/255, and 28/255 using the equation 1.055*C^(1/2.4) - 0.055. Given that the expansion is so large for these low color values, it really should be obvious if the sRGB transform were applied more than once.
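A tiny standalone check of that mapping, using the piecewise sRGB encode (the linear segment below 0.0031308 does not apply to these values):
#include <math.h>
#include <stdio.h>

/* linear -> sRGB encode, piecewise as in the sRGB specification */
static double to_srgb(double c)
{
    return (c <= 0.0031308) ? 12.92 * c : 1.055 * pow(c, 1.0 / 2.4) - 0.055;
}

int main(void)
{
    for (int i = 1; i <= 3; ++i)
        printf("%d/255 -> %.0f/255\n", i, to_srgb(i / 255.0) * 255.0);
    /* prints 13, 22 and 28, matching the captured pixel values */
    return 0;
}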
So I still haven't determined what the right thing to do is. Does glEnable(GL_FRAMEBUFFER_SRGB) only apply to the final framebuffer values, in which case I can just set it once during my GL init routine and forget about it?
When GL_FRAMEBUFFER_SRGB is enabled, all writes to an image with an sRGB image format will assume that the input colors (the colors being written) are in a linear colorspace. Therefore, it will convert them to the sRGB colorspace.
Any writes to images that are not in the sRGB format should not be affected. So if you're writing to a floating-point image, nothing should happen. Thus, you should be able to just turn it on and leave it that way; OpenGL will know when you're rendering to an sRGB framebuffer.
In general, you want to work in a linear colorspace for as long as possible. Only your final render, after post-processing, should involve the sRGB colorspace. So your multisampled framebuffer should probably remain linear (though you should give its colors higher precision to preserve accuracy: use GL_RGB10_A2 or GL_R11F_G11F_B10F, or GL_RGBA16F as a last resort).
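As a sketch of that arrangement (formats, sizes, and the names msColorRb and resolveTex are placeholders): keep the intermediate targets linear and let only the final, sRGB-capable target receive the encode.
/* intermediate multisampled color buffer stays linear, with extra precision */
glBindRenderbuffer(GL_RENDERBUFFER, msColorRb);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGB10_A2, width, height);

/* resolve target also stays linear so post-processing works on linear values */
glBindTexture(GL_TEXTURE_2D, resolveTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB10_A2, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

/* enable once at init: writes to linear images are untouched, writes to the
   sRGB-capable backbuffer during the post-processing pass are encoded */
glEnable(GL_FRAMEBUFFER_SRGB);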
As for this:
On Windows with the Intel HD 3000 it will not apply the sRGB nonlinearity
That is almost certainly due to Intel sucking at writing OpenGL drivers. If it's not doing the right thing when you enable GL_FRAMEBUFFER_SRGB, that's because of Intel, not your code.
Of course, it may also be that Intel's drivers didn't give you an sRGB image to begin with (if you're rendering to the default framebuffer).