I'm using SDL2 on Windows 10 to create an OpenGL context, but when I try to get the framebuffer attachment color encoding on an Intel UHD 630, I get an Invalid Operation error instead. On my Nvidia GeForce 1070, it returns the correct value (GL_LINEAR).
According to Khronos, my code should work:
checkGlErrors(); // no error
auto params = GLint{ 0 };
glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER,
                                      GL_BACK,
                                      GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING,
                                      &params);
checkGlErrors(); // invalid operation
What I'm reading is that Intel drivers are notoriously unreliable, but I'd be surprised if they didn't support an sRGB framebuffer. For context: I'm using GL_FRAMEBUFFER_SRGB for gamma correction, and that doesn't work on my integrated GPU either, even though it works perfectly on my Nvidia GPU.
Am I doing something wrong? Is there a reliable way of enabling sRGB on the default framebuffer? My drivers are up to date (27.10.100.8425). The output is gamma-corrected on my GeForce GPU, but the integrated Intel GPU renders the default framebuffer without gamma correction, so I'm assuming there's something unique about the default framebuffer in the Intel drivers that I don't know about.
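For reference, the sRGB part of my setup boils down to something like this (a minimal sketch, not my exact code; SDL2's SDL_GL_FRAMEBUFFER_SRGB_CAPABLE attribute and the window parameters are assumptions here):
// Sketch: request an sRGB-capable default framebuffer before creating the
// window/context, then enable linear->sRGB encoding on writes.
SDL_GL_SetAttribute(SDL_GL_FRAMEBUFFER_SRGB_CAPABLE, 1);
SDL_Window* window = SDL_CreateWindow("demo",
    SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
    1280, 720, SDL_WINDOW_OPENGL);
SDL_GLContext context = SDL_GL_CreateContext(window);
glEnable(GL_FRAMEBUFFER_SRGB);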
Edit: The correct value should be GL_SRGB, not GL_LINEAR. The correct code for getting that parameter:
// default framebuffer, using glGetFramebuffer
glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER, GL_FRONT_LEFT, GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING, &params);
// default framebuffer, using glGetNamedFramebuffer
glGetNamedFramebufferAttachmentParameteriv(GL_ZERO, GL_FRONT_LEFT, GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING, &params);
// named framebuffer
glGetNamedFramebufferAttachmentParameteriv(namedFramebuffer.getId(), GL_COLOR_ATTACHMENT0, GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING, &params);
It looks like it's just a driver bug. 100.8425 (the current beta driver) works well except for the sRGB issue; the current stable driver, along with several other newer drivers, displays flickering green horizontal lines in the window. I reverted to 100.8190 and now the window renders correctly, identical to the GeForce GPU.
This sums up the situation: "Intel drivers suck, don't trust anything they do."
Related
I am rendering 16-bit grayscale images using OpenGL and need to fetch the rendered data to a buffer without bit-depth decimation. My code works perfectly on Intel(R) HD Graphics 4000, but on Nvidia graphics cards (NVIDIA Corporation GeForce G 105M/PCIe/SSE2, NVIDIA Corporation GeForce GT 640M LE/PCIe/SSE2) it fails with status GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT when I try to render to a texture with internal format GL_R16_SNORM. It works when I use the GL_R16 format, but then I also lose all negative values of the rendered texture. With Intel HD Graphics it works with both GL_R16_SNORM and GL_R16.
// allocate a single-channel 16-bit signed normalized texture (no initial data)
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16_SNORM, width, height, 0, GL_RED, GL_SHORT, null);
// attach it as the FBO's color attachment and check completeness
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferId);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureId, 0);
glDrawBuffers(1, new int[] {GL_COLOR_ATTACHMENT0}, 0);
int status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
What is the reason for the GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT status on Nvidia cards? Can you advise how to debug this situation? (Unfortunately, glGetError() == GL_NO_ERROR.)
EDIT: After further testing I found out that it also works on ATI graphics cards. As Reto Koradi answered, support in OpenGL versions before 4.4 is manufacturer-specific, and it seems to work on ATI and Intel cards but not on Nvidia cards.
GL_R16_SNORM only became a required color-renderable format with OpenGL 4.4.
If you look at the big texture format table for example in the 4.3 spec (starting on page 179), the "color-renderable" field is checked for R16_SNORM. But then on page 178, under "Required Texture Formats", R16_SNORM is listed as a "texture-only color format". This means that while implementations can support rendering to this format, it is not required.
In the 4.4 spec, the table (starting on page 188) basically says the same. R16_SNORM has a checkmark in the column "CR", which means "color-renderable". But the rules were changed. In 4.4 and later, all color-renderable formats have to be supported by implementations. Instead of containing lists, the "Required Texture Formats" section on page 186 now simply refers to the table. This is also mentioned in the change log section of the spec.
This means that unless you require at least OpenGL 4.4, support for rendering to this format is indeed vendor/hardware dependent.
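One way to detect this at runtime, assuming OpenGL 4.3 or ARB_internalformat_query2 is available, is to ask the implementation whether the format is renderable and fall back to a format that is required to be color-renderable (sketched in C; variable names are placeholders):
GLint support = GL_NONE;
glGetInternalformativ(GL_TEXTURE_2D, GL_R16_SNORM,
                      GL_FRAMEBUFFER_RENDERABLE, 1, &support);
// GL_R16F has been a required color-renderable format since GL 3.0,
// so it makes a reasonable fallback (at the cost of the SNORM semantics).
GLenum internalFormat = (support == GL_FULL_SUPPORT) ? GL_R16_SNORM : GL_R16F;
glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, width, height, 0,
             GL_RED, GL_SHORT, NULL);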
I'm using GLFW3 to create a context and I've noticed that the GLFW_SRGB_CAPABLE property seems to have no effect. Regardless of what I set it to, I always get sRGB conversion when GL_FRAMEBUFFER_SRGB is enabled. My understanding is that when GL_FRAMEBUFFER_SRGB is enabled, you get sRGB conversion only if the framebuffer is an sRGB format. To add to the confusion, if I check GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING, I get GL_LINEAR regardless of what I set GLFW_SRGB_CAPABLE to. This doesn't appear to be an issue with GLFW: I created a window and context manually and made sure to set WGL_FRAMEBUFFER_SRGB_CAPABLE_ARB to true.
I'm using a Nvidia GTX 760 with the 340.76 drivers. I'm checking the format like this:
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_FRONT_LEFT, GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING, &enc);
This should return GL_SRGB, should it not? If it is applying sRGB conversion regardless of what WGL_FRAMEBUFFER_SRGB_CAPABLE_ARB is set to, then isn't Nvidia's driver broken? Has nobody noticed this until now?
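For reference, the hint itself is set like this (a minimal sketch with current GLFW3 names; the rest of the window/context creation is omitted):
glfwWindowHint(GLFW_SRGB_CAPABLE, GLFW_TRUE);  // GLFW_TRUE in newer GLFW; GL_TRUE in older 3.x headers
GLFWwindow* window = glfwCreateWindow(1280, 720, "srgb test", NULL, NULL);
glfwMakeContextCurrent(window);
glEnable(GL_FRAMEBUFFER_SRGB);  // should only convert if the framebuffer is actually sRGB-capable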
It seems that this is only an issue with the default framebuffer, so it must be a bug in Nvidia's WGL implementation. I've pointed it out to them, so hopefully it will be fixed.
With GLX (Linux), I experience the same behaviour. It will report linear despite clearly rendering as sRGB. One way to verify that it is in fact working is to use an sRGB texture with texel value 1, render that texture to your sRGB framebuffer, and check that it shows a dark-grey square. (For comparison, see how it looks when the texture is not an sRGB texture; still with texel value 1, that should give a lighter-grey square.)
You can see this example: https://github.com/julienaubert/glsrgb
Interestingly, with an OpenGL ES context, the (almost) same code does not render correctly.
There is a topic on nvidia's developer OpenGL forum:
https://devtalk.nvidia.com/default/topic/776591/opengl/gl_framebuffer_srgb-functions-incorrectly/
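To make the texture test above concrete, here is a minimal sketch (sizes and names are just for illustration): upload the same 1-texel data once as sRGB and once as linear, draw both to the supposedly sRGB framebuffer, and compare the two squares.
const GLubyte texel[4] = { 1, 1, 1, 255 };  // texel value 1, as described above

GLuint srgbTex, linearTex;
glGenTextures(1, &srgbTex);
glBindTexture(GL_TEXTURE_2D, srgbTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, texel);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

glGenTextures(1, &linearTex);
glBindTexture(GL_TEXTURE_2D, linearTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, texel);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
// Draw a quad with each texture; the sRGB one should come out darker, as described above.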
Now, this is an extremely odd behavior.
TL;DR -- in a render-to-texture setup, upon resizing the window (framebuffer 0) only the very next call to glClear(GL_COLOR_BUFFER_BIT) for bound framebuffer 0 (the window's client area) reports GL_OUT_OF_MEMORY, only on one of the two GPUs, yet rendering still proceeds correctly.
Now, all the gritty details:
So this is on a Vaio Z with two GPUs (which can be switched to with a physical toggle button on the machine):
OpenGL 4.2.0 # NVIDIA Corporation GeForce GT 640M LE/PCIe/SSE2 (GLSL: 4.20 NVIDIA via Cg compiler)
OpenGL 4.0.0 - Build 9.17.10.2867 # Intel Intel(R) HD Graphics 4000 (GLSL: 4.00 - Build 9.17.10.2867)
My program is in Go 1.0.3 64-bit under Win 7 64-bit using GLFW 64-bit.
I have a fairly simple and straightforward render-to-texture "mini pipeline". First, normal 3D geometry is rendered with the simplest of shaders (no lighting, nothing, just textured triangle meshes which are just a number of cubes and planes) to a framebuffer that has both a depth/stencil renderbuffer as depth/stencil attachment and a texture2D as color attachment. For the texture all filtering is disabled as are mip-maps.
Then I render a full-screen quad (actually a single "oversized" full-screen tri) that just samples from said framebuffer texture (color attachment) with texelFetch(tex, ivec2(gl_FragCoord.xy), 0), so no wrapping is used.
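Roughly, the setup looks like this (a sketch in C with placeholder sizes; my actual code goes through Go bindings, so this is just to illustrate the attachments involved):
GLuint fbo, colorTex, depthStencilRbo;

// color attachment: plain texture, no filtering, no mipmaps
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// depth/stencil attachment: renderbuffer
glGenRenderbuffers(1, &depthStencilRbo);
glBindRenderbuffer(GL_RENDERBUFFER, depthStencilRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthStencilRbo);
// second pass: bind framebuffer 0, draw the fullscreen tri, sample colorTex with texelFetch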
Both GPUs render this just fine, both when I force a core profile and when I don't. No GL errors are ever reported for this, all renders as expected too. Except when I resize the window while using the Intel HD 4000 GPU's GL 4.0 renderer (both in Core profile and Comp profile). Only in that case, a single resize will record a GL_OUT_OF_MEMORY error directly after the very next glClear(GL_COLOR_BUFFER_BIT) call on framebuffer 0 (the screen), but only once after the resize, not in every subsequent loop iteration.
Interestingly, I don't even actually do any allocations on resize! I have temporarily disabled ALL logic occurring on window resize; that is, right now I simply ignore the window-resize event entirely, meaning the RTT framebuffer and its depth and color attachment resolutions are not even changed/recreated. Meaning the next glViewport will still use the same dimensions as when the window and GL context were first created, but anyhoo, the error occurs on glClear() (not before, only after, only once; I've double- and triple-checked).
Would this be a driver bug, or is there anything I could be doing wrongly here?
Interesting glitch in the HD's GL driver, it seems:
When I switched to the render-to-texture setup, I also set the depth/stencil bits at GL context creation for the primary framebuffer 0 (i.e. the screen/window) both to 0. That is when I started seeing this error, and the framerate became quite sluggish compared to before.
I experimentally set the (strictly speaking unneeded) depth bits to 8 and both of these issues are gone. So it seems the current HD 4000 GL 4.0 driver version just doesn't like a value of 0 for its depth-buffer bits at GL context creation.
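In other words, the workaround is simply to ask for a non-zero depth buffer at window creation; with GLFW 3 naming (my code goes through Go bindings, so this is just a sketch) that amounts to:
glfwWindowHint(GLFW_DEPTH_BITS, 8);    // a value of 0 here triggers the GL_OUT_OF_MEMORY glitch on the HD 4000 driver
glfwWindowHint(GLFW_STENCIL_BITS, 0);
GLFWwindow* window = glfwCreateWindow(1280, 720, "rtt", NULL, NULL);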
I'm writing a simple C++ application to figure out why I can't display any NPOT texture using GL_TEXTURE_2D as the texture target. At some point I'll need to generate mipmaps, so using a rectangle texture isn't an option.
I'm running Win7 Pro (x64), an Intel i7-2600K CPU, 8 GB of RAM, and an NVIDIA Quadro 600. My Quadro's driver is 296.35, which supports OpenGL 4.2.
1 Working version
When using the "default" texture, it runs smoothly and display any NPOT texture.
glBindTexture( target = GL_TEXTURE_2D, texture = 0 )
2 Broken version
Since I'll need more than one texture, I need to get their "names".
As soon as I try to bind a named texture (calling glGenTextures and passing that texture to glBindTexture), all I get is a white rectangle.
glGenTextures(n = 1, textures = &2)
glBindTexture(target = GL_TEXTURE_2D, texture = 2)
I'm sure that my hardware supports NPOT textures; I've checked the OpenGL extensions and GL_ARB_texture_non_power_of_two is listed. Also, I'm using up-to-date OpenGL headers instead of the ones that ship with Windows.
I've also asked this question on the Nvidia forums, and in that post you'll find both log files.
GL_TEXTURE_MIN_FILTER defaults to GL_NEAREST_MIPMAP_LINEAR.
Upload mipmaps or switch to GL_NEAREST/GL_LINEAR.
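For example (a sketch, assuming textureId is the name returned by glGenTextures), right after binding the new texture:
glBindTexture(GL_TEXTURE_2D, textureId);
// without mipmap levels the texture is incomplete under the default
// GL_NEAREST_MIPMAP_LINEAR minification filter, which typically shows up as white
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// ...or keep mipmapped filtering and generate the missing levels instead:
// glGenerateMipmap(GL_TEXTURE_2D);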
Apple's OpenGL Shader Builder lets you drop in your vertex (or fragment) shader; it will link and validate it and then tell you which GL_RENDERER is used for that shader. For me it shows either Apple Software Renderer (in red, because it means the shader will be dog slow) or AMD Radeon HD 6970M OpenGL Engine (i.e. my GPU's renderer, which is where I usually want the shader to run).
How can I also determine this at runtime in my own software?
Edit:
Querying GL_RENDERER in my CPU code always seems to return AMD Radeon HD 6970M OpenGL Engine regardless of where I place it in the draw loop even though I'm using a shader that OpenGL Shader Builder says is running on Apple Software Renderer (and I believe it because it's very slow). Is it a matter of querying GL_RENDERER at just the right time? If so, when?
The renderer used is tied to the OpenGL context, and a proper OpenGL implementation should not switch renderers in between. Of course, an OpenGL implementation may be built on infrastructure that dynamically switches between backend renderers, but this must then be reflected in the frontend context's renderer string.
So what you do is indeed the correct method.
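That is, a query along these lines, done once after the context is current, is all there is to it (a minimal sketch):
const char* renderer = (const char*)glGetString(GL_RENDERER);
const char* vendor   = (const char*)glGetString(GL_VENDOR);
printf("GL_RENDERER: %s (GL_VENDOR: %s)\n", renderer, vendor);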