Known bugs in OpenGL 3, OpenGL 4 implementations

As we all learn eventually, the specification is one thing and the implementation is another. Most bugs we cause ourselves, but sometimes that's not the case.
I believe it'd be useful to make a small list of:
What are the currently known bugs in the GPU drivers, related to the implementation of recent versions of OpenGL and GLSL?
Please remember to always post the relevant graphics card and driver version.

Let me start:
GPU: confirmed on AMD/ATI Radeon HD 4650
Type: GLSL problem
GL version related: confirmed on 3.3, probably 3.1 and up (or even before)
Relevant link: http://forums.amd.com/devforum/messageview.cfm?catid=392&threadid=139288
Driver version: confirmed on Catalyst 10.10 (9-28-2010)
Status: as of 2010-11-27 a fix exists, but it apparently hasn't reached a public driver release yet (so even once the fix ships, users with not-so-recent drivers will still be affected for months)
Description:
If in your vertex shader you have any attribute (in) variable whose name is lexically after gl_, then you cannot use built-in attributes, namely gl_VertexID and gl_InstanceID. If you try, the shader won't work (most likely a blank screen).
Workaround (new):
Only available with GLSL 3.3 and up, or with the GL_ARB_explicit_attrib_location extension.
Define any attribute's location explicitly to be 0, by prefixing its declaration in the vertex shader with layout(location=0). You may, but don't have to, do this for other attributes; the important thing is that at least one attribute has its location explicitly set to 0. Once you do that, the naming no longer matters (see the sketch below).
Workaround (alternative):
Use a naming convention that requires attribute variables to start with a_, which won't hurt your code's readability and will place all of them lexically before gl_ (the safe zone).
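
A minimal vertex shader sketch of the explicit-location workaround described above; the attribute names are hypothetical, and the only thing that matters is that one attribute is pinned to location 0:

    // Sketch only: attribute names are made up for illustration.
    const char* vertexShaderSource = R"(
        #version 330 core
        layout(location = 0) in vec3 in_position;  // explicitly at location 0
        in vec3 in_normal;                         // other attributes may keep automatic locations
        flat out int vertexId;
        void main()
        {
            gl_Position = vec4(in_position, 1.0);
            vertexId = gl_VertexID;                // safe to use again
        }
    )";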

Another gl_VertexID bug:
GPU: NVIDIA GeForce 9400M
Type: GLSL problem
Driver version: NVDANV50Hal 1.6.36
OpenGL version: 2.1, GLSL 1.2 using GL_EXT_gpu_shader4 extension
This occurs on MacBooks. It's possible that the new driver enabling OpenGL 3.2 that ships with OS X Lion has fixed the issue, but a lot of frameworks are only configured to use the legacy 2.1 drivers, so this is still relevant.
If you read gl_VertexID before you read another attribute in a vertex shader, the latter attribute will return junk data. If the other attribute is gl_Color, regardless of how it is used, nothing will be rendered. Accessing other built-in attributes can lead to other strange behavior.
Workaround:
If you must use gl_VertexID, read all other attributes you will need first. If you read another attribute first, followed by gl_VertexID, any subsequent reads of the attribute will work fine.
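
A sketch of that ordering under GLSL 1.20 with GL_EXT_gpu_shader4; the a_position attribute is hypothetical:

    // Sketch only: read the regular attribute before touching gl_VertexID.
    const char* vertexShaderSource = R"(
        #version 120
        #extension GL_EXT_gpu_shader4 : enable
        attribute vec3 a_position;
        void main()
        {
            vec3 pos = a_position;                     // read the regular attribute first...
            float offset = 0.01 * float(gl_VertexID);  // ...then gl_VertexID can be read safely
            gl_Position = gl_ModelViewProjectionMatrix * vec4(pos.x + offset, pos.y, pos.z, 1.0);
        }
    )";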

Related

Vulkan: Geometry Shader Validation incorrect?

I am currently using an NVIDIA GeForce GTX 780 (from Gigabyte, if that matters - I don't know how much this could be affected by the onboard BIOS. I also have two of them installed, but since Vulkan has no SLI support I only use one device at a time in my code; in the NVIDIA control center, however, SLI is activated. I use the official driver version 375.63). That GPU is, of course, fully capable of geometry shaders.
I am using a geometry shader with the Vulkan API and it works all right and does everything I expect it to do. However, I get the following validation layer report: #[SC]: Shader requires VkPhysicalDeviceFeatures::geometryShader but is not enabled on the device.
Is this a bug? Does anyone have similar issues?
PS: http://vulkan.gpuinfo.org/displayreport.php?id=777#features says the support for "Geometry Shader" is "true", as expected. I am using the Vulkan 1.0.30.0 SDK.
Vulkan features work differently from OpenGL extensions. In OpenGL, if an extension is supported, then it's always active. In Vulkan, the fact that a feature is available is not sufficient. When you create a VkDevice, you must explicitly ask for all features you intend to use.
If you didn't ask for the Geometry Shader feature, then you can't use GS's, even if the VkPhysicalDevice advertises support for it.
So the sequence of steps should be to check to see if a VkPhysicalDevice supports the features you want to use, then supply those features in VkDeviceCreateInfo::pEnabledFeatures when you call vkCreateDevice.
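
A rough C++ sketch of that sequence; the queueInfo parameter is assumed to have been filled in elsewhere (queue family index, priorities):

    #include <vulkan/vulkan.h>

    // Sketch only: enables the geometry shader feature if and only if the device supports it.
    VkDevice createDeviceWithGeometryShader(VkPhysicalDevice physicalDevice,
                                            const VkDeviceQueueCreateInfo& queueInfo)
    {
        VkPhysicalDeviceFeatures supported{};
        vkGetPhysicalDeviceFeatures(physicalDevice, &supported);

        VkPhysicalDeviceFeatures enabled{};              // all features off by default
        if (supported.geometryShader == VK_TRUE)
            enabled.geometryShader = VK_TRUE;            // opt in explicitly

        VkDeviceCreateInfo deviceInfo{};
        deviceInfo.sType                = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
        deviceInfo.queueCreateInfoCount = 1;
        deviceInfo.pQueueCreateInfos    = &queueInfo;
        deviceInfo.pEnabledFeatures     = &enabled;      // this is what the validation layer checks

        VkDevice device = VK_NULL_HANDLE;
        vkCreateDevice(physicalDevice, &deviceInfo, nullptr, &device);
        return device;
    }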
Since Vulkan doesn't do validation checking on most of its inputs, the actual driver will likely assume you enabled the feature and just do what it normally would. But it is not required to do so; using a feature which has not been enabled is undefined behavior. So the validation layer is right to stop you.

ARB_draw_buffers available but not supported by shader engine

I'm trying to compile a fragment shader using:
#extension ARB_draw_buffers : require
but compilation fails with the following error:
extension 'ARB_draw_buffers' is not supported
However, when I check for the availability of this particular extension, either by calling glGetString(GL_EXTENSIONS) or using OpenGL Extension Viewer, I get positive results.
The OpenGL version is 3.1.
The graphics card is an Intel HD Graphics 3000.
What might be the cause of that?
Your driver in this scenario is 3.1; it is not clear what your targeted OpenGL version is.
If you can establish OpenGL 3.0 as the minimum required version, you can write your shader using #version 130 and avoid the extension directive altogether.
The ARB extension mentioned in the question is only there for drivers that cannot implement all of the features required by OpenGL 3.0, but have the necessary hardware support for this one feature.
That was its intended purpose, but there do not appear to be many driver / hardware combinations in the wild that actually have this problem. You probably do not want the headache of writing code that supports them anyway ;)
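
For illustration, a fragment shader sketch that targets GLSL 1.30 and therefore needs no extension directive; the output names are hypothetical and would be bound to draw buffers with glBindFragDataLocation before linking:

    // Sketch only: with #version 130, writing to multiple draw buffers is core functionality.
    const char* fragmentShaderSource = R"(
        #version 130
        out vec4 out_color0;   // bound to a color number via glBindFragDataLocation
        out vec4 out_color1;
        void main()
        {
            out_color0 = vec4(1.0, 0.0, 0.0, 1.0);
            out_color1 = vec4(0.0, 1.0, 0.0, 1.0);
        }
    )";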

How did I just use an OpenGL 3 feature in a 1.1 context?

I just started programming in OpenGL a few weeks ago, and as people suggested to me, I used GLFW as my window handler. I also used GLEW as my extensions handler. So I go through the whole process of making a vertex buffer with three points to draw a triangle, pass it to OpenGL to draw, compile, and run. No triangle draws, presumably because I didn't have any shaders. So I think to myself, "Why don't I lower my OpenGL version through the context creation using GLFW?", and I did that. From OpenGL 3.3 to 1.1, and sure enough, there's a triangle. Success, I thought. Then I remembered an article saying that vertex buffers were only introduced in OpenGL 3, so how could I possibly have used an OpenGL 3 feature in a 1.1 context?
The graphics driver is free to give you a context which is a different version than what you requested, as long as they are compatible. For example, you may get a v3.0 context even if you ask for a v1.1 context, as OpenGL 3.0 does not change or remove any features from OpenGL 1.1.
Additionally, often the only difference between OpenGL versions is which extensions the GPU must support. If you have a v1.1 context but ARB_vertex_buffer_object is supported, then you will still be able to use VBOs (though you may need to append the ARB suffix to the function names).
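
A hedged sketch of what that can look like with GLEW; the vertex data and function name are made up for illustration:

    #include <GL/glew.h>

    // Sketch only: falls back gracefully if the ARB extension is not exposed.
    void uploadVertices(const float* vertices, GLsizeiptrARB byteCount)
    {
        if (GLEW_ARB_vertex_buffer_object) {
            GLuint vbo = 0;
            glGenBuffersARB(1, &vbo);
            glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
            glBufferDataARB(GL_ARRAY_BUFFER_ARB, byteCount, vertices, GL_STATIC_DRAW_ARB);
        } else {
            // fall back to client-side vertex arrays or require a newer context
        }
    }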

Which version of GLSL supports Indexing in Fragment Shader?

I have a fragment shader that iterates over some input data, and on old hardware I get:
error C6013: Only arrays of texcoords may be indexed in this profile, and only with a loop index variable
Googling around, I saw a lot of things like "hardware prior to XX doesn't support indexing in the fragment shader".
I was wondering if this behavior is standardized in GLSL versions, something like "GLSL versions prior to XX don't support indexing in the fragment shader", and if so, which version starts supporting it.
What is your exact hardware?
Old ATI cards (below the X1600) and their drivers have such issues. Most certainly, Intel cards that are not the most recent also suffer from this.
"Do you have any suggestion on how to detect if my hardware is capable of indexing in the fragment shader?"
The only reliable yet not-so-beautiful way is to get the Renderer information:
glGetString(GL_RENDERER)
and check if this renderer occurs in the list of unsupported ones.
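
A sketch of such a check; the renderer substrings below are hypothetical examples, not an authoritative list, and would have to be maintained from bug reports:

    #include <GL/gl.h>
    #include <cstring>

    // Sketch only: returns false for renderers known (or suspected) to lack dynamic indexing.
    bool fragmentIndexingLikelySupported()
    {
        const char* renderer = reinterpret_cast<const char*>(glGetString(GL_RENDERER));
        if (!renderer)
            return false;
        const char* knownBad[] = { "GeForce 6", "GeForce 7", "RADEON X1" };
        for (const char* bad : knownBad)
            if (std::strstr(renderer, bad))
                return false;   // fall back to a shader without dynamic indexing
        return true;
    }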
That particular error comes from the Nvidia compiler for nv4x (GeForce 6/7 cards), and is a limitation of the hardware. Any workaround would require disabling the hardware completely and using pure software rendering.
All versions of GLSL support indexing in the language -- this error falls under the catch-all of exceeding the hardware resource limits.

How to test shaders against different versions of the shader model

I have many OpenGL shaders. We try to use as much different hardware as possible to evaluate the portability of our product. One of our customers recently ran into some rendering issues; it seems that the target machine only provides shader model 2.0, while all our development/build/test machines (even the oldest ones) run version 4.0. Everything else (OpenGL version, GLSL version, ...) seems identical.
I didn't find a way to downgrade the shader model version, since it's automatically provided by the graphics card driver.
Is there a way to manually install or select the OpenGL/GLSL/shader model version in use, for development/test purposes?
NOTE: the main targets are Windows XP SP2 / 7 (32 & 64 bit) for both ATI and NVIDIA cards.
OpenGL does not have the concept of "shader models"; that's a Direct3D thing. It only has versions of GLSL: 1.10, 1.20, etc.
Every OpenGL version matches a specific GLSL version. GL 2.1 supports GLSL 1.20. GL 3.0 supports GLSL 1.30. For GL 3.3 and above, they stopped fooling around and just used the same version number, so GL 3.3 supports GLSL 3.30. So there's an odd version number gap between GLSL 1.50 (maps to GL 3.2) and GLSL 3.30.
Technically, OpenGL implementations are allowed to refuse to compile shader versions older than the ones that match the current GL version. As a practical matter, however, you can pretty much shove any GLSL shader into any OpenGL implementation, as long as the shader's version is less than or equal to the version the implementation supports. This hasn't been tested on Mac OS X Lion's implementation of GL 3.2 core.
There is one exception: core contexts. If you try to feed a shader through a core OpenGL context that uses functionality removed from the core, it will complain.
There is no way to force OpenGL to provide you with a particular OpenGL version. You can ask it to, with wgl/glXCreateContextAttribs. But that is allowed to give you any version higher than the one you ask for, so long as that version is backwards compatible with what you asked for.
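
As an illustration, a small GLFW-based sketch (GLFW drives wglCreateContextAttribsARB / glXCreateContextAttribsARB internally); it requests GL 3.3 core, but the driver may return any compatible version at or above that:

    #include <GLFW/glfw3.h>
    #include <cstdio>

    int main()
    {
        if (!glfwInit())
            return -1;
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
        glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
        GLFWwindow* window = glfwCreateWindow(640, 480, "version test", nullptr, nullptr);
        if (!window) { glfwTerminate(); return -1; }
        glfwMakeContextCurrent(window);
        // The string printed here may name a higher version than the one requested.
        std::printf("Context version: %s\n",
                    reinterpret_cast<const char*>(glGetString(GL_VERSION)));
        glfwTerminate();
        return 0;
    }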