Vulkan: Geometry Shader Validation incorrect? - c++

I am currently using an NVIDIA GeForce GTX 780 (from Gigabyte, if that matters - I don't know how much this could be affected by the onboard BIOS. I also have two of them installed, but since Vulkan has no SLI support I only use one device at a time in my code; in the NVIDIA control panel, however, SLI is activated. I use the official driver version 375.63). That GPU is of course fully capable of geometry shaders.
I am using a geometry shader with the Vulkan API and it works all right and does everything I expect it to do. However, I get the following validation layer report: #[SC]: Shader requires VkPhysicalDeviceFeatures::geometryShader but is not enabled on the device.
Is this a bug? Does someone have similar issues?
PS: http://vulkan.gpuinfo.org/displayreport.php?id=777#features says that support for "Geometry Shader" is "true", as expected. I am using the Vulkan 1.0.30.0 SDK.

Vulkan features work differently from OpenGL extensions. In OpenGL, if an extension is supported, then it's always active. In Vulkan, the fact that a feature is available is not sufficient. When you create a VkDevice, you must explicitly ask for all features you intend to use.
If you didn't ask for the Geometry Shader feature, then you can't use GS's, even if the VkPhysicalDevice advertises support for it.
So the sequence of steps should be to check to see if a VkPhysicalDevice supports the features you want to use, then supply those features in VkDeviceCreateInfo::pEnabledFeatures when you call vkCreateDevice.
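For illustration, here is a minimal sketch of that sequence in C++ (the physical device handle and the queue create info are assumed to already exist; the helper name is purely illustrative, not taken from the question's code):

    #include <vulkan/vulkan.h>

    // Hypothetical helper: create a logical device with the geometry shader
    // feature enabled, given a physical device and an already-filled queue
    // create info.
    VkDevice createDeviceWithGeometryShader(VkPhysicalDevice physicalDevice,
                                            const VkDeviceQueueCreateInfo& queueCreateInfo)
    {
        VkPhysicalDeviceFeatures supported{};
        vkGetPhysicalDeviceFeatures(physicalDevice, &supported);

        VkPhysicalDeviceFeatures enabled{};              // everything off by default
        if (supported.geometryShader == VK_TRUE)
            enabled.geometryShader = VK_TRUE;            // opt in explicitly

        VkDeviceCreateInfo deviceInfo{};
        deviceInfo.sType                = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
        deviceInfo.queueCreateInfoCount = 1;
        deviceInfo.pQueueCreateInfos    = &queueCreateInfo;
        deviceInfo.pEnabledFeatures     = &enabled;      // features not set here stay disabled

        VkDevice device = VK_NULL_HANDLE;
        vkCreateDevice(physicalDevice, &deviceInfo, nullptr, &device);
        return device;
    }

With pEnabledFeatures pointing at a structure in which geometryShader is VK_TRUE, the validation message from the question goes away.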
Since Vulkan doesn't do validation checking on most of its inputs, the actual driver will likely assume you enabled the feature and just do what it normally would. But it is not required to do so; using a feature which has not been enabled is undefined behavior. So the validation layer is right to stop you.

Related

"Emulate" minimum OpenGL specs?

We're working with OpenGL 4.3. However, we're afraid that we're using features that work on our graphics card but are not part of the "minimal" required specs for OpenGL 4.3.
Is there any possibility to emulate the minimum behaviour? For example, to make the graphics card reject any non-standard texture formats etc.? (Could also be in software, speed doesn't matter for testing compatibility...)
Update
In the best case, a minimum set in all aspects would be perfect, so that the application is guaranteed to work on all graphics cards supporting OpenGL 4.3. So this emulation mode should:
Reject all features/extensions deprecated in 4.3
Reject all features/extensions newer than 4.3
Only support required formats, no optional formats (for example for textures and renderbuffers)
Only support the minimum required precision for calculations
Have the minimum values for the limits that can be queried via GetInteger (for example a MAX_TEXTURE_IMAGE_UNITS of 16)
There is a reference GLSL compiler that will solve half of this problem. But, as for the rest ... AMD, NV and Intel all have their own compliance issues and policies regarding how loosely they believe in following the specification.
I have seen each one of these vendors implicitly enable extensions from versions of OpenGL they should not have (without so much as a warning in the compiler log), and that is just the GLSL side of things. It is likely that Mesa can serve the role of greatest common factor for feature testing, but for OpenGL versions much older than 4.3. Mesa is effectively a minimalist implementation, and usually a few years behind the big hardware vendors.
Ideally GL's debug output extension, which is conveniently a core feature in GL 4.3, would issue API warnings if you use a feature your requested context version does not support. However, each vendor has different levels of support for this; AMD is generally the best. NVIDIA may even require you to enable "OpenGL Expert" mode before it spits out any genuinely useful information.
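If you do want to try the debug output route, a minimal sketch of hooking up the GL 4.3 callback might look like the following (this assumes a 4.3 debug context is already current and that entry points are loaded with a loader such as glad; how much the driver actually reports still varies by vendor, as noted above):

    #include <glad/glad.h>   // assumption: glad-generated loader for GL 4.3
    #include <cstdio>

    // Print every message the driver hands us through the debug callback.
    static void APIENTRY onDebugMessage(GLenum source, GLenum type, GLuint id,
                                        GLenum severity, GLsizei length,
                                        const GLchar* message, const void* userParam)
    {
        (void)source; (void)id; (void)length; (void)userParam;
        std::fprintf(stderr, "[GL debug] type=0x%x severity=0x%x: %s\n",
                     type, severity, message);
    }

    void enableGLDebugOutput()
    {
        glEnable(GL_DEBUG_OUTPUT);
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);   // report at the offending call site
        glDebugMessageCallback(onDebugMessage, nullptr);
    }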
If all else fails, there is an XML file published by Khronos that you can parse to figure out which version and/or extension ANY OpenGL constant, function or enumerant is provided by. I wrote a simple project to do this with half a day's effort: https://github.com/Andon13/glvs. You could write some sort of validator yourself based on that principle.
There are a number of OpenGL Loading Libraries that will do what you need to some degree. GLEW just gives you everything and lets you pick and choose what you want. But there are others which generate more specific loaders.
GL3w for example generates only the core OpenGL functions, ignoring extensions entirely.
For a more comprehensive solution, there are glLoadGen or GLad. Both of these are generators for the headers and loading code. But both of them allow you to specify exactly which version of OpenGL you want and exactly which extensions you want. GLad even has a web application that can generate headers and download them to your computer.
In the interests of full disclosure, I wrote glLoadGen.
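As a hedged sketch of that approach: a glad 1.x-style header generated for core OpenGL 4.3 (plus only the extensions you opted into) can be combined with GLFW roughly like this. The GLVersion struct and the GLAD_GL_VERSION_4_3 flag are part of what glad generates; everything else here is an illustrative assumption.

    #include <glad/glad.h>    // generated for core 4.3 only
    #include <GLFW/glfw3.h>
    #include <cstdio>

    int main()
    {
        if (!glfwInit()) return 1;
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
        glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
        GLFWwindow* window = glfwCreateWindow(64, 64, "probe", nullptr, nullptr);
        if (!window) { glfwTerminate(); return 1; }
        glfwMakeContextCurrent(window);

        if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress)) return 1;

        // glad records what the running context actually provides.
        std::printf("context reports GL %d.%d\n", GLVersion.major, GLVersion.minor);
        if (!GLAD_GL_VERSION_4_3)
            std::printf("4.3 core functionality is not fully available\n");

        glfwDestroyWindow(window);
        glfwTerminate();
        return 0;
    }

Since the generated header only declares the requested version and extensions, anything outside that set fails to compile instead of silently working on your particular card.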

ARB_draw_buffers available but not supported by shader engine

I'm trying to compile a fragment shader using:
#extension ARB_draw_buffers : require
but compilation fails with the following error:
extension 'ARB_draw_buffers' is not supported
However when I check for availability of this particular extensions, either by calling glGetString (GL_EXTENSIONS) or using OpenGL Extension Viewer I get positive results.
The OpenGL version is 3.1,
the graphics card is an Intel HD Graphics 3000.
What might be the cause of that?
Your driver in this scenario is 3.1; it is not clear what your targeted OpenGL version is.
If you can establish OpenGL 3.0 as the minimum required version, you can write your shader using #version 130 and avoid the extension directive altogether, as sketched below.
The ARB extension mentioned in the question is only there for drivers that cannot implement all of the features required by OpenGL 3.0, but have the necessary hardware support for this one feature.
That was its intended purpose, but there do not appear to be many driver / hardware combinations in the wild that actually have this problem. You probably do not want the headache of writing code that supports them anyway ;)
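For illustration, the #version 130 route can look roughly like this (a sketch assuming a GL 3.0+ context with loaded entry points; the output names and the helper are illustrative, and the bindings must be issued before linking the program):

    #include <glad/glad.h>   // assumption: any loader exposing GL 3.0 entry points

    // Illustrative fragment shader using GLSL 1.30 user-declared outputs
    // instead of the ARB_draw_buffers extension directive.
    static const char* kFragmentSource = R"GLSL(
    #version 130
    out vec4 outColor0;   // routed to draw buffer 0
    out vec4 outColor1;   // routed to draw buffer 1
    void main()
    {
        outColor0 = vec4(1.0, 0.0, 0.0, 1.0);
        outColor1 = vec4(0.0, 1.0, 0.0, 1.0);
    }
    )GLSL";

    // Map the shader outputs to color attachments; call before glLinkProgram.
    void bindFragmentOutputs(GLuint program)
    {
        glBindFragDataLocation(program, 0, "outColor0");
        glBindFragDataLocation(program, 1, "outColor1");
    }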

Image load store equivalent in OpenGL 3

My project should greatly benefit from arbitrary/atomic read and write operations in a texture from glsl shaders. The Image load store extension is what I need. Only problem, my target platform does not support OpenGL 4.
Is there an extension for OGL 3 that achieves similar results? I mean, atomic read/write operations in a texture or shared buffer of some sort from fragment shaders.
Image Load Store, and especially atomic operations, are features that must be backed by specific hardware capabilities, very similar to those used in compute shaders. Only some GL3 hardware can handle them, and only in a limited way.
Image Load Store has been core since 4.2, so if your hardware (and driver) is capable of OpenGL 4.2, then you don't need any extensions at all.
If your hardware (and driver) capabilities are lower than GL 4.2 but higher than GL 3.0, you can probably use the ARB_shader_image_load_store extension.
Quote: "OpenGL 3.0 and GLSL 1.30 are required."
Obviously, not all 3.0 hardware (and drivers) will support this extension, so you must check for its support before using it (a sketch of such a check follows below).
I believe most NVIDIA GL 3.3 hardware supports it, but not AMD or Intel (that's my subjective observation ;) ).
If your hardware is below GL 4.2 and not capable of this extension, there is not much you can do. Either have an alternative code path that uses texture sampling and render-to-texture without atomics (as I understand it, this is possible, just without the "great benefit" of atomics), or simply report an error to those users who have not yet upgraded their rigs.
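As a sketch of the capability check described above (assuming a current GL 3.0+ context and a loader such as glad; the helper name is illustrative):

    #include <glad/glad.h>
    #include <cstring>

    // True if image load/store can be used: either core (4.2+) or via the
    // ARB_shader_image_load_store extension.
    bool hasImageLoadStore()
    {
        GLint major = 0, minor = 0;
        glGetIntegerv(GL_MAJOR_VERSION, &major);
        glGetIntegerv(GL_MINOR_VERSION, &minor);
        if (major > 4 || (major == 4 && minor >= 2))
            return true;                              // core since 4.2

        GLint count = 0;                              // otherwise look for the extension
        glGetIntegerv(GL_NUM_EXTENSIONS, &count);
        for (GLint i = 0; i < count; ++i) {
            const char* ext = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
            if (ext && std::strcmp(ext, "GL_ARB_shader_image_load_store") == 0)
                return true;
        }
        return false;                                 // fall back to a path without atomics
    }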
Hope it helps.

Programmatically determine if an OpenGL function is supported by hardware

I am very new to OpenGL, so perhaps this is obvious, but is it possible to determine whether a specific function is supported by a given video card? This came up because I am using an old computer with an ATI Radeon 9550 video card, running Lubuntu 12.10, and discovered that it did not support the use of dFdx and dFdy. I was able to get around this problem, but now I am curious whether I can find out that a failure has occurred because of a problem like this and take action accordingly, possibly using alternative methods.
You can check the supported OpenGL version using glGetString(GL_VERSION) or glGetIntegerv(GL_MAJOR_VERSION, …); glGetIntegerv(GL_MINOR_VERSION, …);. From OpenGL-3 onwards, the OpenGL major version directly corresponds to hardware capabilities.
With older versions things are not as strict, and due to OpenGL's abstract device model you cannot really "query" hardware capabilities. You can check which extensions are supported, which is a good indicator of the supported capabilities, as many ARB extensions made it into core functionality. If the extension on which a certain core feature of the present OpenGL version is based is not supported, then the core feature will probably be emulated.
I know it's very vague and shaky, but that's how it is. The only other option is keeping around a database of GL_RENDERER strings and matching against that.
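For illustration, those queries might look like this (a sketch assuming a current context with loaded entry points; the helper name is illustrative):

    #include <glad/glad.h>
    #include <cstdio>

    void reportGLCapabilities()
    {
        std::printf("GL_VERSION : %s\n",
                    reinterpret_cast<const char*>(glGetString(GL_VERSION)));
        std::printf("GL_VENDOR  : %s\n",
                    reinterpret_cast<const char*>(glGetString(GL_VENDOR)));
        std::printf("GL_RENDERER: %s\n",
                    reinterpret_cast<const char*>(glGetString(GL_RENDERER)));

        GLint major = 0, minor = 0;
        glGetIntegerv(GL_MAJOR_VERSION, &major);   // available on GL 3.0+ contexts
        glGetIntegerv(GL_MINOR_VERSION, &minor);
        std::printf("context version: %d.%d\n", major, minor);
    }

The GL_RENDERER string is also what you would match against a card database, as mentioned above.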
Generally speaking, you cannot. Not for the kind of thing you're talking about.
dFdx and dFdy have been part of GLSL since version 1.10 (the first version in core OpenGL 2.0). Support for them is not optional.
Your problem is that ATI/AMD wants to claim that their older card supports 2.x, but their hardware can't actually do all the things that 2.x requires. So they lie about it, claiming support while silently making these null operations.
OpenGL doesn't have a way to detect perfidy. The only thing you can do is keep around a list of cards and use the GL_VENDOR and GL_RENDERER strings to test against.

Is there a trick to use an OpenGL 3.x program on a graphics card which supports OpenGL 2.x?

I have an onboard graphics card which supports opengl 2.2. Can I run an opengl (let's say version 3.3) application on it by using some software etc.?
OpenGL major versions somewhat refer to available hardware capabilities:
OpenGL-1: fixed function pipeline (DirectX 7 class HW)
OpenGL-2: programmable vertex and fragment shader support (DirectX 9 class HW)
OpenGL-3: programmable geometry shader support (DirectX 10 class HW)
OpenGL-4: programmable tessellation shader support and a few other nice things (DirectX 11 class HW).
If your GPU supports only OpenGL-2, then there is no way you can run an OpenGL-3 program on it that makes use of all its bells and whistles. Your best bet is a software rasterizing implementation.
A few years ago, when shaders were something new, NVIDIA shipped their developer drivers with a software rasterizer that emulated the higher functionality, to kick-start shader development so that there would be actual applications to run on those new programmable GPUs.
Sure you can, you just have to disable those features. Whether this will work well depends greatly on the app.
The simplest method is to intercept all OpenGL calls, using some manner of DLL hooking, and filter them as necessary. When OGL3 features are used, return a "correct" answer (but don't do anything) or provide null for calls that aren't required.
If done properly, and the app isn't relying on the OGL3 features, this will run without those on your hardware.
If the app does require OGL3 stuff, results will be unreliable at best, and it may be unusable. It really depends on what exactly the app does and what it needs. Providing a null implementation of OGL3 will allow you to run it, but results are up in the air.
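As a toy illustration of the stub idea (not real DLL hooking, which is platform-specific and much more involved): if the application resolves GL entry points through a single helper, anything the GL2 driver does not export can be swapped for a no-op stub. GLFW and the names below are assumptions for the sketch, not part of the answer above.

    #include <GLFW/glfw3.h>
    #include <cstdio>

    using GLAnyProc = void (*)();

    static void noopStub() { /* intentionally does nothing */ }

    // Resolve a GL function from the current context, or hand back the stub.
    static GLAnyProc resolveOrStub(const char* name)
    {
        GLAnyProc proc = reinterpret_cast<GLAnyProc>(glfwGetProcAddress(name));
        if (!proc) {
            std::fprintf(stderr, "stubbing missing entry point: %s\n", name);
            proc = noopStub;
        }
        return proc;
    }

    int main()
    {
        if (!glfwInit()) return 1;
        GLFWwindow* window = glfwCreateWindow(64, 64, "probe", nullptr, nullptr);
        if (!window) { glfwTerminate(); return 1; }
        glfwMakeContextCurrent(window);

        // A GL3-only entry point: real on a GL3 driver, a harmless stub on GL2.
        GLAnyProc bindVertexArray = resolveOrStub("glBindVertexArray");
        (void)bindVertexArray;

        glfwDestroyWindow(window);
        glfwTerminate();
        return 0;
    }

As the answer says, this only keeps the program running; any code that depends on the results of the stubbed calls will still misbehave.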
No. Well, not really. NVIDIA has some software emulation that might work, but other than that, no.
Your hardware simply can't do what GL 3.0+ asks of it.
also:
I have an onboard graphics card which supports opengl 2.2
There is no OpenGL 2.2. Perhaps you meant 2.1.