Is Vulkan-flavored GLSL compatible with OpenGL? - glsl

Vulkan GLSL has some additions to OpenGL GLSL.
For example, in Vulkan GLSL there is the push_constant layout qualifier, which does not exist in OpenGL.
layout( push_constant ) uniform BlockName
{
    vec4 data;
} instanceName;
Another example is descriptor set bindings, which also don't exist in OpenGL:
layout(set = 0, binding = 0) uniform BlockName
{
    vec4 data;
} instanceName;
My question is: considering this is GLSL code (even if it's Vulkan-flavoured), would that code compile in OpenGL? Maybe the OpenGL compiler can ignore those layout qualifiers as long as the #version is something recent enough that Vulkan has been considered in the GLSL spec?

No.
In the GLSL 4.6 spec you will find references to both OpenGL and Vulkan.
OpenGL-flavoured GLSL won't compile in Vulkan. This direction is the more obvious one: in Vulkan you are required, for example, to specify either a set/binding pair or a push_constant qualifier for uniform blocks, and that concept doesn't exist in OpenGL. Those qualifiers would therefore be missing, so the shaders won't compile.
To answer the actual question:
Vulkan-flavoured GLSL won't compile in OpenGL either.
In the GLSL 4.6 spec you will find the following paragraphs. They explicitly state that the two cases mentioned in your question must NOT compile.
About push constants (4.4.3):
When targeting Vulkan, the push_constant qualifier is used to declare
an entire block, and represents a set of push constants, as defined by
the Vulkan API. It is a compile-time error to apply this to anything
other than a uniform block declaration, or when not targeting Vulkan.
About descriptor sets (4.4.5):
The set qualifier is only available when targeting Vulkan. It
specifies the descriptor set this object belongs to. It is a
compile-time error to apply set to a standalone qualifier, to a member
of a block, or when not targeting an API that supports descriptor
sets.
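To illustrate (this snippet is mine, not from the spec): the OpenGL-compatible form of such a declaration drops push_constant and set entirely, while binding alone is legal in core OpenGL since GLSL 4.20.

```glsl
#version 460
// OpenGL-flavoured equivalent: a plain uniform block.
// "binding" is valid here (GLSL 4.20+); "set" and "push_constant" are not.
layout(std140, binding = 0) uniform BlockName
{
    vec4 data;
} instanceName;
```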

Related

GLSL auto optimization

I have three questions related to the GLSL auto-optimization(?) process.
Unused variables -> Does GLSL remove all variables which do not affect the final fragment shader pixel (the out variable)?
Unused functions -> Does GLSL remove all unused functions defined before void main?
And what about in and out variables? An example: I have 100 shaders which send texture coordinates from the vertex shader to the fragment shader. In the fragment shader these coordinates have no effect on the final color. Will GLSL remove this variable?
That is not specified clearly. The OpenGL specification says:
See OpenGL 4.6 Core Profile Specification - 7.6 Uniform Variables - p. 130:
7.6 Uniform Variables
Shaders can declare named uniform variables, as described in the OpenGL Shading Language Specification. A uniform is considered an active uniform if the compiler and linker determine that the uniform will actually be accessed when the executable code is executed. In cases where the compiler and linker cannot make a conclusive determination, the uniform will be considered active.
See OpenGL 4.6 Core Profile Specification - 7.3.1 Program Interfaces - p. 101:
7.3.1 Program Interfaces
When a program object is made part of the current rendering state, its executable code may communicate with other GL pipeline stages or application code through a variety of interfaces. When a program is linked, the GL builds a list of active resources for each interface. Examples of active resources include variables, interface blocks, and subroutines used by shader code. Resources referenced in shader code are considered active unless the compiler and linker can conclusively determine that they have no observable effect on the results produced by the executable code of the program. For example, variables might be considered inactive if they are declared but not used in executable code, used only in a clause of an if statement that would never be executed, used only in functions that are never called, or used only in computations of temporary variables having no effect on any shader output. In cases where the compiler or linker cannot make a conclusive determination, any resource referenced by shader code will be considered active. The set of active resources for any interface is implementation-dependent because it depends on various analysis and optimizations performed by the compiler and linker.
If a program is linked successfully, the GL will generate lists of active resources based on the executable code produced by the link.
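You can observe this behaviour from the API side: after linking, an optimized-out uniform simply does not appear in the program's active-resource list. A minimal sketch, assuming a current OpenGL context and a successfully linked program object prog (the uniform name maybeUnused is hypothetical):

```c
/* Assumes a current OpenGL context and a linked program object `prog`. */
GLint loc = glGetUniformLocation(prog, "maybeUnused");
if (loc == -1) {
    /* Declared in the shader source, but the linker determined it is
       inactive, so it was dropped from the active-uniform list. */
}

GLint activeUniforms = 0;
/* GL_ACTIVE_UNIFORMS counts only uniforms the linker considers active. */
glGetProgramiv(prog, GL_ACTIVE_UNIFORMS, &activeUniforms);
```

Because the set of active resources is implementation-dependent, the same shader may report different counts on different drivers.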

Why are there two brackets [[ in the C++ vertex functions?

I am watching the introduction videos from Apple about Metal and MetalKit.
The sample code for the shaders has these double brackets, like [[buffer(0)]], on arguments. Why are there two brackets? Does it mean anything, or is it just to indicate that the keyword "buffer" follows? There is no such construct in standard C++, is there?
vertex Vertex vertex_func(constant Vertex *vertices [[buffer(0)]],
                          constant Uniforms &uniforms [[buffer(1)]],
                          uint vid [[vertex_id]])
Also what would be a good 1 or 2 week fun project as an introduction into GP-GPU? Something manageable for a novice with good math skills but no artistic skills.
These are called attributes, and their syntax and behavior are defined in section 7.6.1 of the C++ standard, Attribute syntax and semantics. They look like this in the grammar:
attribute-specifier:
    [ [ attribute-list ] ]
    alignment-specifier
The Metal shading language defines numerous attributes that allow you to associate various semantics with variables, struct/class members, and function arguments (including argument table bindings as in your question).
The Metal Shading Language Specification defers to the C++ standard on this, so the C++ standard really is the reference to consult:
The Metal programming language is based on the C++14 Specification
(a.k.a., the ISO/IEC JTC1/SC22/ WG21 N4431 Language Specification)
with specific extensions and restrictions. Please refer to the C++14
Specification for a detailed description of the language grammar.
With this [[ x ]] syntax you declare attributes that are passed between shaders and the CPU.
I quote:
The [[ … ]] syntax is used to declare attributes such as resource
locations, shader inputs, and built-in variables that are passed back
and forth between shaders and CPU

Can an ARB program (shader pair) use non-ARB buffer objects and vertex arrays?

Can an ARB program (shader pair) use non-ARB buffer objects and vertex arrays? Non-ARB means without an extension suffix like NV, ATI, ARB, EXT, or others.
Yes, this is perfectly possible. Note that core functionality without the ARB suffix is often just an ARB extension that was made part of the regular specification. In general there is interoperability between extensions. Also, each extension has to clearly state how it interacts with the rest of OpenGL and with all other extensions in existence at the time of specification.

OpenGL: can I mix glBindBuffer with glBindBufferARB?

Is glBindBuffer equivalent to glBindBufferARB?
Are the enums (like GL_ARRAY_BUFFER and GL_ARRAY_BUFFER_ARB) equivalent? Can I use a non-_ARB enum in glBindBufferARB?
Can I mix and match glBindBuffer() calls with glBindBufferARB()?
ALSO: if a card supports the _ARB extension, does it always support the core GL function, even if its OpenGL version isn't up to date?
In general, it is not legal to do that kind of thing, because core functionality and extensions are not exchangeable, even if they have the same name (one notable example is primitive restart).
However, in this particular case, they happen to be exactly the same with the exact same constants, so... although it's not legal, it is "ok" to use them interchangeably (i.e. nobody will notice if you don't tell them).
You cannot in general assume that the core function will be present just because an ARB extension is. Many ARB extensions exist solely to let OpenGL implementations that cannot implement a full version for some reason nevertheless provide at least some functionality that the hardware can support.

Peculiar behavior in OpenGL concerning Vertex Shaders and input vertices on core profile 330

Using '#version 330 core' on NVIDIA:
By calling glBindAttribLocation(program, 0, "in_Vertex"); the input vertex works with an "in vec4 in_Vertex;". However, I noticed that without that call in the client app it still works; it appears to be 'the default first input variable'. Why? Should the call be omitted, or should the attribute be explicitly connected via glBindAttribLocation? [What is ideal according to the standard?] Also, "in vec4 gl_Vertex;" works, while the spec calls it deprecated and the shader compiler gives no warning. Why? I would have expected it to at least warn. I guess the last one may be a bug in the GLSL compiler, but the first issue is especially puzzling.
If you don't bind attributes to a location, the GL will do it for you. Which locations it uses is unspecified, but you happened to get the same value.
You can ask the GL which location it's chosen with glGetAttribLocation
Even though it's not specified, I've seen implementations choose to bind locations in declaration order: the first is 0, the second is 1, and so on.
The standard leaves those two options open because there are valid use cases for each.
As for deprecation, NVIDIA has clearly stated that they think deprecation was the wrong decision. In the end, somebody has to write the code that emits the warning, so it's not that surprising that they don't warn, even if they ought to.
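Note that since OpenGL 3.3 / GLSL 3.30 there is a third option that avoids relying on either behaviour: fix the location in the shader source itself with an explicit layout qualifier. A sketch:

```glsl
#version 330 core
// Explicit attribute location: no glBindAttribLocation call needed,
// and no reliance on whatever default the implementation picks.
layout(location = 0) in vec4 in_Vertex;

void main()
{
    gl_Position = in_Vertex;
}
```

The client code then only needs to set up the matching vertex attribute index (0 here), and the shader and application agree by construction.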