GLSL auto optimization - OpenGL

I have three questions related to the GLSL automatic optimization process.
Unused variables -> Does GLSL remove all variables that have no effect on the final fragment shader pixel (the out variable)?
Unused functions -> Does GLSL remove all unused functions defined before void main?
And what about in and out variables? An example: I have 100 shaders that pass texture coordinates from the vertex shader to the fragment shader. In the fragment shader, these coordinates have no effect on the final color. Will GLSL remove this variable?

That is not specified clearly. The OpenGL specification says:
See OpenGL 4.6 Core Profile Specification - 7.6 Uniform Variables - p. 130:
7.6 Uniform Variables
Shaders can declare named uniform variables, as described in the OpenGL Shading Language Specification. A uniform is considered an active uniform if the compiler and linker determine that the uniform will actually be accessed when the executable code is executed. In cases where the compiler and linker cannot make a conclusive determination, the uniform will be considered active.
See OpenGL 4.6 Core Profile Specification - 7.3.1 Program Interfaces - p. 101:
7.3.1 Program Interfaces
When a program object is made part of the current rendering state, its executable code may communicate with other GL pipeline stages or application code through a variety of interfaces. When a program is linked, the GL builds a list of active resources for each interface. Examples of active resources include variables, interface blocks, and subroutines used by shader code. Resources referenced in shader code are considered active unless the compiler and linker can conclusively determine that they have no observable effect on the results produced by the executable code of the program. For example, variables might be considered inactive if they are declared but not used in executable code, used only in a clause of an if statement that would never be executed, used only in functions that are never called, or used only in computations of temporary variables having no effect on any shader output. In cases where the compiler or linker cannot make a conclusive determination, any resource referenced by shader code will be considered active. The set of active resources for any interface is implementation-dependent because it depends on various analysis and optimizations performed by the compiler and linker.
If a program is linked successfully, the GL will generate lists of active resources based on the executable code produced by the link.
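In practice, you can observe what the compiler and linker decided by querying the program's active resources after linking. A minimal C++ sketch, assuming a GL 4.3+ context, an already-linked program object, and a hypothetical uniform named u_unusedColor:

GLint activeUniforms = 0;
glGetProgramInterfaceiv(program, GL_UNIFORM, GL_ACTIVE_RESOURCES, &activeUniforms);

// GL_INVALID_INDEX means the linker considered the uniform inactive
// (or it was never declared), i.e. the implementation stripped it.
GLuint index = glGetProgramResourceIndex(program, GL_UNIFORM, "u_unusedColor");
if (index == GL_INVALID_INDEX) {
    // The uniform was optimized away; glGetUniformLocation also returns -1
    // for inactive uniforms, which is the classic pre-4.3 way to detect this.
}

The same reasoning applies to vertex-shader outputs the fragment shader never reads: whether they survive is implementation-dependent, so don't hard-code locations that assume a particular stripping behavior.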

Related

Is Vulkan-flavored GLSL compatible with OpenGL?

Vulkan GLSL has some additions to OpenGL GLSL.
For example, in Vulkan GLSL there is the push_constant layout qualifier, which does not exist in OpenGL.
layout( push_constant ) uniform BlockName
{
    vec4 data;
} instanceName;
Another example is descriptor set bindings, which also don't exist in OpenGL:
layout(set = 0, binding = 0) uniform BlockName
{
    vec4 data;
} instanceName;
My question is: considering this is GLSL code (even if it's Vulkan-flavoured), would that code compile in OpenGL? Maybe the OpenGL compiler can ignore those layout qualifiers as long as the #version is something recent enough that Vulkan has been considered in the GLSL spec?
No.
In the GLSL 4.6 spec you will find references to both OpenGL and Vulkan.
OpenGL-flavoured GLSL won't compile in Vulkan. This direction is the more obvious one: in Vulkan you are required, for example, to specify either a set-binding pair or the push_constant qualifier for uniforms, and those concepts don't exist in OpenGL. Those qualifiers would therefore be missing, so the code won't compile.
To answer the actual question:
Vulkan-flavoured GLSL won't compile in OpenGL either.
In the GLSL 4.6 spec you will find the following paragraphs. They explicitly state that the two cases mentioned in your question should NOT compile.
About push constants (4.4.3):
When targeting Vulkan, the push_constant qualifier is used to declare
an entire block, and represents a set of push constants, as defined by
the Vulkan API. It is a compile-time error to apply this to anything
other than a uniform block declaration, or when not targeting Vulkan.
About descriptor sets (4.4.5):
The set qualifier is only available when targeting Vulkan. It
specifies the descriptor set this object belongs to. It is a
compile-time error to apply set to a standalone qualifier, to a member
of a block, or when not targeting an API that supports descriptor
sets.
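If you want one shader source base to serve both APIs, you therefore have to switch those qualifiers per target. A hypothetical C++ sketch (the TARGET_VULKAN macro and the block contents are illustrative; note that layout(binding = ...) on a uniform block requires GLSL 4.20 or GL_ARB_shading_language_420pack):

// Select per-target layout qualifiers at build time so one shader
// source base can serve both Vulkan and OpenGL.
#ifdef TARGET_VULKAN
static const char* kUniformBlock =
    "layout(set = 0, binding = 0) uniform BlockName { vec4 data; } instanceName;\n";
#else
// OpenGL: no `set` qualifier and no push constants; a plain binding
// point on the uniform block is legal.
static const char* kUniformBlock =
    "layout(binding = 0) uniform BlockName { vec4 data; } instanceName;\n";
#endif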

DirectX11 How to strip unused variables from constant buffer?

I am calling D3DReflect() to deduce the layout of the constant buffers used by a compiled shader, and I noticed that they often contain unused variables.
I am already using D3DStripShader() to strip debug info, and I was wondering if there is a similar way to strip those unused variables from constant buffers before calling D3DReflect().
Is it usually good practice?
Since it would usually mean having one cbuffer per original cbuffer/stage/program, I don't know whether the gain from stripping unused variables would outweigh the cost of having more (smaller) cbuffers.
There is no simple way to do this. The naive view of constant buffers was that everyone would make explicit structures to hold their constants, and those structures would be shared by both shaders and the calling C++ code (or C#, whatever). Thus, if the shader compiler altered the layout of the structure, everything would break.
This makes sense in the microscopic view when working on DX sample apps. For a larger project, many people don't do that. Instead, they have older style shaders with constants declared at global scope. On DX9 and other similar platforms, the constants were mapped to registers, so the compiler could strip unused constants (and it did). For DX11, the compiler takes all of those global constants, and puts them in a special "global" constant buffer. Then it decides that you really care about the structure of that buffer, so it refuses to remove anything.
So, there are generally two options:
Break your constants into multiple constant buffers, grouped roughly into sets that are used together. The compiler WILL strip an entire constant buffer that's unused, so you can use that to get coarse stripping. This is time-consuming, and you have to maintain the partitioning, but it might be good enough, depending on your situation.
Implement constant stripping yourself. This is what we do... After compiling all of the shaders once, we use the reflection API to get a list of all the constants in the binary. That information includes the flag that indicates if the constant is used or not. For each used constant, we simply declare it again, as normal. For each constant that wasn't used, we emit a similar declaration, but mark the variable as static. That has the effect of removing it from any constant buffer (because it's treated as a compile-time constant by the shader compiler). Then we re-compile the shaders, and the newly generated global constant buffer only contains the used constants.
This is also a bunch of work (and in our implementation, we have to wrap all constant declarations in a macro - the wrapper code builds a big string with all of the static/non-static declarations, and defines STRIPPED_CONSTANT_DEFINITIONS to contain that string):
#if defined (STRIPPED_CONSTANT_DEFINITIONS)
STRIPPED_CONSTANT_DEFINITIONS
#else
bool someConstant;
float4 color;
...
#endif
Note that you still need to declare the stripped constants as static; otherwise any unused code paths or uncalled functions that refer to those variables will cause the shader to fail to compile.
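For reference, here is a minimal C++ sketch of the reflection pass described above (error handling omitted; blob is assumed to hold the compiled shader bytecode):

#include <d3d11shader.h>
#include <d3dcompiler.h>

// Enumerate every constant in the compiled blob and record which ones
// the compiler marked as unused.
ID3D11ShaderReflection* reflector = nullptr;
D3DReflect(blob->GetBufferPointer(), blob->GetBufferSize(),
           IID_ID3D11ShaderReflection, (void**)&reflector);

D3D11_SHADER_DESC shaderDesc;
reflector->GetDesc(&shaderDesc);

for (UINT cb = 0; cb < shaderDesc.ConstantBuffers; ++cb) {
    ID3D11ShaderReflectionConstantBuffer* buffer =
        reflector->GetConstantBufferByIndex(cb);
    D3D11_SHADER_BUFFER_DESC bufferDesc;
    buffer->GetDesc(&bufferDesc);

    for (UINT v = 0; v < bufferDesc.Variables; ++v) {
        ID3D11ShaderReflectionVariable* var = buffer->GetVariableByIndex(v);
        D3D11_SHADER_VARIABLE_DESC varDesc;
        var->GetDesc(&varDesc);

        // D3D_SVF_USED is clear for constants the compiler saw as unused.
        if ((varDesc.uFlags & D3D_SVF_USED) == 0) {
            // Re-declare varDesc.Name as static in the second compile pass.
        }
    }
}
reflector->Release();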

OpenGL: can I mix glBindBuffer with glBindBufferARB?

Is glBindBuffer equivalent to glBindBufferARB?
Are the enums (like GL_ARRAY_BUFFER and GL_ARRAY_BUFFER_ARB) equivalent? Can I use a non-_ARB enum in glBindBufferARB?
Can I mix and match glBindBuffer() calls with glBindBufferARB()?
ALSO: if a card supports the _ARB extension, does it always support the core GL function, even if its OpenGL version isn't up to date?
In general, it is not legal to do that kind of thing, because core functionality and extensions are not interchangeable, even if they have the same names (one notable example is primitive restart).
However, in this particular case, they happen to be exactly the same with the exact same constants, so... although it's not legal, it is "ok" to use them interchangeably (i.e. nobody will notice if you don't tell them).
You cannot in general assume that if an ARB extension is present, the core function will be present as well. Many ARB extensions exist solely to allow OpenGL implementations that cannot implement a full version for some reason to nevertheless provide at least the functionality the hardware can support.
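If you want to be strict about it anyway, pick one entry point at startup based on the context version, and fall back to the extension only when the core function is unavailable. A hypothetical sketch (hasExtension is an assumed helper that scans the extension string; vbo is an existing buffer name):

#include <cstdio>

// Parse the version from glGetString, which works on any context.
const char* version = (const char*)glGetString(GL_VERSION); // e.g. "1.4.0 ..."
int major = 0, minor = 0;
sscanf(version, "%d.%d", &major, &minor);

if (major > 1 || (major == 1 && minor >= 5)) {
    glBindBuffer(GL_ARRAY_BUFFER, vbo);           // core entry point, GL 1.5+
} else if (hasExtension("GL_ARB_vertex_buffer_object")) {
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);    // extension fallback
}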

Peculiar behavior in OpenGL concerning Vertex Shaders and input vertices on core profile 330

Using '#version 330 core' on NVIDIA:
By calling glBindAttribLocation(program, 0, "in_Vertex"); the input vertex works with an "in vec4 in_Vertex;". However, I noticed that without that call in the client app, it still works; it appears to be 'the default first input variable'. Why? Should the call be omitted, or should the attribute be explicitly connected via glBindAttribLocation? What is the ideal according to the standard? Also, "in vec4 gl_Vertex;" works, while the spec calls it deprecated and the shader compiler does not give any warning. Why? I would have expected it to at least warn. I guess the last one may be a bug in the GLSL compiler, but the first issue is especially puzzling.
If you don't bind attributes to locations, the GL will assign locations for you. Which locations it uses is unspecified, but you happened to get the same value.
You can ask the GL which location it has chosen with glGetAttribLocation.
Even though it's not specified, I've seen implementations bind locations in declaration order in the shader: the first is 0, the second is 1, and so on.
The standard leaves both options open because there are valid use cases for each.
As for deprecation, NVIDIA has clearly stated that they think deprecation was the wrong decision. In the end, somebody has to write the code that emits the warning, so it's not that surprising that they don't warn, even if they ought to.
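A minimal sketch of both approaches, using the program object and in_Vertex attribute from the question:

// Option 1: choose the location yourself; this must happen BEFORE linking.
glBindAttribLocation(program, 0, "in_Vertex");
glLinkProgram(program);

// Option 2: let the linker choose, then query the result AFTER linking.
GLint loc = glGetAttribLocation(program, "in_Vertex");
if (loc != -1) {                      // -1 means the attribute is inactive
    glEnableVertexAttribArray(loc);
    glVertexAttribPointer(loc, 4, GL_FLOAT, GL_FALSE, 0, nullptr);
}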

GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT errors

I'm using FBOs in my OpenGL code and I'm seeing compilation errors on GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT. What's the cause of this and how do I fix it?
The cause of this error is that older versions of NVIDIA's glext.h still had this definition, whereas the most recent versions of GLEW don't. This leads to compilation errors in code that you wrote previously or got from the web.
The GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT definition for FBOs used to be present in the specification (and hence in header files), but it was later removed. The reason for this can be found in the FBO extension specification (look for Issue 87):
(87) What happens if a single image is attached more than once to a
framebuffer object?
RESOLVED: The value written to the pixel is undefined.
There used to be a rule in section 4.4.4.2 that resulted in
FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT if a single
image was attached more than once to a framebuffer object.
FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT 0x8CD8
* A single image is not attached more than once to the
framebuffer object.
{ FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT }
This rule was removed in version #117 of the
EXT_framebuffer_object specification after discussion at the
September 2005 ARB meeting. The rule essentially required an
O(n*lg(n)) search. Some implementations would not need to do that
search if the completeness rules did not require it. Instead,
language was added to section 4.10 which says the values
written to the framebuffer are undefined when this rule is
violated.
To fix this error, remove all usage of GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT from your code.
If this isn't possible in your setup, then add a dummy definition to your glext.h or glew.h file like this:
#define GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT 0x8CD8