GLSL, literal constant Input Layout Qualifiers

I wonder if I can have something like this:
layout (location = attr.POSITION) in vec3 position;
where, for example, attr is a constant struct:
const struct Attr
{
int POSITION;
} attr = Attr(0);
I already tried this, but the compiler complains:
Shader status invalid: 0(34) : error C0000: syntax error, unexpected
integer constant, expecting identifier or template identifier or type
identifier at token ""
Or, if there is no way to do it with structs, can I use something else to declare a literal input qualifier such as attr.POSITION?

GLSL has no such thing as a const struct declaration. It does, however, have compile-time constant values:
const int position_loc = 0;
The rules for constant expressions say that a const-qualified variable which is initialized with a constant expression is itself a constant expression.
And there ain't no rule that says that the type of such a const-qualified variable must be a basic type:
struct Attr
{
int position;
};
const Attr attr = {1};
Since attr is initialized with an initialization list containing constant expressions, attr is itself a constant expression. Which means that attr.position is a constant expression too, one of integral type.
And such a compile-time integral constant expression can be used in layout qualifiers, but only if you're using GLSL 4.40 or ARB_enhanced_layouts:
layout(location = attr.position) in vec3 position;
Before that version, you'd be required to use an actual literal. Which means the best you could do would be a #define:
#define position_loc 1
layout(location = position_loc) in vec3 position;
Now personally, I would never rely on such integral-constant-expression-within-struct gymnastics. Few people rely on them, so driver code rarely gets tested in this fashion, and the likelihood of encountering a driver bug is fairly large. The #define method is far more likely to work in practice.
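If you go the #define route, the host application can even generate the defines, so the C++ side stays the single source of truth for the locations. A minimal sketch of that idea, assuming a loader such as GLEW provides the GL 2.0+ entry points; compileVertexShader is a hypothetical helper, and the multi-string form of glShaderSource is what lets us prepend a preamble:
#include <GL/glew.h> // or any other loader providing GL 2.0+ entry points

// Hypothetical helper: prepend generated #defines to the shader body,
// using the fact that glShaderSource concatenates multiple strings.
GLuint compileVertexShader(const char* body)
{
    const char* sources[2] = {
        "#version 330 core\n"
        "#define position_loc 1\n", // generated from host-side constants
        body                        // shader text without its own #version line
    };
    GLuint shader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(shader, 2, sources, nullptr);
    glCompileShader(shader);
    return shader;
}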

Related

Vulkan Array of Specialization Constants

Is it possible to have an array of specialization constants such that the GLSL code looks similar to the following:
layout(constant_id = 0) const vec2 arr[2] = vec2[] (
vec2(2.0f, 2.0f),
vec2(4.0f, 4.0f)
);
or, alternatively:
layout(constant_id = 0) const float arr[4] = float[] (
2.0f, 2.0f,
4.0f, 4.0f
);
As far as I have read, there is no limit to the number of specialization constants that can be used, so it feels strange that this wouldn't be possible; but when I attempt the above, the SPIR-V compiler notifies me that 'constant_id' can only be applied to a scalar. Currently I am using a uniform buffer to provide the data, but I would like to eliminate the backing buffer and the need to bind it before drawing, as well as allow the system to optimize the code during pipeline creation, if possible.
The shading languages (both Vulkan-GLSL and SPIR-V) make something of a distinction between the definition of a specialization constant within the shader and the interface for specializing those constants. But they go about this process in different ways.
In both languages, the external interface to a specialization constant only works on scalar values. That is, though you can set multiple constants to values, the constants you're setting are each a single scalar.
SPIR-V allows you to declare a specialization constant which is a composite (array/vector/matrix). However, the components of this composite must be either specialization constants or constant values. If those components are scalar specialization constants, you can OpDecorate them with an ID, which the external code will access.
Vulkan (and OpenGL) GLSL go about this slightly differently from raw SPIR-V. In GLSL, a const-qualified value with a constant_id is a specialization constant. These must be scalars.
However, you can also have a const-qualified value that is initialized by values that are either constant expressions or specialization constants. You don't qualify these with a constant_id, but you build them from things that are so qualified:
layout(constant_id = 18) const int scX = 1;
layout(constant_id = 19) const int scZ = 1;
const vec3 scVec = vec3(scX, 1, scZ); // partially specialized vector
const-qualified values that are initialized from specialization constants are called "partially specialized". When this GLSL is converted into SPIR-V, these are converted into OpSpecConstantComposite values.
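On the Vulkan side, the scalar-only external interface is visible in the API itself: each VkSpecializationMapEntry names one constant_id and one scalar-sized slice of the data blob. A minimal sketch of specializing scX and scZ from the snippet above (the SpecData layout and the makeStage helper are illustrative assumptions, not Vulkan API):
#include <vulkan/vulkan.h>
#include <cstddef>
#include <cstdint>

// Hypothetical host-side blob backing constant_id 18 and 19 above.
struct SpecData {
    int32_t scX;
    int32_t scZ;
};

VkPipelineShaderStageCreateInfo makeStage(VkShaderModule module)
{
    // Static so the pointers stored in the returned struct stay valid.
    static const SpecData data = { 4, 2 };

    // One map entry per scalar: there is no entry type for a whole vector.
    static const VkSpecializationMapEntry entries[2] = {
        { 18, offsetof(SpecData, scX), sizeof(int32_t) },
        { 19, offsetof(SpecData, scZ), sizeof(int32_t) },
    };

    static const VkSpecializationInfo specInfo = {
        2, entries, sizeof(SpecData), &data
    };

    VkPipelineShaderStageCreateInfo stage = {};
    stage.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
    stage.stage = VK_SHADER_STAGE_VERTEX_BIT;
    stage.module = module;
    stage.pName = "main";
    stage.pSpecializationInfo = &specInfo;
    return stage;
}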

Clarification of GLSL Function Calling Conventions

I recently encountered some confusion while using a GLSL function which modified (and copied out) one of its input parameters. Let's suppose this is the function:
float v(inout uint idx) {
return 3.14 * idx++;
}
Now let's use that function in a potentially ambiguous way:
uint idx = 0;
const vec4 values = vec4(v(idx), v(idx), v(idx), v(idx));
We might reasonably assume that after the call to the vec4 constructor returns, our vector values should equal {0.00, 3.14, 6.28, 9.42} and idx should equal 4. However, it occurred to me to wonder whether the order of evaluation of function arguments in GLSL is well defined, and if so, whether the assumed ordering above is correct. Alternatively, could this result in (implementation-dependent) undefined behavior?
So of course I consulted the GLSL spec (v4.6, rev 3, §6.1.1, p116, "Function Calling Conventions"), which has the following to say:
All arguments are evaluated at call time, exactly once, in order, from left to right.
So far so good. But then farther down the page:
The order in which output parameters are copied back to the caller is undefined.
I'm not entirely clear on the significance of this second statement.
Does it mean that for the function float doWork(inout uint v1, inout uint v2) {...} that the order in which v1 and v2 are copied back is undefined? This would matter if you did something like passing the same local variable in place of both parameters.
Alternatively, does it instead mean that in the earlier example, the order in which the variable idx is updated is undefined, and as such the ordering of values is also undefined?
Or perhaps both of these cases are undefined? That is, perhaps all copy-back operations on the entire line of code happen in an unordered manner?
It goes without saying that using multiple variables to hold the values prior to the vec4 constructor call would trivially avoid this question entirely, but that's not the point. Rather, I'd like to know how this part of the standard was meant to be interpreted and whether or not my first example would result in idx containing an undefined value.

gcc suppress warning "too small to hold all values of"

I need to use scoped enums so that I can pass them as specific types to our serialiser. I have given explicit integer values for the enum members of Enum1.
I have put two scoped enums matching the description above into bit-fields, thus:
enum class Enum1 {
value1 = 0x0,
value2 = 0x1,
value3 = 0x2
};
enum class Enum2 {
value1 = 0x0,
value2,
value3,
// ...
value14
};
struct Example {
Enum1 value1 : 2;
Enum2 value2 : 6;
};
Now wherever I use the Example type, I get the warning "'Example::value1' is too small to hold all values of 'Enum1'", and similarly for Enum2. Note that the bit-fields are wide enough for all the values we have defined, and we are not at all concerned with values outside of these.
This is quite a serious distraction in our build process - the project is large and complex and we don't want to have to scan through many of these warnings (and there are many).
I had a look for a GCC (G++) flag to disable the specific warning. Is there one that I can pass on the command line? Ideally, I would use the warning pragma to disable it locally, if possible.
There is little scope for changing the code structure at this point, but we could really use these spurious warnings removed.
Edit: Added scoped enums with identifiers changed.
The problem is that a scoped enum always has a fixed underlying type. By default it is int, but you can change it to any other integral type, such as unsigned char.
Unfortunately you cannot change the underlying type to a bit-field, as they are not real C++ types.
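To make those two points concrete (Small is a hypothetical name): the underlying type is selectable, but a bit-field width is not a type, so there is nothing you could write after the colon that means "2 bits":
#include <cstdint>

// The underlying type can be any integral type...
enum class Small : std::uint8_t { a, b, c };

// ...but not a bit-field width. There is no spelling such as
//   enum class Small : int : 2 { ... };
// because bit-fields are not real C++ types.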
You could try disabling the warning, but a quick skim through the G++ code reveals these lines (gcc/cp/class.c:3468):
else if (TREE_CODE (type) == ENUMERAL_TYPE
&& (0 > (compare_tree_int
(w, TYPE_PRECISION (ENUM_UNDERLYING_TYPE (type))))))
warning_at (DECL_SOURCE_LOCATION (field), 0,
"%qD is too small to hold all values of %q#T",
field, type);
The key here is the call to warning_at(...) instead of warning(OPT_to_disable_the_warning, ...). So currently there is no option to disable it. Except recompiling the compiler yourself!
For what it is worth, Clang++ 3.7.1 does not warn about it.
As I recall, an enum with a declared underlying type can hold any value of that type, regardless of what enumeration constants are defined. Since you can say
Enum2 val = Enum2(148);
and expect it to work properly, the warning seems correct for that case. But you are not declaring an underlying type, and historically that means the enum is only guaranteed to be big enough to hold the range of values spanned by the lowest through highest enumeration constants. So I would expect no warning here. Maybe the new enum class also expects a full range even though the underlying type was automatically determined (or the compiler thinks it does)? You might try using a pure old-syntax enum and see if that works any differently.
Emitting this warning is a bug, because all of the declared enumerator values can in fact be held by the bit-field members.
Like with traditional enums, a variable of scoped enum type still can hold any value of its underlying type, even ones that don't correspond to a declared enumerator.
However, a warning like
warning: 'some bitfield field' is too small to hold all values of 'enum class FOO'
is quite pointless, because assigning a too-large value, as in
Example x;
x.value1 = Enum1(8);
already generates a -Woverflow warning.
Consequently, GCC fixed this warning in version 9.3.
FWIW, Clang never warned about this.
IOW, to suppress this warning in GCC you have to upgrade to GCC version 9.3 or later.
For other people like me who end up here from search:
This problem only applies to C++11 scoped enums. If you need bit-field enums, old-style enums without an explicit underlying type work fine:
enum Enum1 {
Enum1_value1 = 0x0,
Enum1_value2 = 0x1,
Enum1_value3 = 0x2
};
enum Enum2 {
Enum2_value1 = 0x0,
Enum2_value2,
Enum2_value3,
// ...
Enum2_value14
};
struct Example {
Enum1 value1 : 2;
Enum2 value2 : 6;
};
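A quick usage sketch, reusing the declarations just above; on affected GCC versions this compiles without the "too small" warning:
int main()
{
    Example e;
    e.value1 = Enum1_value3;  // fits in 2 bits
    e.value2 = Enum2_value14; // fits in 6 bits
    return e.value1 == Enum1_value3 ? 0 : 1;
}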

Determining struct member byte-offsets at compile-time?

I want to find the byte offset of a struct member at compile-time. For example:
struct vertex_t
{
vec3_t position;
vec3_t normal;
vec2_t texcoord;
};
I want to know the byte offset of normal at compile time (in this case it should be 12).
I know that I could use offsetof, but that is a run-time function and I'd prefer not to use it.
Is what I'm trying to accomplish even possible?
EDIT: offsetof is compile-time, my bad!
offsetof is a compile-time constant. If we look at the draft C++ standard, section C.3 [C standard library], paragraph 2 says:
The C++ standard library provides 57 standard macros from the C library, as shown in Table 149.
and the table includes offsetof. If we go to the C99 draft standard, section 7.17 Common definitions, paragraph 3 includes:
offsetof(type, member-designator)
which expands to an integer constant expression that has type size_t, the value of
which is the offset in bytes [...]
In C:
offsetof is usually actually a macro, and due to its definition it will probably be optimized by the compiler so that it reduces to a constant value. And even if it does remain an expression, it is small enough that it should cause almost no overhead.
For example, at the file stddef.h, it is defined as:
#define offsetof(st, m) ((size_t)(&((st *)0)->m))
In C++:
Things get a bit more complicated, since the implementation must also cope with classes that have member functions and other kinds of members. So offsetof is defined as a macro that calls a compiler intrinsic:
#define offsetof(st, m) __builtin_offsetof(st, m)
If you need it only for plain structs, offsetof is good enough. Otherwise, I don't think it is possible.
Are you sure it is run-time?
The following works:
#include <iostream>
#include <cstddef> // for offsetof
#include <cstdint> // for int32_t
struct vertex_t
{
int32_t position;
int32_t normal;
int32_t texcoord;
};
const int i = offsetof(vertex_t, normal); //compile time..
int main()
{
std::cout<<i;
}
Also see here: offsetof at compile time
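A simple way to demonstrate the compile-time nature is static_assert, which only accepts constant expressions. A minimal sketch, assuming vec3_t and vec2_t are plain float structs so the expected offset is 12:
#include <cstddef>

struct vec3_t { float x, y, z; };
struct vec2_t { float x, y; };

struct vertex_t
{
    vec3_t position;
    vec3_t normal;
    vec2_t texcoord;
};

// This line would fail to compile if offsetof were not a constant expression.
static_assert(offsetof(vertex_t, normal) == 12,
              "normal is expected to sit 12 bytes into vertex_t");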

Using elements of a constant array as cases in a switch statement

I'm attempting to map a set of key presses to a set of commands. Because I process the commands from several places, I'd like to set up a layer of abstraction between the keys and the commands so that if I change the underlying key mappings, I don't have to change very much code. My current attempt looks like this:
// input.h
enum LOGICAL_KEYS {
DO_SOMETHING_KEY,
DO_SOMETHING_ELSE_KEY,
...
countof_LOGICAL_KEYS
};
static const SDLKey LogicalMappings[countof_LOGICAL_KEYS] = {
SDLK_RETURN, // Do Something
SDLK_ESCAPE, // Do Something Else
...
};
// some_other_file.cpp
...
switch( event.key.keysym.key ) {
case LogicalMappings[ DO_SOMETHING_KEY ]:
doSomething();
break;
case LogicalMappings[ DO_SOMETHING_ELSE_KEY ]:
doSomethingElse();
break;
...
}
When I try to compile this (gcc 4.3.2) I get the error message:
error: 'LogicalMappings' cannot appear in a constant-expression
I don't see why the compiler has a problem with this. I understand why you're not allowed to have variables in a case statement, but I was under the impression that you could use constants, as they could be evaluated at compile-time. Do constant arrays not work with switch statements? If so, I suppose I could just replace the array with something like:
static const SDLKey LOGICAL_MAPPING_DO_SOMETHING = SDLK_RETURN;
static const SDLKey LOGICAL_MAPPING_DO_SOMETHING_ELSE = SDLK_ESCAPE;
...
But that seems much less elegant. Does anybody know why you can't use a constant array here?
EDIT: I've seen the bit of the C++ standard that claims that, "An integral constant-expression can involve only literals (2.13), enumerators, const variables or static data members of integral or enumeration types initialized with constant expressions (8.5)...". I still don't see why a constant array doesn't count as an "enumeration type initialized with a constant expression." It could just be that the answer to my question is "because that's the way that it is," and I'll have to work around it. But if that's the case, it's sort of disappointing, because the compiler really could determine those values at compile-time.
Referring to sections of the C++ standard: 6.4.2 requires that case expressions evaluate to an integral or enumeration constant. 5.19 defines what that is:
An integral constant-expression can involve only literals (2.13), enumerators, const variables or static data members of integral or enumeration types initialized with constant expressions (8.5), non-type template parameters of integral or enumeration types, and sizeof expressions. Floating literals (2.13.3) can appear only if they are cast to integral or enumeration types. Only type conversions to integral or enumeration types can be used. In particular, except in sizeof expressions, functions, class objects, pointers, or references shall not be used, and assignment, increment, decrement, function-call, or comma operators shall not be used.
So if your question was "why does the compiler reject this", one answer is "because the standard says so".
Array references aren't "constant enough", regardless.
You just need to do the mapping slightly differently. You want the same action to occur when the logical key is pressed, so use the logical key codes in the case clauses of the switch statement. Then map the actual key code to the logical code, possibly in the switch itself, or possibly beforehand. You can still use the LogicalMappings array, or a similar construct. And, as an aid to G11N (globalization), you can even make the mapping array non-constant so that different people can remap the keys to suit their needs.
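A minimal sketch of that inversion, reusing the declarations from the question (the reverse lookup is linear for brevity):
// Translate the physical key into a logical key first...
LOGICAL_KEYS logical = countof_LOGICAL_KEYS;
for (int i = 0; i < countof_LOGICAL_KEYS; ++i) {
    if (LogicalMappings[i] == event.key.keysym.key) {
        logical = static_cast<LOGICAL_KEYS>(i);
        break;
    }
}

// ...then switch on the logical codes, which are enumerators
// and therefore perfectly valid case labels.
switch (logical) {
    case DO_SOMETHING_KEY:
        doSomething();
        break;
    case DO_SOMETHING_ELSE_KEY:
        doSomethingElse();
        break;
    default:
        break;
}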
I'll go out on a limb here, since nobody else has replied and I've mostly been doing Java recently, not C++; but as far as I recall, an array lookup is not considered a constant integer even if the result of the lookup can be determined at compile time. This may even be an issue in the syntax.
Is there a comparison operator defined for LogicalMappings? If not, then that is the error.
There is a library in Boost called Signals that will help you create an event-mapping abstraction. If you have time, this would be a better approach.
You could also use an array of function pointers or functors (well, functor addresses) to avoid the switch statement altogether, and just go from array index to function pointer / functor directly.
For example (warning, untested code follows):
class Event // you probably have this defined already
{
};
class EventHandler // abstract base class
{
public:
virtual void operator()(Event& e) = 0;
};
class EventHandler1 : public EventHandler
{
public:
virtual void operator()(Event& e){
// do something here
}
};
class EventHandler2 : public EventHandler
{
public:
virtual void operator()(Event& e){
// do something here
}
};
EventHandler1 ev1;
EventHandler2 ev2;
EventHandler *LogicalMappings[countof_LOGICAL_KEYS] = {
&ev1,
&ev2,
// more here...
};
// time to use code:
Event event;
if (event.key.keysym.key < countof_LOGICAL_KEYS)
{
EventHandler *p = LogicalMappings[event.key.keysym.key];
if (p != NULL)
(*p)(event);
}
A compiler guru at work explained this to me. The problem is that the array itself is constant, but indices into it aren't necessarily const. Thus, the expression LogicalMappings[some_variable] couldn't be evaluated at compile time, so the array winds up being stored in memory anyway rather than getting compiled out. There's still no reason why the compiler couldn't statically evaluate array accesses with a const or literal index, so what I want to do should theoretically be possible, but it's a bit trickier than I'd thought, and I can understand why gcc doesn't do it.
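For what it's worth, C++11's constexpr later resolved exactly this: an element of a constexpr array, indexed by a constant, is a genuine constant expression and is accepted as a case label. A minimal sketch, with SDLKey and the key codes stubbed out as assumptions so it stands alone:
using SDLKey = int;                // stand-in for the real SDL type
constexpr SDLKey SDLK_RETURN = 13; // illustrative values
constexpr SDLKey SDLK_ESCAPE = 27;

enum LOGICAL_KEYS {
    DO_SOMETHING_KEY,
    DO_SOMETHING_ELSE_KEY,
    countof_LOGICAL_KEYS
};

constexpr SDLKey LogicalMappings[countof_LOGICAL_KEYS] = {
    SDLK_RETURN,
    SDLK_ESCAPE,
};

void dispatch(SDLKey key)
{
    switch (key) {
    case LogicalMappings[DO_SOMETHING_KEY]:      // constant expression in C++11
        /* doSomething(); */
        break;
    case LogicalMappings[DO_SOMETHING_ELSE_KEY]:
        /* doSomethingElse(); */
        break;
    }
}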