GLSL - why can't I index an array with a `uint64_t`?

I have this GLSL code that compiles fine:
#version 450
#extension GL_EXT_shader_explicit_arithmetic_types_int64 : enable

layout (local_size_x = 256) in;

layout(binding = 1) buffer OutBuffer {
    uint64_t outBuf[];
};

void main()
{
    uint myId = gl_GlobalInvocationID.x;
    outBuf[myId] = 0;
}
If I change the type of myId from uint to uint64_t, it doesn't compile:
ERROR: calc.comp.glsl:13: '[]' : scalar integer expression required
I can just use uint, but I'm curious why you can't use uint64_t.

Anything other than uint or int needs to be explicitly cast to one of those types when used to index an array:

uint64_t myId = gl_GlobalInvocationID.x;
outBuf[uint(myId)] = 0;
The GL_EXT_shader_explicit_arithmetic_types_*** extensions don't seem to say anything about using the types they introduce to index arrays.
They define implicit conversion rules such as uint16_t -> uint32_t (which is defined to be equivalent to uint), and these conversions do work for function parameters.
Curiously, though, you can't even use uint16_t as an array index and expect it to be implicitly promoted to uint32_t; you have to cast it explicitly to uint (or uint32_t).
So we're at the mercy of the original GLSL specification when indexing arrays: use uint or int, the only scalar integer types it knows.
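For example, a minimal sketch of the uint16_t case inside main() (assuming GL_EXT_shader_explicit_arithmetic_types_int16 is enabled):

uint16_t smallId = uint16_t(gl_GlobalInvocationID.x);
// outBuf[smallId] = 0;       // error: scalar integer expression required
outBuf[uint(smallId)] = 0;    // OK: explicit cast back to uint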

Related

Type Punning via constexpr union

I am maintaining an old code base that uses a union of an integer type with a bit-field struct for type punning. My compiler is VS2017. As an example, the code is similar to the following:
struct FlagsType
{
    unsigned flag0 : 1;
    unsigned flag1 : 1;
    unsigned flag2 : 1;
};

union FlagsTypeUnion
{
    unsigned flagsAsInt;
    FlagsType flags;
};

bool isBitSet(unsigned flagNum, FlagsTypeUnion flags)
{
    return ((1u << flagNum) & flags.flagsAsInt);
}
This code has a number of undefined-behavior issues. Namely, it is hotly debated whether type punning through a union is defined behavior in C++, and on top of that, the way bit-fields are packed is implementation-defined. To address these issues, I would like to add static_assert statements to validate that the VS implementation supports this approach. However, when I tried to add the following code, I got error C2131: expression did not evaluate to a constant.
union FlagsTypeUnion
{
    unsigned flagsAsInt;
    FlagsType flags;

    constexpr FlagsTypeUnion(unsigned const f = 0) : flagsAsInt{ f } {}
};

static_assert(FlagsTypeUnion{ 1 }.flags.flag0,
              "The code currently assumes bit-fields are packed from LSB to MSB");
Is there any way to add compile-time checks to verify that the type-punning and bit-packing code works the way the runtime code assumes? Unfortunately, this code is spread throughout the code base, so changing the structures isn't really feasible.
You might use std::bit_cast (C++20):
#include <bit>
#include <type_traits>

struct FlagsType
{
    unsigned flag0 : 1;
    unsigned flag1 : 1;
    unsigned flag2 : 1;
    unsigned padding : 32 - 3; // Needed for gcc
};

static_assert(std::is_trivially_constructible_v<FlagsType>);

constexpr FlagsType makeFlagsType(bool flag0, bool flag1, bool flag2)
{
    FlagsType res{};
    res.flag0 = flag0;
    res.flag1 = flag1;
    res.flag2 = flag2;
    return res;
}

static_assert(std::bit_cast<unsigned>(makeFlagsType(true, false, false)) == 1);
clang doesn't support it (yet), though, and gcc requires the padding bits to be added explicitly for the constexpr check to work.
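Building on that, a hypothetical sketch (C++20, reusing makeFlagsType from above) of asserting the exact LSB-first packing that isBitSet relies on:

// Each flag is expected to occupy the bit that isBitSet() shifts to (LSB-first).
static_assert(std::bit_cast<unsigned>(makeFlagsType(false, true, false)) == 2u,
              "flag1 is assumed to occupy bit 1");
static_assert(std::bit_cast<unsigned>(makeFlagsType(false, false, true)) == 4u,
              "flag2 is assumed to occupy bit 2");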

Specialization constant used for array size

I'm trying to use a SPIR-V specialization constant to define the size of an array in a uniform block.
#version 460 core

layout(constant_id = 0) const uint count = 0;

layout(binding = 0) uniform Uniform
{
    vec4 foo[count];
    uint bar[count];
};

void main() {}
With a declaration of count = 0 in the shader, compilation fails with:
array size must be a positive integer
With count = 1 and a specialization of 5, the code compiles but linking fails at runtime with complaints of aliasing:
error: different uniforms (named Uniform.foo[4] and Uniform.bar[3]) sharing the same offset within a uniform block (named Uniform) between shaders
error: different uniforms (named Uniform.foo[3] and Uniform.bar[2]) sharing the same offset within a uniform block (named Uniform) between shaders
error: different uniforms (named Uniform.foo[2] and Uniform.bar[1]) sharing the same offset within a uniform block (named Uniform) between shaders
error: different uniforms (named Uniform.foo[1] and Uniform.bar[0]) sharing the same offset within a uniform block (named Uniform) between shaders
It seems the layout of the uniform block (the offset of each member) is not updated during specialization, so foo and bar overlap.
Explicit offsets don't work either and result in the same link errors:
layout(binding = 0, std140) uniform Uniform
{
    layout(offset = 0) vec4 foo[count];
    layout(offset = count) uint bar[count];
};
Is this intended behavior? An oversight?
Can a specialization constant be used to define the size of an array?
This is an odd quirk of ARB_gl_spirv. From the extension specification:
Arrays inside a block may be sized with a specialization constant, but the block will have a static layout. Changing the specialized size will not re-layout the block. In the absence of explicit offsets, the layout will be based on the default size of the array.
Since the declared default size here is 1, the block is laid out as though each array had a single element, which is why the specialized arrays overlap.
Basically, you can use specialization constants to make the arrays shorter than the default, but not longer. And even if you make them shorter, they still take up the same space as the default.
So really, using specialization constants in block array lengths is just a shorthand way of declaring the array with the default value as its length, and then replacing where you would use name.length() with the specialization constant/expression. It's purely syntactic sugar.
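Given that, one workaround sketch (assuming a fixed upper bound is acceptable; the name MAX_COUNT is invented here): size the arrays with a compile-time maximum and treat the specialization constant as the logical length only:

layout(constant_id = 0) const uint count = 1;
const uint MAX_COUNT = 64u; // assumed upper bound for this sketch

layout(binding = 0, std140) uniform Uniform
{
    vec4 foo[MAX_COUNT];
    uint bar[MAX_COUNT];
};
// Consumers then loop with i < count instead of over foo.length().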

C++ map to type for creation

I am not sure how to do this in C++ ... and so I would like to seek the knowledge of the Stack :)
Basically I have an enum defining some values I care about:
enum class VertexField : uint16_t
{
    Position = 0,
    Color,
    Normal,

    Count,
    Invalid
};
Then I have an array that gives the size of each field when we allocate memory for that enum value:
const uint16_t sFieldSizes[] =
{
    12, // Position, 4 bytes each, float vec3
    4,  // Color, 1 byte each, unorm vec4
    12  // Normal, 4 bytes each, float vec3
};
Now what I want to do is to add another array that tells me the type ... so conceptually it would look like:
const TYPE sFieldTypes[] =
{
    glm::vec3,
    uint32_t,
    glm::vec3
};
I know the above isn't code that works, but what are some ways to do what I want that compile?
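Types aren't values, so they can't be stored in an array, but a trait template can map each enum value to a type at compile time. A sketch of that approach (assuming the glm headers are available; the names FieldTypeFor and FieldType are invented here):

#include <cstdint>
#include <glm/glm.hpp>

enum class VertexField : uint16_t { Position = 0, Color, Normal, Count, Invalid };

template <VertexField F>
struct FieldTypeFor; // primary template left undefined: unmapped fields fail to compile

template <> struct FieldTypeFor<VertexField::Position> { using type = glm::vec3; };
template <> struct FieldTypeFor<VertexField::Color>    { using type = uint32_t;  };
template <> struct FieldTypeFor<VertexField::Normal>   { using type = glm::vec3; };

template <VertexField F>
using FieldType = typename FieldTypeFor<F>::type;

// The size table can then be derived from the types instead of maintained by hand:
static_assert(sizeof(FieldType<VertexField::Position>) == 12);
static_assert(sizeof(FieldType<VertexField::Color>)    == 4);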

what is the C++ equivalent type of MPI_FLOAT_INT

I am writing a type_traits library for MPI, but when I define float int as the type for MPI_FLOAT_INT, I get a "two or more variable types in declaration" error. What is the equivalent type of MPI_FLOAT_INT in C++?
The only authoritative source, the MPI standard, defines MPI_FLOAT_INT as (Section 5.9.4 MINLOC and MAXLOC):
The datatype MPI_FLOAT_INT is as if defined by the following sequence of instructions.
type[0] = MPI_FLOAT
type[1] = MPI_INT
disp[0] = 0
disp[1] = sizeof(float)
block[0] = 1
block[1] = 1
MPI_TYPE_CREATE_STRUCT(2, block, disp, type, MPI_FLOAT_INT)
Similar statements apply for MPI_LONG_INT and MPI_DOUBLE_INT.
It means that the type corresponds to struct { float a; int b; }, but only if there is a guarantee that no padding is inserted between a and b. This might not be the case on systems where int is 64-bit and has to be aligned on 8 bytes. One might need to instruct the compiler to generate packed structures, e.g. with GCC:
#pragma pack(push, 1)
struct float_int
{
    float a;
    int b;
};
#pragma pack(pop)
Note that MPI_FLOAT_INT is intended to be used in MINLOC and MAXLOC reductions in order to find both the min/max float value and the lowest-numbered rank that holds it.
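For illustration, a minimal sketch of that MINLOC usage (assuming the layout guarantees discussed above hold on the target platform; the struct name FloatInt is invented here):

#include <mpi.h>

// Matches MPI_FLOAT_INT only if no padding is inserted between the members.
struct FloatInt
{
    float value; // the value being compared
    int   rank;  // the rank attached to it
};

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    FloatInt local{ static_cast<float>(rank) * 1.5f, rank };
    FloatInt global{};

    // Reduces to the minimum value and the lowest rank holding it.
    MPI_Allreduce(&local, &global, 1, MPI_FLOAT_INT, MPI_MINLOC, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}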
According to the documentation:
MPI_FLOAT_INT
This is a pair of a 32-bit floating point number followed by a 32-bit integer.
An equivalent would be std::pair<float, int> or struct float_int { float f; int i; };.
You can try to do a bit better with int32_t instead of int, and a static_assert(sizeof(float) == 4); in an attempt to get the size right.
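For example, a sketch of such checks (the struct name float_int and the exact assertions are assumptions, not from the answer):

#include <cstddef>
#include <cstdint>

struct float_int { float f; std::int32_t i; };

static_assert(sizeof(float) == 4, "float is assumed to be 32-bit");
static_assert(offsetof(float_int, i) == 4, "i is assumed to immediately follow f");
static_assert(sizeof(float_int) == 8, "no padding is assumed anywhere in the struct");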
According to the Linux man page, MPI_FLOAT_INT is a struct defined as:
struct { float, int }

Fragment shader for unsigned integer textures

I am using the following fragment shader to read data from unsigned integer textures:
Fragment shader:
#version 150

out uvec4 fragColor;
uniform uint factor;

void main()
{
    uint temp = factor;
    temp = temp / 2;
    fragColor = uvec4(temp, temp, temp, temp);
}
But I am getting an error on driver A:
"Compile failed.
ERROR: 0:7: '/' : Wrong operand types. No operation '/' exists that takes a left-hand operand of type 'uint' and a right operand of type 'const int' (and there is no acceptable conversion)
ERROR: 1 compilation errors. No code generated."
On driver B it runs perfectly. Is driver A buggy, or is my shader wrong? If it's wrong, how can I achieve the same result?
Try this:
temp = temp / uint(2);
GLSL does not allow implicit conversions between signed and unsigned integers, so both operands of a binary operator must have the same type. Use:
temp = temp / 2u;
to use an unsigned int constant.
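Putting that fix into the shader from the question, a corrected version might look like this (using the 2u literal from the answer above):

#version 150

out uvec4 fragColor;
uniform uint factor;

void main()
{
    uint temp = factor;
    temp = temp / 2u; // unsigned literal, so both operands are uint
    fragColor = uvec4(temp, temp, temp, temp);
}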