GLSL endianness macro

Is there any way to check the endianness of the target machine inside GLSL? For example, in OpenCL I could use:
#ifdef __ENDIAN_LITTLE__
uint x = v << 1;
#else
uint x = v >> 1;
#endif

Searching for "endian" in the GLSL 4.60 specification doesn't yield a single match, and a search for "order" only turns up irrelevant results. So I suppose there is no built-in way to query the endianness of the GPU from GLSL.
Your best bet is to detect it once at startup with a specially crafted shader (e.g. read a supplied byte buffer as a uint, write an endianness-dependent result to the output fragment, then read it back with glReadPixels), and then, based on the result, insert the appropriate #defines into the shaders that depend on it.
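For illustration, here is a minimal C++ sketch of that probe, assuming an existing OpenGL 3.3+ context and a 1x1 render target. The loader header, the function name, and the already-linked probeProgram built from the probe shader are assumptions, and the full-screen draw is only indicated by a comment:

#include <GL/glew.h>   // any GL 3.3+ function loader will do; GLEW is just an assumption

// Fragment shader that reads a 4-byte buffer through a std140 uniform block as one uint
// and reports whether the bytes {0x01, 0x00, 0x00, 0x00} came back as 1u (little-endian).
static const char* kEndianProbeFrag = R"(
    #version 330 core
    layout(std140) uniform Probe { uint value; };
    out vec4 color;
    void main() {
        color = (value == 1u) ? vec4(1.0) : vec4(0.0);
    }
)";

// probeProgram is a hypothetical, already-linked program that uses kEndianProbeFrag.
bool deviceReadsUintAsLittleEndian(GLuint probeProgram)
{
    const unsigned char bytes[4] = { 0x01, 0x00, 0x00, 0x00 };

    GLuint ubo = 0;
    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, sizeof(bytes), bytes, GL_STATIC_DRAW);
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);
    glUniformBlockBinding(probeProgram, glGetUniformBlockIndex(probeProgram, "Probe"), 0);

    // ... draw one full-screen triangle with probeProgram into the 1x1 framebuffer ...

    unsigned char pixel[4] = { 0, 0, 0, 0 };
    glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
    glDeleteBuffers(1, &ubo);
    return pixel[0] != 0;   // red channel set means the uint came back as 1u
}

Depending on the result, prepend something like #define TARGET_LITTLE_ENDIAN 1 to the source of the shaders that care before compiling them.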

Related

GLSL produced by SPIRV-cross from SPIR-V breaks std140 rules?

I've put my HLSL code snippet here: https://shader-playground.timjones.io/d9011ef7826a68ed93394792c2edb732
I compile HLSL with DXC to SPIR-V and then use SPIRV-Cross to get the GLSL code.
The GLSL constant buffer is tagged with std140, and it contains a vec3 and a float.
As far as I know, this will not work. Shouldn't GL_EXT_scalar_block_layout be used here? The constant block should be tagged with scalar instead of std140. Am I missing something obvious here? Thanks.
For an arbitrary input buffer, there isn't a generic OpenGL memory layout that is exactly equivalent to the DX constant buffer layout.
DX constant buffers will add padding needed to stop single variables spanning 16 byte boundaries, but variables themselves are only 4 byte aligned.
GL std140 uniform buffers will always align vec3 on a 16 byte boundary. This has no equivalent in DX.
GL std430 uniform buffers (if supported via GL_EXT_scalar_block_layout) will always align vec3 on a 16 byte boundary. This has no equivalent in DX.
GL scalar uniform buffers (if supported via GL_EXT_scalar_block_layout) will only pad to component element size, and don't care about 16 byte boundaries. This has no equivalent in DX.
Things get even more fun if you start throwing around struct and array types ...
TL;DR: if you want a fixed binary memory layout that is portable across DX, GL/GLES, and Vulkan, you have to take responsibility for designing a portable memory layout for your constant buffers yourself. You can't throw arbitrary layouts around and expect them to work.
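As a sketch of what such a hand-designed layout can look like (the struct and field names here are invented), the usual trick is to avoid bare vec3 members, or to pair each one with a scalar so that every group fills a full 16 bytes, and to mirror that on the C++ side with plain 4-byte fields:

// C++ mirror of a block that lays out identically under HLSL cbuffer packing and under
// GLSL std140, std430, and scalar rules: each vec3 is followed by the float that fills
// its fourth component, so no rule has anything left to pad.
struct LightParams {              // GLSL:  layout(std140) uniform LightParams {
    float position[3];            //            vec3  position;    // bytes  0..11
    float range;                  //            float range;       // bytes 12..15
    float color[3];               //            vec3  color;       // bytes 16..27
    float intensity;              //            float intensity;   // bytes 28..31
};                                //        };
static_assert(sizeof(LightParams) == 32,
              "all members are 4-byte floats, so the compiler should insert no padding");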

enum usage for bitwise and in GLSL

OK, this is probably an easy one for the pros out there. I want to use an enum in GLSL so I can do a bitwise AND check on it in an if, like in C++.
Pseudo C++ code:
enum PolyFlags
{
    Invisible   = 0x00000001,
    Masked      = 0x00000002,
    Translucent = 0x00000004,
    ...
};
...
if (Flag & Masked)
    Alphathreshold = 0.5;
But I am lost right at the beginning, because it already fails to compile with:
'enum' : Reserved word
I read that enums in GLSL are supposed to work, as is the bitwise AND, but I can't find a working example.
So, is it actually supported, and if so, how? I have already tried different #version directives in the shader, but no luck so far.
The OpenGL Shading Language does not have enumeration types. However, enum is a reserved keyword, which is why you get that particular compiler error.
C enums are really just syntactic sugar for integer constants (C++ gives them some type safety, and enum classes much more). So you can emulate them in a number of ways. Perhaps the most traditional (and most dangerous) is with #defines:
#define Invisible 0x00000001u
#define Masked 0x00000002u
#define Translucent 0x00000004u
A more reasonable way is to declare compile-time const-qualified global variables. Any GLSL compiler worth using will optimize them away to nothing, so they won't take up any more resources than the #defines, and they don't have any of the #defines' drawbacks.
const uint Invisible = 0x00000001u;
const uint Masked = 0x00000002u;
const uint Translucent = 0x00000004u;
Obviously, you need to be using a version of GLSL that supports unsigned integers and bitwise operations (aka: GLSL 1.30+, or GLSL ES 3.00+).
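As a usage sketch (held here in a C++ string literal ready for glShaderSource; the uniform names and the threshold logic are invented), note one difference from the C++ pseudo code: GLSL will not implicitly convert the uint result of & to bool, so the comparison against 0u has to be written out:

const char* kFlagsFragSrc = R"(
    #version 330 core
    const uint Invisible   = 0x00000001u;
    const uint Masked      = 0x00000002u;
    const uint Translucent = 0x00000004u;

    uniform uint      PolyFlags;   // set from the application, e.g. with glUniform1ui
    uniform sampler2D Tex;
    in  vec2 TexCoord;
    out vec4 FragColor;

    void main() {
        vec4 c = texture(Tex, TexCoord);
        float alphaThreshold = 0.0;
        if ((PolyFlags & Masked) != 0u)   // bitwise AND, then an explicit bool test
            alphaThreshold = 0.5;
        if (c.a < alphaThreshold)
            discard;
        FragColor = c;
    }
)";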

How to ensure correct struct-field alignment between C++ and OpenGL when passing indirect drawing commands for use by glDrawElementsIndirect?

The documentation for glDrawElementsIndirect, glDrawArraysIndirect, glMultiDrawElementsIndirect, etc. says things like this about the structure of the commands that must be given to them:
The parameters addressed by indirect are packed into a structure that takes the form (in C):
typedef struct {
    uint count;
    uint instanceCount;
    uint firstIndex;
    uint baseVertex;
    uint baseInstance;
} DrawElementsIndirectCommand;
When a struct representing a vertex is uploaded to OpenGL, it's not just sent there as a block of data--there are also calls like glVertexAttribFormat() that tell OpenGL where to find attribute data within the struct. But as far as I can tell from reading documentation and such, nothing like that happens with these indirect drawing commands. Instead, I gather, you just write your drawing-command struct in C++, like the above, and then send it over via glBufferData or the like.
The OpenGL headers I'm using declare types such as GLuint, so I guess I can be confident that the ints in my command struct will be the right size and have the right format. But what about the alignment of the fields and the size of the struct? It appears that I just have to trust OpenGL to expect exactly what I happen to send--and from what I read, that could in theory vary depending on what compiler I use. Does that mean that, technically, I just have to expect that I will get lucky and have my C++ compiler choose just the struct format that OpenGL and/or my graphics driver and/or my graphics hardware expects? Or is there some guarantee of success here that I'm not grasping?
(Mind you, I'm not truly worried about this. I'm using a perfectly ordinary compiler, and planning to target commonplace hardware, and so I expect that it'll probably "just work" in practice. I'm mainly only curious about what would be considered strictly correct here.)
It is a buffer object (DRAW_INDIRECT_BUFFER to be precise); it is expected to contain a contiguous array of that struct. The correct type is, as you mentioned, GLuint. This is always a 32-bit unsigned integer type. You may see it referred to as uint in the OpenGL specification or in extensions, but understand that in the C language bindings you are expected to add GL to any such type name.
You are generally not going to run into alignment issues with this data structure on desktop platforms, since every field is a 32-bit scalar. The GPU can fetch those on any 4-byte boundary, which is what a compiler will align each of the fields in this structure to. If you threw a ubyte somewhere in there you would need to worry, but of course then you would be using the wrong data structure.
As such, there is only one requirement on the GL side of things: the struct has to start on a word-aligned boundary. That means only addresses (offsets) that are multiples of 4 will work when calling glDrawElementsIndirect (...); any other address will yield GL_INVALID_OPERATION.
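For illustration, a sketch of filling and submitting a single command, assuming a GL 4.0+ (or ARB_draw_indirect) context with a VAO and its GL_ELEMENT_ARRAY_BUFFER already bound; the loader header, field values, and names are invented:

#include <GL/glew.h>   // any GL function loader will do; GLEW is just an assumption

struct DrawElementsIndirectCommand {
    GLuint count;
    GLuint instanceCount;
    GLuint firstIndex;
    GLuint baseVertex;
    GLuint baseInstance;
};
// Five tightly packed 32-bit fields: no mainstream compiler will insert padding here.
static_assert(sizeof(DrawElementsIndirectCommand) == 5 * sizeof(GLuint),
              "unexpected padding in the indirect command struct");

void drawIndirectExample(GLuint indexCount)
{
    DrawElementsIndirectCommand cmd = {};
    cmd.count         = indexCount;   // number of indices to draw
    cmd.instanceCount = 1;
    cmd.firstIndex    = 0;
    cmd.baseVertex    = 0;
    cmd.baseInstance  = 0;

    GLuint dib = 0;
    glGenBuffers(1, &dib);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, dib);
    glBufferData(GL_DRAW_INDIRECT_BUFFER, sizeof(cmd), &cmd, GL_STATIC_DRAW);

    // The last argument is a byte offset into the bound GL_DRAW_INDIRECT_BUFFER;
    // as noted above, it must be a multiple of 4.
    glDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, (const void*)0);

    glDeleteBuffers(1, &dib);
}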

GLSL: Data Distortion

I'm using OpenGL 3.3 with GLSL 1.50 compatibility. I'm getting a strange problem with my vertex data: I'm trying to pass an index value to the fragment shader, but the value seems to change based on my camera position.
This should be simple: I pass a GLfloat through the vertex shader to the fragment shader and then convert this value to an unsigned integer. The value is correct the majority of the time, except at the edges of the fragment. No matter what I do, the same distortion appears. Why does my camera position change this value? Even in the ridiculous example below, tI erratically equals something other than 1.0:
uint i;
if (tI == 1.0) i = 1;
else i = 0;
vec4 color = texture2D(tex[i], t);
If I send integer data instead of float data I get exactly the same problem. It does not seem to matter what I enter as vertex data; the value is not consistent across the fragment. The distortion even looks exactly the same each time.
What you are doing here is invalid in OpenGL/GLSL 3.30.
Let me quote the GLSL 3.30 specification, section 4.1.7 "Samplers" (emphasis mine):
Samplers aggregated into arrays within a shader (using square brackets [ ]) can only be indexed with integral constant expressions (see section 4.3.3 “Constant Expressions”).
Using a varying as index to a texture does not represent a constant expression as defined by the spec.
Beginning with GL 4.0, this was somewhat relaxed. The GLSL 4.00 specification now states the following (still my emphasis):
Samplers aggregated into arrays within a shader (using square brackets [ ]) can only be indexed with a dynamically uniform integral expression, otherwise results are undefined.
With dynamically uniform being defined as follows:
A fragment-shader expression is dynamically uniform if all fragments evaluating it get the same resulting value. When loops are involved, this refers to the expression's value for the same loop iteration. When functions are involved, this refers to calls from the same call point.
So now this is a bit tricky. If all fragment shader invocations actually get the same value for that varying, it would be allowed, I guess; but it is unclear whether your code guarantees that. You should also take into account that the fragment might even be sampled outside of the primitive.
However, you should never check floats for equality; there will be numerical issues. I don't know what exactly you are trying to achieve here, but you might use some simple rounding behavior, or use an integer varying. In any case, you should also disable interpolation of the value using the flat qualifier (which is required for the integer case anyway), which should greatly improve the chances of that construct becoming dynamically uniform.
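As a sketch of the integer-varying-plus-flat suggestion (both stages are held in C++ string literals; the attribute locations, uniform names, and array size are invented), keep in mind that indexing tex[] with this value is still, strictly speaking, outside GLSL 3.30's constant-expression rule and only becomes defined behaviour from GL 4.0 onwards when the index is dynamically uniform:

const char* kVertSrc = R"(
    #version 330 core
    layout(location = 0) in vec3 position;
    layout(location = 1) in vec2 texCoord;
    layout(location = 2) in int  texIndex;   // integer attribute, no float round trip
    uniform mat4 mvp;
    out vec2 vTexCoord;
    flat out int vTexIndex;                  // flat: no interpolation across the primitive
    void main() {
        vTexCoord   = texCoord;
        vTexIndex   = texIndex;
        gl_Position = mvp * vec4(position, 1.0);
    }
)";

const char* kFragSrc = R"(
    #version 330 core
    uniform sampler2D tex[2];
    in vec2 vTexCoord;
    flat in int vTexIndex;                   // must match the vertex-shader qualifier
    out vec4 color;
    void main() {
        color = texture(tex[vTexIndex], vTexCoord);
    }
)";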

Why is gl_VertexID not an unsigned int?

I am in the process of designing a shader program that makes use of the built-in variable gl_VertexID:
gl_VertexID — contains the index of the current vertex
The variable is defined as a signed int. Why is it not an unsigned int? What happens when it is used with very large arrays (e.g. a 2^30-element array)?
Does GLSL treat it as an unsigned int?
I want to use its content as an output of my shader (e.g. by writing it into an output FBO buffer). I will read it back using glReadPixels with GL_RED_INTEGER as the format and either GL_INT or GL_UNSIGNED_INT as the type.
Which one is correct?
If I use GL_INT I will not be able to address very large arrays.
In order to use GL_UNSIGNED_INT I could cast the generated gl_VertexID to a uint inside my shader, but again, how do I address such a long array?
Most likely historical reasons. gl_VertexID was first defined as part of the EXT_gpu_shader4 extension. This extension is defined based on OpenGL 2.0:
This extension is written against the OpenGL 2.0 specification and version 1.10.59 of the OpenGL Shading Language specification.
GLSL did not yet support unsigned types at the time. They were not introduced until OpenGL 3.0.
I cannot tell whether OpenGL might treat the vertex ID as an unsigned int, but you can most likely create your own (full 32-bit) ID. I did this some time ago by supplying an rgba8888 vertex color attribute, which is converted to an ID in the shader by bit-shifting the r, g, b, and a components.
Doing this, I also noticed that it wasn't any slower than using gl_VertexID, which seemed to introduce some overhead. Nowadays, just use an unsigned int attribute (a sketch follows at the end of this answer).
Also, I wonder: why would you want to read back gl_VertexID?
(I did this once for an algorithm, but it turned out to be not well thought through and has since been replaced by something more efficient.)
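For completeness, a sketch of the "unsigned int attribute" route together with an integer read-back, assuming a GL 3.0+ context, an FBO whose colour attachment has the integer format GL_R32UI, and a fragment shader that declares out uint fragId; the attribute location, buffer contents, and all names are invented:

#include <GL/glew.h>   // any GL function loader will do; GLEW is just an assumption
#include <vector>

void setupIdAttributeAndReadBack(GLsizei width, GLsizei height)
{
    // Host side: one 32-bit ID per vertex, passed through untouched via the I-variant of
    // the attribute pointer (no conversion to float). In GLSL: layout(location = 3) in uint id;
    std::vector<GLuint> ids = { 0u, 1u, 2u /* ... one entry per vertex ... */ };
    GLuint idBuffer = 0;
    glGenBuffers(1, &idBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, idBuffer);
    glBufferData(GL_ARRAY_BUFFER, ids.size() * sizeof(GLuint), ids.data(), GL_STATIC_DRAW);
    glVertexAttribIPointer(3, 1, GL_UNSIGNED_INT, 0, (const void*)0);
    glEnableVertexAttribArray(3);

    // ... render into the integer FBO, writing the ID to fragId in the fragment shader ...

    // Read-back side: with an unsigned-integer attachment, GL_RED_INTEGER / GL_UNSIGNED_INT
    // preserves the full 32-bit range, so indices beyond 2^30 survive the round trip.
    std::vector<GLuint> pixels(static_cast<size_t>(width) * height);
    glReadPixels(0, 0, width, height, GL_RED_INTEGER, GL_UNSIGNED_INT, pixels.data());
}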