enum usage for bitwise AND in GLSL - C++

OK, this is probably an easy one for the pros out there. I want to use an enum in GLSL so I can do a bitwise AND check on it in an if statement, like in C++.
Pseudo C++ code:
enum PolyFlags
{
    Invisible   = 0x00000001,
    Masked      = 0x00000002,
    Translucent = 0x00000004,
    ...
};
...
if ( Flag & Masked)
    Alphathreshold = 0.5;
But I am already lost at the very start, because compilation already fails with:
'enum' : Reserved word
I read that enums in GLSL are supposed to work, as is the bitwise AND, but I can't find a working example.
So, is it actually working/supported, and if so, how? I have already tried different #version directives in the shader, but no luck so far.

The OpenGL Shading Language does not have enumeration types. However, enum is a reserved keyword, which is why you got that particular compiler error.
C enums are really just syntactic sugar for a value (C++ gives them some type-safety, with enum classes having much more). So you can emulate them in a number of ways. Perhaps the most traditional (and dangerous) is with #defines:
#define Invisible 0x00000001u
#define Masked 0x00000002u
#define Translucent 0x00000004u
A more reasonable way is to declare compile-time const qualified global variables. Any GLSL compiler worth using will optimize them away to nothingness, so they won't take up any more resources than the #define. And it won't have any of the drawbacks of the #define.
const uint Invisible = 0x00000001u;
const uint Masked = 0x00000002u;
const uint Translucent = 0x00000004u;
Obviously, you need to be using a version of GLSL that supports unsigned integers and bitwise operations (aka: GLSL 1.30+, or GLSL ES 3.00+).
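For the host side, here is a minimal C++ sketch (the uniform name "PolyFlags" and the helper function are my own illustration, and it assumes an OpenGL loader header is already included): the same flag values are mirrored on the CPU, combined into a single uint, and uploaded with glUniform1ui, so the shader can test bits with something like if ((PolyFlags & Masked) != 0u).
#include <cstdint>

constexpr std::uint32_t Invisible   = 0x00000001u;
constexpr std::uint32_t Masked      = 0x00000002u;
constexpr std::uint32_t Translucent = 0x00000004u;

void setPolyFlags(GLuint program, std::uint32_t flags)
{
    // Uniform uploads affect the currently bound program.
    glUseProgram(program);
    GLint location = glGetUniformLocation(program, "PolyFlags"); // hypothetical uniform name
    if (location != -1)
        glUniform1ui(location, flags); // uint uniforms need GL 3.0+ / GLSL 1.30+
}

// Example usage: setPolyFlags(program, Masked | Translucent);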

Related

Assigning a C++ struct to an HLSL variable

I've only just started working with DirectX, and I've run into this problem:
I'm trying to push/send/assign a C++ struct to an HLSL variable that has the same data layout as the struct on the C++ side.
In C++:
struct Light
{
    Light() {
        ZeroMemory(this, sizeof(Light));
    }
    D3DXVECTOR3 LightPos;
    float ID;
};
Light L1;
/.../
ID3D10EffectVariable* L1Var = NULL;
/.../
L1Var = Effect->GetVariableByName("L1")->AsVector();
/.../
L1Var->SetRawValue(&L1, 0, sizeof(Light));
HLSL code:
struct Light {
    float3 LightPos;
    float ID;
};
Light L1;
I'm trying to send the struct from C++ to 'L1' in HLSL, but I'm not sure the type of L1 in HLSL is correct.
The code runs, but I also get 0 for all of the parameters in L1... I don't know how to fix this; I've been googling for 5 hours with no result... please help.
Thanks for your help.
First, if you are new to DirectX programming, I'd suggest not investing in DirectX 10. You should use DirectX 11, as it's far better supported, has a lot more relevant utility code, and is supported on every platform that supports DirectX 10. Furthermore, you are using legacy D3DX math, so again you should definitely move to a more modern development environment. There are very few reasons to use the legacy DirectX SDK today. See this blog post and MSDN for the background here. You will find DirectX Tool Kit and its tutorials a good starting point. If you really want to stick with the Effect system, see Effects 11.
Keep in mind that HLSL Constant Buffers use packing and alignment in subtly different ways than standard C/C++ structures. You get more intuitive behavior if you stick with 4-vector structures where possible instead of using 3-vector versions. In theory your C/C++ and HLSL structures are a 'match' packing the data into a single 4-vector, but various compiler settings and packing rules might throw that off. See Packing Rules for Constant Variables. A good way to verify that is to use static_assert:
static_assert(sizeof(L1) == 16, "CB/struct mismatch");
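If you want the packing to be explicit, a minimal sketch (assuming DirectXMath's XMFLOAT3 instead of the legacy D3DXVECTOR3) looks like this; the float3 plus the trailing float fill exactly one 16-byte register on both the C++ and HLSL sides:
#include <DirectXMath.h>

struct Light
{
    DirectX::XMFLOAT3 LightPos; // 12 bytes
    float             ID;       // completes the 16-byte register
};

static_assert(sizeof(Light) == 16, "CB/struct mismatch");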
The problem is most likely your usage of the effect system. L1Var is probably a dummy variable due to a failed lookup, so your SetRawValue isn't going to do anything. From the snippet of HLSL you've provided, it's not clear to me that your L1 HLSL variable is even a constant buffer. Try some debug code:
auto tempVar = Effect->GetVariableByName("L1");
if ( tempVar->IsValid() )
{
    D3D10_EFFECT_VARIABLE_DESC desc = {};
    tempVar->GetDesc(&desc);
    OutputDebugStringA(desc.Name); // Set a breakpoint here and look at desc
    OutputDebugStringA("\n");
}
else
{
    OutputDebugStringA("L1 is not valid!\n");
}

Is it legal to reuse Bindings for several Shader Storage Blocks

Suppose that I have one shader storage buffer and want to have several views into it, e.g. like this:
layout(std430,binding=0) buffer FloatView { float floats[]; };
layout(std430,binding=0) buffer IntView { int ints[]; };
Is this legal GLSL?
opengl.org says no:
Two blocks cannot use the same index.
However, I could not find such a statement in the GL 4.5 Core Spec or GLSL 4.50 Spec (or the ARB_shader_storage_buffer_object extension description) and my NVIDIA Driver seems to compile such code without errors or warnings.
Does the OpenGL specification expressly forbid this? Apparently not. Or at least, if it does, I can't see where.
But that doesn't mean that it will work cross-platform. When dealing with OpenGL, it's always best to take the conservative path.
If you need to "cast" memory from one representation to another, you should just use separate binding points. It's safer.
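One way to read that advice on the host side is sketched below (names are illustrative, and it assumes an OpenGL loader header is already included): declare FloatView with binding = 0 and IntView with binding = 1, then bind the same buffer object to both indices, so each block is a differently-typed view of the same storage.
GLuint ssbo = 0;
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, 1024, nullptr, GL_DYNAMIC_COPY);

// Bind the same buffer object to two distinct SSBO binding points.
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo); // matches binding = 0 (FloatView)
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, ssbo); // matches binding = 1 (IntView)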
There is some official word on this now. I filed a bug on this issue, and they've read it and decided some things. Specifically, the conclusion was:
There are separate binding namespaces for: atomic counters, images, textures, uniform buffers, and SSBOs.
We don't want to allow aliasing on any of them except atomic counters, where aliasing with different offsets (e.g. sharing a binding) is allowed.
In short, don't do this. Hopefully, the GLSL specification will be clarified in this regard.
This was "fixed" in the revision 7 of GLSL 4.5:
It is a compile-time or link-time error to use the same binding number for more than one uniform block or for more than one buffer block.
I say "fixed" because you can still perform aliasing manually via glUniform/ShaderStorageBlockBinding. And the specification doesn't say how this will work exactly.

How to ensure correct struct-field alignment between C++ and OpenGL when passing indirect drawing commands for use by glDrawElementsIndirect?

The documentation for glDrawElementsIndirect, glDrawArraysIndirect, glMultiDrawElementsIndirect, etc. says things like this about the structure of the commands that must be given to them:
The parameters addressed by indirect are packed into a structure that takes the form (in C):
typedef struct {
    uint count;
    uint instanceCount;
    uint firstIndex;
    uint baseVertex;
    uint baseInstance;
} DrawElementsIndirectCommand;
When a struct representing a vertex is uploaded to OpenGL, it's not just sent there as a block of data--there are also calls like glVertexAttribFormat() that tell OpenGL where to find attribute data within the struct. But as far as I can tell from reading documentation and such, nothing like that happens with these indirect drawing commands. Instead, I gather, you just write your drawing-command struct in C++, like the above, and then send it over via glBufferData or the like.
The OpenGL headers I'm using declare types such as GLuint, so I guess I can be confident that the ints in my command struct will be the right size and have the right format. But what about the alignment of the fields and the size of the struct? It appears that I just have to trust OpenGL to expect exactly what I happen to send--and from what I read, that could in theory vary depending on what compiler I use. Does that mean that, technically, I just have to expect that I will get lucky and have my C++ compiler choose just the struct format that OpenGL and/or my graphics driver and/or my graphics hardware expects? Or is there some guarantee of success here that I'm not grasping?
(Mind you, I'm not truly worried about this. I'm using a perfectly ordinary compiler, and planning to target commonplace hardware, and so I expect that it'll probably "just work" in practice. I'm mainly only curious about what would be considered strictly correct here.)
It is a buffer object (DRAW_INDIRECT_BUFFER to be precise); it is expected to contain a contiguous array of that struct. The correct type is, as you mentioned, GLuint. This is always a 32-bit unsigned integer type. You may see it referred to as uint in the OpenGL specification or in extensions, but understand that in the C language bindings you are expected to add GL to any such type name.
You generally are not going to run into alignment issues on desktop platforms on this data structure since each field is a 32-bit scalar. The GPU can fetch those on any 4-byte boundary, which is what a compiler would align each of the fields in this structure to. If you threw a ubyte somewhere in there, then you would need to worry, but of course you would then be using the wrong data structure.
As such there is only one requirement on the GL side of things, which stipulates that the beginning of this struct has to begin on a word-aligned boundary. That means only addresses (offsets) that are multiples of 4 will work when calling glDrawElementsIndirect (...). Any other address will yield GL_INVALID_OPERATION.
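To make the host-side usage concrete, here is a small sketch (the submit helper is my own illustration, and it assumes an OpenGL loader header is already included): the struct mirrors the C definition quoted above, the static_asserts document the tightly-packed 32-bit layout discussed here, and the offset passed to glDrawElementsIndirect is a multiple of 4 as required.
#include <cstddef>

typedef struct {
    GLuint count;
    GLuint instanceCount;
    GLuint firstIndex;
    GLuint baseVertex;
    GLuint baseInstance;
} DrawElementsIndirectCommand;

static_assert(sizeof(DrawElementsIndirectCommand) == 20, "unexpected padding");
static_assert(offsetof(DrawElementsIndirectCommand, baseInstance) == 16, "unexpected padding");

void submit(GLuint indirectBuffer, const DrawElementsIndirectCommand& cmd)
{
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
    glBufferData(GL_DRAW_INDIRECT_BUFFER, sizeof(cmd), &cmd, GL_STATIC_DRAW);
    // A zero offset is word-aligned; an offset that is not a multiple of 4
    // would yield GL_INVALID_OPERATION.
    glDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, nullptr);
}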

union vs bit masking and bit shifting

What are the disadvantages of using unions for storing something like a series of bytes while being able to access them all at once or one by one?
Example : A Color can be represented in RGBA. So a color type may be defined as,
typedef unsigned int RGBAColor;
Then we can use bit "shifting and masking" to "retrieve or set" the red, green, blue, and alpha values of an RGBAColor object (just as it is done in Direct3D with macros such as D3DCOLOR_ARGB()).
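For reference, the shift-and-mask approach looks something like the sketch below (the helper names are my own, and the ARGB layout with alpha in the top byte, as in D3DCOLOR_ARGB, is an assumption):
typedef unsigned int RGBAColor;

constexpr RGBAColor MakeColor(unsigned a, unsigned r, unsigned g, unsigned b)
{
    return ((a & 0xFFu) << 24) | ((r & 0xFFu) << 16) | ((g & 0xFFu) << 8) | (b & 0xFFu);
}

constexpr unsigned Alpha(RGBAColor c) { return (c >> 24) & 0xFFu; }
constexpr unsigned Red(RGBAColor c)   { return (c >> 16) & 0xFFu; }
constexpr unsigned Green(RGBAColor c) { return (c >> 8) & 0xFFu; }
constexpr unsigned Blue(RGBAColor c)  { return c & 0xFFu; }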
But what if I used a union,
union RGBAColor
{
    unsigned int Color;
    struct RGBAColorComponents
    {
        unsigned char Red;
        unsigned char Green;
        unsigned char Blue;
        unsigned char Alpha;
    } Component;
};
Then I will not always need to do the shifting (<<) or masking (&) for reading or writing the color components. But is there a problem with this? (I suspect there is, because I haven't seen anyone use such a method.)
Can endianness be a problem? If we always use Component for accessing color components and use Color for accessing the whole thing (for copying, assigning, etc. as a whole), then endianness should not be a problem, right?
-- EDIT --
I found an old post about the same problem, so I guess this question is kind of a repost :P sorry for that. Here is the link: Is it a good practice to use unions in C++?
According to the answers there, it seems that using a union for the given example is OK in C++, because there is no change of data type involved; it's just two ways to access the same data. Please correct me if I am wrong. Thanks. :)
This usage of unions is illegal in C++, where a union comprises overlapping, but mutually exclusive objects. You are not allowed to write one member of a union, then read out another member.
It is legal in C where this is a recommended way of type punning.
This relates to the issue of (strict) aliasing, which is a difficulty faced by the compiler when trying to determine whether two objects with different types are distinct. The language standards disagree because the experts are still figuring out what guarantees can safely be provided without sacrificing performance. Personally, I avoid all of this. What would the int actually be used for? The safe way to translate is to copy the bytes, as by memcpy.
There is also the endianness issue, but whether that matters depends on what you want to do with the int.
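A minimal sketch of the memcpy route mentioned above (the helper names are illustrative); unlike reading the inactive member of a union in C++, this is well-defined in both languages:
#include <cstdint>
#include <cstring>

struct RGBAComponents { std::uint8_t Red, Green, Blue, Alpha; };

std::uint32_t PackColor(RGBAComponents c)
{
    std::uint32_t packed;
    std::memcpy(&packed, &c, sizeof packed); // byte order follows the host's endianness
    return packed;
}

RGBAComponents UnpackColor(std::uint32_t packed)
{
    RGBAComponents c;
    std::memcpy(&c, &packed, sizeof c);
    return c;
}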
I believe using the union solves any problems related to endianness, as most likely the RGBA order is defined in network order. Also, the fact that each component will be uint8_t or similar can help some compilers use sign/zero-extended loads, store the low 8 bits directly through an unaligned byte pointer, and even parallelize some byte operations (e.g. ARM has some packed 4x8-bit instructions).

What is the best way to subtype numeric parameters for OpenGL?

In the OpenGL specification there are certain parameters which take a set of values of the form GL_OBJECTENUMERATIONi, with i ranging from 0 to some number indicated by something like GL_MAX_OBJECT. (Lights being an 'object', as one example.) It seems obvious that the number indicating the upper range is meant to be obtained through the glGet function, providing some indirection.
However, according to a literal interpretation of the OpenGL specification, the "texture" parameter for glActiveTexture must be one of GL_TEXTUREi, where i ranges from 0 to (GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS - 1). That would mean the set of accepted constants must be GL_TEXTURE0 to GL_TEXTURE35660, because GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS is a constant of the value 35661.
Language-lawyering aside, this setup means that the subtype can be not only disjoint, but out of order as well, such that the following C-ish mapping would be valid:
#define GL_TEXTURE0 0x84C0
#define GL_TEXTURE1 0x84C1
#define GL_TEXTURE2 0x84C2
#define GL_TEXTURE3 0x84A0
#define GL_TEXTURE4 0x84A4
#define GL_TEXTURE5 0x84A5
#define GL_TEXTURE6 0x84A8
#define GL_TEXTURE7 0x84A2
First, is this actually an issue, or are the constants always laid out as if GL_OBJECTi = GL_OBJECT(i-1) + 1?
If that relationship holds true then there is the possibility of using Ada's subtype feature to avoid passing in invalid parameters...
Ideally, something like:
-- This is an old [and incorrect] declaration using constants.
-- It's just here for an example.
SubType Texture_Number is Enum Range
GL_TEXTURE0..Enum'Max(
GL_MAX_TEXTURE_COORDS - 1,
GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS - 1);
But, if the maximum is dynamically determined then we have to do some monkeying about:
With GL_Constants;
Generic
GL_MAX_TEXTURE : Integer;
-- ...and one of those for EACH maximum for the ranges.
Package Types is
Use GL_Constants;
SubType Texture_Number is Enum Range
GL_TEXTURE0..GL_MAX_TEXTURE;
End Types;
with an instantiation of Package GL_TYPES is new Types( GL_MAX_TEXTURE => glGet(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS) ); and then using this new GL_TYPES package... a little more work, and a little more cumbersome than straight-out subtyping.
Most of this comes from being utterly new to OpenGL and not fully knowing/understanding it; but it does raise interesting questions as to the best way to proceed in building a good, thick Ada binding.
means that the set of accepted constants must be GL_TEXTURE0 to GL_TEXTURE35660 because the constant is a constant of the value 35661.
No, it doesn't mean this. GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS is an implementation-dependent value that is to be queried at runtime using glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &out).
Regarding the rest: the OpenGL specification states that GL_TEXTUREi = GL_TEXTURE0 + i, and similarly for all other object types, with i < n, where n is some reasonable number.
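A short host-side sketch of that pattern (the loop body is illustrative, and it assumes an OpenGL loader header is already included): query the limit at runtime and derive each texture-unit enum as GL_TEXTURE0 + i.
GLint maxUnits = 0;
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxUnits);

for (GLint i = 0; i < maxUnits; ++i)
{
    glActiveTexture(GL_TEXTURE0 + i); // valid for 0 <= i < maxUnits
    // ... bind whatever texture belongs to unit i here ...
}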
This is one of those situations where I don't think getting extra-sexy with the types buys you a whole lot.
If you were to just make a special integer type for GL_TEXTURE (type GL_TEXTURE is range 16#84C0# .. 16#8B4C#;), and use that type for all parameters looking for GL textures, the compiler would prevent the user from doing math between those and other integer objects. That would probably be plenty. It is certainly way better than what the poor C/C++ coders are stuck with!
Then again, I've never been a proponent of super-thick Ada bindings. Ada bindings should be used to make the types more Ada-like, and to convert C error codes into exceptions. If there are other ways to save the user a bit of work, go ahead and do it. However, do not abstract away any of the power of the API!
There were multiple questions in the comments about my choice of using a separate numeric type rather than an Integer subtype.
It is in fact a common Ada noob mistake to start making yourself custom numeric types when integer subtypes will do, and then getting annoyed at all the type conversions you have to do. The classic example is someone making a type for velocity, then another type for distance, then another for force, and then finding they have to do a type conversion on every single damn math operation.
However, there are times when custom numeric types are called for. In particular, you want to use a custom numeric type whenever objects of that type should live in a separate type universe from normal integers. The most common occurrence of this happens in API bindings, where the number in question is actually a C-ish designation for some resource. That is the exact situation we have here. The only math you will ever want to do on GL_Textures is comparison with the type's bounds, and simple addition and subtraction by a literal amount. (Most likely GL_Texture'Succ will be sufficient.)
As a huge bonus, making it a custom type will prevent the common error of plugging a GL_Texture value into the wrong parameter in the API call. C API calls do love their ints...
In fact, if it were reasonable to sit and type them all in, I suspect you'd be tempted to just make the thing an enumeration. That'd be even less compatible with Integer without conversions, but nobody here would think twice about it.
OK, first rule you need to know about OpenGL: whenever you see something that says, "goes from X to Y", and one of those values is a GL_THINGY, they are not talking about the numeric value of GL_THINGY. They are talking about an implementation-dependent value that you query with GL_THINGY. This is typically an integer, so you use some form of glGetIntegerv to query it.
Next:
this setup means that the subtype can be not only disjoint, but out of order as well, such that the following C-ish mapping would be valid:
No, it wouldn't.
Every actual enumerator in OpenGL is assigned a specific value by the ARB. And the ARB-assigned values for the named GL_TEXTUREi enumerators are:
#define GL_TEXTURE0 0x84C0
#define GL_TEXTURE1 0x84C1
#define GL_TEXTURE2 0x84C2
#define GL_TEXTURE3 0x84C3
#define GL_TEXTURE4 0x84C4
#define GL_TEXTURE5 0x84C5
#define GL_TEXTURE6 0x84C6
#define GL_TEXTURE7 0x84C7
#define GL_TEXTURE8 0x84C8
Notice how they are all in a sequential ordering.
As for the rest, let me quote you from the OpenGL 4.3 specification on glActiveTexture:
An INVALID_ENUM error is generated if an invalid texture is specified. texture is a symbolic constant of the form TEXTUREi, indicating that texture unit i is to be modified. The constants obey TEXTUREi = TEXTURE0 + i, where i is in the range 0 to k - 1, and k is the value of MAX_COMBINED_TEXTURE_IMAGE_UNITS.
If you're creating a binding in some language, the general idea is this: don't strongly type certain values. This one in particular. Just take whatever the user gives you and pass it along. If the user gets an error, they get an error.
Better yet, expose a more reasonable version of glActiveTexture that takes an integer instead of an enumerator and do the addition yourself.
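A one-line sketch of that friendlier wrapper (the name is my own):
inline void ActiveTextureUnit(GLuint unit)
{
    // Take a plain unit index and do the GL_TEXTURE0 + i addition here.
    glActiveTexture(GL_TEXTURE0 + unit);
}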