Ensuring OpenGL-compatible types in C++

OpenGL buffer objects support various data types of well-defined width (GL_FLOAT is 32-bit, GL_HALF_FLOAT is 16-bit, GL_INT is 32-bit, ...).
How would one go about ensuring cross platform and futureproof types for OpenGL?
For example, feeding float data from a C++ array to a buffer object and declaring its type as GL_FLOAT will not work on platforms where float isn't 32 bits.

While doing some research on this, I noticed a subtle but interesting change in how these types are defined in the GL specs. The change happened between OpenGL 4.1 and 4.2.
Up to OpenGL 4.1, the table that lists the data types (Table 2.2 in the recent spec documents) has the header Minimum Bit Width for the size column, and the table caption says (emphasis added by me):
GL types are not C types. Thus, for example, GL type int is referred to as GLint outside this document, and is not necessarily equivalent to the C type int. An implementation may use more bits than the number indicated in the table to represent a GL type. Correct interpretation of integer values outside the minimum range is not required, however.
Starting with the OpenGL 4.2 spec, the table header changes to Bit Width, and the table caption to:
GL types are not C types. Thus, for example, GL type int is referred to as GLint outside this document, and is not necessarily equivalent to the C type int. An implementation must use exactly the number of bits indicated in the table to represent a GL type.
This influenced the answer to the question. If you go with the latest definition, you can use standard sized type definitions instead of the GL types in your code, and safely assume that they match. For example, you can use int32_t from <cstdint> instead of GLint.
Using the GL types is still the most straightforward solution. Depending on your code architecture and preferences, it might be undesirable, though. If you like to divide your software into components, and want to have OpenGL rendering isolated in a single component while providing a certain level of abstraction, you probably don't want to use GL types all over your code. Yet, once the data reaches the rendering code, it has to match the corresponding GL types.
As a typical example, say you have computational code that produces data you want to render. You may not want to have GLfloat types all over your computational code, because it can be used independently of OpenGL. Yet, once you're ready to display the result of the computation and want to drop the data into a VBO for OpenGL rendering, the type has to match GLfloat.
There are various approaches you can use. One is what I mentioned above, using sized types from standard C++ header files in your non-rendering code. Similarly, you can define your own typedefs that match the types used by OpenGL. Or, less desirable for performance reasons, you can convert the data where necessary, possibly based on comparing the sizeof() values between the incoming types and the GL types.
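As a rough sketch of the first approach (assuming a C++11 compiler, and that a GL header such as <GL/gl.h> or your loader's header is available), you can verify at compile time that the sized standard types really match the GL types, and give your non-rendering code its own aliases. The names compute_float and compute_int are just illustrative placeholders:

#include <cstdint>
#include <type_traits>
#include <GL/gl.h>   // or the header provided by your loader/platform

// Compile-time checks that the sized C++ types used in non-rendering code
// match the GL types the rendering component hands to OpenGL.
static_assert(sizeof(GLfloat) == 4, "GLfloat is expected to be 32 bits wide");
static_assert(sizeof(GLint) == sizeof(std::int32_t),
              "GLint and int32_t are expected to have the same width");
static_assert(std::is_same<GLfloat, float>::value,
              "GLfloat is expected to alias float on this platform");

// Project-local aliases so computational code never needs the GL headers.
using compute_float = float;         // interchangeable with GLfloat per the checks above
using compute_int   = std::int32_t;  // interchangeable with GLint per the checks above

If any of these assertions ever fails on a new platform, the build breaks immediately instead of producing silently mismatched buffer data.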

Related

Understanding the Datatypes in OpenGL

The way OpenGL data types are used there confuses me a bit. There is, for example, the unsigned integer "GLuint", and it is used for shader objects as well as various buffer objects. What is this GLuint and what are these data types about?
They are, in general, just aliases for built-in types. For example, GLuint is normally a regular unsigned int. The reason they exist is that the graphics driver expects integers of a specific width (for example, exactly 32 or 64 bits), but data types like int are not necessarily the same size across compilers and architectures.
Thus OpenGL provides its own type aliases to ensure that handles are always exactly the size it needs to function properly.
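To make that concrete, the aliases typically look something like the sketch below. The real definitions come from the platform's GL headers (often via khrplatform.h) and differ in detail per platform; this is only an illustration of the principle, not the actual header contents:

// Illustration only: the real typedefs live in the platform's GL headers.
// Each GL name is mapped to whatever native type has the required width
// on that particular platform/compiler combination.
typedef unsigned int   GLuint;    // 32-bit unsigned integer here
typedef int            GLint;     // 32-bit signed integer
typedef unsigned short GLushort;  // 16-bit unsigned integer
typedef float          GLfloat;   // 32-bit IEEE float

// Object handles (shaders, buffers, textures, ...) are therefore always the
// same fixed-width unsigned type, independent of the compiler:
GLuint shader = 0;   // e.g. later filled by glCreateShader(...)
GLuint buffer = 0;   // e.g. later filled by glGenBuffers(...)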

Is the number of color attachments bounded by the API?

The OpenGL specification requires that a framebuffer supports at least 8 color attachments. Now, OpenGL uses compile-time constants (at least on my system) for things like GL_COLOR_ATTACHMENTi, and GL_DEPTH_ATTACHMENT follows 32 units after GL_COLOR_ATTACHMENT0. Doesn't this mean that regardless of how beefy the hardware is, it will never be possible to use more than 32 color attachments? To clarify, this compiles perfectly with GLEW on Ubuntu 16.04:
static_assert(GL_COLOR_ATTACHMENT0 + 32 == GL_DEPTH_ATTACHMENT, "");
and since it is a static_assert, this would be true for any hardware configuration (unless the driver installer modifies the header files, which would result in non-portable binaries). Wouldn't separate functions for different attachment classes have been better, as that removes the possibility of colliding constants?
It is important to note the difference in spec language. glActiveTexture says this about its parameter:
An INVALID_ENUM error is generated if an invalid texture is specified.
texture is a symbolic constant of the form TEXTUREi, indicating that texture unit i is to be modified. Each TEXTUREi adheres to TEXTUREi = TEXTURE0 + i, where i is in the range zero to k−1, and k is the value of MAX_COMBINED_TEXTURE_IMAGE_UNITS
This text explicitly allows you to compute the enum value, explaining exactly how to do so and what the limits are.
Compare this to what it says about glFramebufferTexture:
An INVALID_ENUM error is generated if attachment is not one of the attachments in table 9.2, and attachment is not COLOR_ATTACHMENTm where m is greater than or equal to the value of MAX_COLOR_ATTACHMENTS.
It looks similar. But note that it doesn't have the language about the value of those enumerators. There's nothing in that description about COLOR_ATTACHMENTm = COLOR_ATTACHMENT0 + m.
As such, it is illegal to use any value other than those specific enums. Now yes, the spec does guarantee elsewhere that COLOR_ATTACHMENTm = COLOR_ATTACHMENT0 + m. But because the guarantee isn't in that section, that section explicitly prohibits the use of any value other than an actual enumerator. Regardless of how you compute it, the result must be an actual enumerator.
So to answer your question, at present, there are only 32 color attachment enumerators. Therefore, MAX_COLOR_ATTACHMENTS has an effective maximum value of 32.
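To illustrate the difference in code, here is a sketch (assuming a current GL 3.2+ context with a framebuffer bound; someColorTexture is a placeholder texture name): texture units may be computed from GL_TEXTURE0 up to the queried limit, while color attachments should stay within the defined enumerators and the limit the implementation reports.

GLint maxTexUnits = 0, maxColorAttach = 0;
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxTexUnits);
glGetIntegerv(GL_MAX_COLOR_ATTACHMENTS, &maxColorAttach);

// Explicitly permitted by the spec: TEXTUREi = TEXTURE0 + i for i < maxTexUnits.
for (GLint i = 0; i < maxTexUnits; ++i)
    glActiveTexture(GL_TEXTURE0 + i);

// Color attachments: keep to the defined enumerators (COLOR_ATTACHMENT0..31)
// and to the limit the implementation reports.
GLint usable = maxColorAttach < 32 ? maxColorAttach : 32;
for (GLint i = 0; i < usable; ++i)
    glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                         someColorTexture, 0);  // someColorTexture: placeholder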
The OpenGL 4.5 spec states in Section 9.2:
... by the framebuffer attachment points named COLOR_ATTACHMENT0 through COLOR_ATTACHMENTn. Each COLOR_ATTACHMENTi adheres to COLOR_ATTACHMENTi = COLOR_ATTACHMENT0 + i
and as a footnote
The header files define tokens COLOR_ATTACHMENTi for i in the range [0, 31]. Most implementations support fewer than 32 color attachments, and it is an INVALID_OPERATION error to pass an unsupported attachment name to a command accepting color attachment names.
My interpretation of this is that it is (as long as the hardware supports it) perfectly fine to use COLOR_ATTACHMENT0 + 32 and so on to address more than 32 attachment points. So there is no real limit on the number of supported color attachments; it is just that the constants beyond index 31 are not defined directly. Why it was designed that way can only be answered by people from the Khronos Group.

Is it legal to reuse Bindings for several Shader Storage Blocks

Suppose that I have one shader storage buffer and want to have several views into it, e.g. like this:
layout(std430,binding=0) buffer FloatView { float floats[]; };
layout(std430,binding=0) buffer IntView { int ints[]; };
Is this legal GLSL?
opengl.org says no:
Two blocks cannot use the same index.
However, I could not find such a statement in the GL 4.5 Core Spec or the GLSL 4.50 Spec (or the ARB_shader_storage_buffer_object extension description), and my NVIDIA driver seems to compile such code without errors or warnings.
Does the OpenGL specification expressly forbid this? Apparently not. Or at least, if it does, I can't see where.
But that doesn't mean that it will work cross-platform. When dealing with OpenGL, it's always best to take the conservative path.
If you need to "cast" memory from one representation to another, you should just use separate binding points. It's safer.
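A sketch of that conservative route (ssbo and bufferSize are placeholder names): give the two GLSL blocks distinct binding numbers, for example binding = 0 and binding = 1, and attach the same buffer object to both indices, so the two views still refer to the same memory; the usual memory-model rules about mixing reads and writes through aliased storage still apply.

// C++ side: one buffer object backing both binding points.
GLuint ssbo = 0;                                  // placeholder buffer name
GLsizeiptr bufferSize = 1024 * sizeof(float);     // placeholder size
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, bufferSize, nullptr, GL_DYNAMIC_DRAW);

// GLSL side would use distinct bindings:
//   layout(std430, binding = 0) buffer FloatView { float floats[]; };
//   layout(std430, binding = 1) buffer IntView   { int   ints[];   };
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);   // backs FloatView
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, ssbo);   // backs IntView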
There is some official word on this now. I filed a bug on this issue, and they've read it and decided some things. Specifically, the conclusion was:
There are separate binding namespaces for: atomic counters, images, textures, uniform buffers, and SSBOs.
We don't want to allow aliasing on any of them except atomic counters, where aliasing with different offsets (e.g. sharing a binding) is allowed.
In short, don't do this. Hopefully, the GLSL specification will be clarified in this regard.
This was "fixed" in revision 7 of the GLSL 4.50 specification:
It is a compile-time or link-time error to use the same binding number for more than one uniform block or for more than one buffer block.
I say "fixed" because you can still set up such aliasing manually via glUniformBlockBinding/glShaderStorageBlockBinding, and the specification doesn't say how that would work exactly.
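For reference, the manual rebinding mentioned there would look roughly like the following sketch, where prog is a placeholder for a linked program containing both blocks; since the specification does not define how the resulting aliasing behaves, this is illustrative only and not something to rely on.

// Look up the block indices by name, then force both onto binding point 0.
GLuint floatBlock = glGetProgramResourceIndex(prog, GL_SHADER_STORAGE_BLOCK, "FloatView");
GLuint intBlock   = glGetProgramResourceIndex(prog, GL_SHADER_STORAGE_BLOCK, "IntView");

glShaderStorageBlockBinding(prog, floatBlock, 0);
glShaderStorageBlockBinding(prog, intBlock,   0);   // same binding: behavior unspecified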

How to ensure correct struct-field alignment between C++ and OpenGL when passing indirect drawing commands for use by glDrawElementsIndirect?

The documentation for glDrawElementsIndirect, glDrawArraysIndirect, glMultiDrawElementsIndirect, etc. says things like this about the structure of the commands that must be given to them:
The parameters addressed by indirect are packed into a structure that takes the form (in C):
typedef struct {
    uint count;
    uint instanceCount;
    uint firstIndex;
    uint baseVertex;
    uint baseInstance;
} DrawElementsIndirectCommand;
When a struct representing a vertex is uploaded to OpenGL, it's not just sent there as a block of data--there are also calls like glVertexAttribFormat() that tell OpenGL where to find attribute data within the struct. But as far as I can tell from reading documentation and such, nothing like that happens with these indirect drawing commands. Instead, I gather, you just write your drawing-command struct in C++, like the above, and then send it over via glBufferData or the like.
The OpenGL headers I'm using declare types such as GLuint, so I guess I can be confident that the ints in my command struct will be the right size and have the right format. But what about the alignment of the fields and the size of the struct? It appears that I just have to trust OpenGL to expect exactly what I happen to send--and from what I read, that could in theory vary depending on what compiler I use. Does that mean that, technically, I just have to expect that I will get lucky and have my C++ compiler choose just the struct format that OpenGL and/or my graphics driver and/or my graphics hardware expects? Or is there some guarantee of success here that I'm not grasping?
(Mind you, I'm not truly worried about this. I'm using a perfectly ordinary compiler, and planning to target commonplace hardware, and so I expect that it'll probably "just work" in practice. I'm mainly only curious about what would be considered strictly correct here.)
It is a buffer object (DRAW_INDIRECT_BUFFER to be precise); it is expected to contain a contiguous array of that struct. The correct type is, as you mentioned, GLuint. This is always a 32-bit unsigned integer type. You may see it referred to as uint in the OpenGL specification or in extensions, but understand that in the C language bindings you are expected to add GL to any such type name.
You generally are not going to run into alignment issues on desktop platforms on this data structure since each field is a 32-bit scalar. The GPU can fetch those on any 4-byte boundary, which is what a compiler would align each of the fields in this structure to. If you threw a ubyte somewhere in there, then you would need to worry, but of course you would then be using the wrong data structure.
As such there is only one requirement on the GL side of things, which stipulates that the beginning of this struct has to begin on a word-aligned boundary. That means only addresses (offsets) that are multiples of 4 will work when calling glDrawElementsIndirect (...). Any other address will yield GL_INVALID_OPERATION.
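If you want to make the "trust the compiler" part checkable, one option is a sketch like the following (assuming a GL header has been included): spell the struct with GLuint and assert at compile time that the compiler produced the tightly packed 20-byte layout the indirect buffer expects.

#include <cstddef>   // offsetof

struct DrawElementsIndirectCommand {
    GLuint count;
    GLuint instanceCount;
    GLuint firstIndex;
    GLuint baseVertex;
    GLuint baseInstance;
};

// Five 32-bit fields, no padding: exactly 20 bytes, baseInstance at offset 16.
static_assert(sizeof(DrawElementsIndirectCommand) == 5 * sizeof(GLuint),
              "unexpected padding in DrawElementsIndirectCommand");
static_assert(offsetof(DrawElementsIndirectCommand, baseInstance) == 4 * sizeof(GLuint),
              "unexpected field offsets in DrawElementsIndirectCommand");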

glBufferData second arg is GLsizeiptr not GLsizei, why?

Basically that's it: why does glBufferData take a pointer type instead of an int? This argument is supposed to be the size of the buffer object, so why not GLsizei?
OpenGL documentation for glBufferData: https://www.opengl.org/sdk/docs/man/html/glBufferData.xhtml
When vertex buffer objects were introduced via the OpenGL extension mechanism, a new type GLsizeiptrARB was created and the following rationale was provided:
What type should <offset> and <size> arguments use?
RESOLVED: We define new types that will work well on 64-bit
systems, analogous to C's "intptr_t". The new type "GLintptrARB"
should be used in place of GLint whenever it is expected that
values might exceed 2 billion. The new type "GLsizeiptrARB"
should be used in place of GLsizei whenever it is expected
that counts might exceed 2 billion. Both types are defined as
signed integers large enough to contain any pointer value. As a
result, they naturally scale to larger numbers of bits on systems
with 64-bit or even larger pointers.
The offsets introduced in this extension are typed GLintptrARB,
consistent with other GL parameters that must be non-negative,
but are arithmetic in nature (not uint), and are not sizes; for
example, the xoffset argument to TexSubImage*D is of type GLint.
Buffer sizes are typed GLsizeiptrARB.
The idea of making these types unsigned was considered, but was
ultimately rejected on the grounds that supporting buffers larger
than 2 GB was not deemed important on 32-bit systems.
When this extension was accepted into core OpenGL, the ARB-suffixed type GLsizeiptrARB was given the standardized name GLsizeiptr, which is what you see in the function signature today.
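A small usage sketch (vbo and the vertex data are placeholders): because GLsizeiptr is a signed integer wide enough to hold any pointer-sized value rather than an actual pointer, byte counts from sizeof arithmetic pass through without narrowing, even for buffers larger than 2 GB on 64-bit systems.

#include <vector>

std::vector<float> vertices(1000000, 0.0f);   // placeholder vertex data

GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);

// GLsizeiptr is a pointer-sized signed integer, not a pointer: it only needs
// to be able to represent very large byte counts.
GLsizeiptr byteCount = static_cast<GLsizeiptr>(vertices.size() * sizeof(float));
glBufferData(GL_ARRAY_BUFFER, byteCount, vertices.data(), GL_STATIC_DRAW);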