The way OpenGL data types are used there confuses me a bit. There is, for example, the unsigned integer "GLuint", and it is used for shader objects as well as various different buffer objects. What is this GLuint, and what are these data types about?
They are, in general, just aliases for other types. For example, GLuint is normally a regular unsigned int. They exist because the graphics driver expects integers of a specific size, e.g. a uint64_t, while data types like int do not necessarily have the same width across compilers and architectures.
Thus OpenGL provides its own type aliases to ensure that handles are always exactly the size it needs to function properly.
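As a rough sketch of what such aliases can look like (illustrative only; the exact definitions vary between platforms and GL headers, and the real ones come from gl.h / khrplatform.h):
#include <cstddef>   // for std::ptrdiff_t

// Illustrative only -- real GL headers pick whatever underlying type
// has the width the specification requires on that platform.
typedef unsigned int   GLuint;      // unsigned 32-bit handle type
typedef int            GLint;       // signed 32-bit integer
typedef float          GLfloat;     // 32-bit floating point value
typedef std::ptrdiff_t GLsizeiptr;  // pointer-sized signed buffer size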
As far as I could find, the width of the bool type is implementation-defined. But are there any fixed-width boolean types, or should I stick to, e.g., a uint8_t to represent a fixed-width bool?
[EDIT]
I made a Python script that auto-generates a C++ class which can hold the variables I want to be able to send between a microcontroller and my computer. It also keeps two arrays, one holding a pointer to each of these variables and one holding the sizeof each of them, which gives me the information I need to easily serialize and deserialize them. For this to work, however, the sizeof, endianness, etc. of the variable types have to be the same on both sides, since I'm using the same generated code on both ends.
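A stripped-down sketch of the idea (the variable names here are made up for illustration, not taken from the actual generated code):
#include <cstddef>
#include <cstdint>
#include <cstring>

// Illustrative stand-in for the auto-generated class.
struct SharedVariables {
    std::int32_t motorSpeed  = 0;     // hypothetical shared variable
    float        temperature = 0.0f;  // hypothetical shared variable

    // Generated bookkeeping: a pointer to and the size of each variable.
    void*        ptrs[2]  = { &motorSpeed, &temperature };
    std::size_t  sizes[2] = { sizeof(motorSpeed), sizeof(temperature) };

    // Copy each variable's raw bytes into the outgoing buffer. This only
    // round-trips correctly if both sides agree on size and endianness.
    std::size_t serialize(std::uint8_t* out) const {
        std::size_t offset = 0;
        for (std::size_t i = 0; i < 2; ++i) {
            std::memcpy(out + offset, ptrs[i], sizes[i]);
            offset += sizes[i];
        }
        return offset;
    }

    // Copy raw bytes from the incoming buffer back into the variables.
    std::size_t deserialize(const std::uint8_t* in) {
        std::size_t offset = 0;
        for (std::size_t i = 0; i < 2; ++i) {
            std::memcpy(ptrs[i], in + offset, sizes[i]);
            offset += sizes[i];
        }
        return offset;
    }
};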
I don't know if this will be a problem yet, but I don't expect it to be. I have already worked with this (32-bit ARM) chip before and haven't had problems sending integer and float types in the past. However, it will be a few days until I'm back and can try booleans out on the chip. This might become a bigger issue later, since this code might be reused on other chips.
So my question is: is there a fixed-width bool type defined in the standard libraries, or should I just use a uint8_t to represent the boolean?
There is not. Just use uint8_t if you need to be sure of the size. Any integer type can easily be treated as boolean in C-related languages. See https://stackoverflow.com/a/4897859/1105015 for a lengthy discussion of how bool's size is not guaranteed by the standard to be any specific value.
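For example, a minimal sketch of converting at the boundary (the names are made up for illustration):
#include <cstdint>

// On the wire: a fixed-width 8-bit flag instead of bool, whose
// size the C++ standard does not pin down.
std::uint8_t motorEnabledWire = 0;  // hypothetical protocol field

void setMotorEnabled(bool enabled) {
    motorEnabledWire = enabled ? 1 : 0;  // explicit bool -> wire conversion
}

bool isMotorEnabled() {
    return motorEnabledWire != 0;        // any non-zero value reads as true
}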
Basically that's it: why does glBufferData take a pointer-sized type for its size argument instead of a plain integer? This argument is supposed to be the size of the buffer object, so why not GLsizei?
OpenGL doc on glBufferData: https://www.opengl.org/sdk/docs/man/html/glBufferData.xhtml
When vertex buffer objects were introduced via the OpenGL extension mechanism, a new type GLsizeiptrARB was created and the following rationale was provided:
What type should <offset> and <size> arguments use?
RESOLVED: We define new types that will work well on 64-bit
systems, analogous to C's "intptr_t". The new type "GLintptrARB"
should be used in place of GLint whenever it is expected that
values might exceed 2 billion. The new type "GLsizeiptrARB"
should be used in place of GLsizei whenever it is expected
that counts might exceed 2 billion. Both types are defined as
signed integers large enough to contain any pointer value. As a
result, they naturally scale to larger numbers of bits on systems
with 64-bit or even larger pointers.
The offsets introduced in this extension are typed GLintptrARB,
consistent with other GL parameters that must be non-negative,
but are arithmetic in nature (not uint), and are not sizes; for
example, the xoffset argument to TexSubImage*D is of type GLint.
Buffer sizes are typed GLsizeiptrARB.
The idea of making these types unsigned was considered, but was
ultimately rejected on the grounds that supporting buffers larger
than 2 GB was not deemed important on 32-bit systems.
When this extension was accepted into core OpenGL, the extension type GLsizeiptrARB was given the standardized name GLsizeiptr, which is what you see in the function signature today.
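In practice you rarely notice the type at the call site, because sizeof already yields a value of the right magnitude that converts implicitly to GLsizeiptr. A typical upload might look like this (a sketch; it assumes an OpenGL context and a loader header such as glad or GLEW are already set up):
// Sketch: #include your GL loader header (e.g. glad/glad.h) before this,
// and call it only once a GL context is current.
GLuint createTriangleVbo() {
    GLfloat vertices[] = {
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
         0.0f,  0.5f, 0.0f,
    };

    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    // The size argument is a GLsizeiptr: a signed integer wide enough to
    // hold any pointer value, so buffers larger than 2 GB work on 64-bit systems.
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    return vbo;
}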
OpenGL buffer objects support various data types of well-defined width (GL_FLOAT is 32 bits, GL_HALF_FLOAT is 16 bits, GL_INT is 32 bits, ...).
How would one go about ensuring cross-platform and future-proof types for OpenGL?
For example, feeding float data from a C++ array to a buffer object and declaring its type as GL_FLOAT will not work on platforms where float isn't 32 bits.
While doing some research on this, I noticed a subtle but interesting change in how these types are defined in the GL specs. The change happened between OpenGL 4.1 and 4.2.
Up to OpenGL 4.1, the table that lists the data types (Table 2.2 in the recent spec documents) has the header Minimum Bit Width for the size column, and the table caption says (emphasis added by me):
GL types are not C types. Thus, for example, GL type int is referred to as GLint outside this document, and is not necessarily equivalent to the C type int. An implementation may use more bits than the number indicated in the table to represent a GL type. Correct interpretation of integer values outside the minimum range is not required, however.
Starting with the OpenGL 4.2 spec, the table header changes to Bit Width, and the table caption to:
GL types are not C types. Thus, for example, GL type int is referred to as GLint outside this document, and is not necessarily equivalent to the C type int. An implementation must use exactly the number of bits indicated in the table to represent a GL type.
This change influences the answer to the question. If you go with the latest definition, you can use standard sized type definitions instead of the GL types in your code, and safely assume that they match. For example, you can use int32_t from <cstdint> instead of GLint.
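For example (a sketch, assuming your GL header is already included), the rendering component can state that assumption once and let the compiler enforce it:
#include <cstdint>
// Sketch: the GL header providing GLint/GLfloat is assumed to be included.

// Non-rendering code uses the standard sized types...
std::int32_t vertexCount = 0;

// ...and the boundary to the GL code verifies that they really line up.
static_assert(sizeof(std::int32_t) == sizeof(GLint),
              "GLint is expected to be exactly 32 bits here");
static_assert(sizeof(float) == sizeof(GLfloat),
              "GLfloat is expected to match float here");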
Using the GL types is still the most straightforward solution. Depending on your code architecture and preferences, it might be undesirable, though. If you like to divide your software into components, and want to have OpenGL rendering isolated in a single component while providing a certain level of abstraction, you probably don't want to use GL types all over your code. Yet, once the data reaches the rendering code, it has to match the corresponding GL types.
As a typical example, say you have computational code that produces data you want to render. You may not want to have GLfloat types all over your computational code, because it can be used independently of OpenGL. Yet, once you're ready to display the result of the computation and want to drop the data into a VBO for OpenGL rendering, the type has to be the same as GLfloat.
There are various approaches you can use. One is what I mentioned above, using sized types from standard C++ header files in your non-rendering code. Similarly, you can define your own typedefs that match the types used by OpenGL. Or, less desirable for performance reasons, you can convert the data where necessary, possibly based on comparing the sizeof() values between the incoming types and the GL types.
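As a sketch of the conversion approach (the function name is made up; only the boundary conversion matters here):
#include <cstddef>
#include <vector>
// Sketch: the GL header providing GLfloat is assumed to be included.

// Computational code can work in double precision, independent of OpenGL.
// At the rendering boundary, convert once into the exact type GL expects.
std::vector<GLfloat> toVertexData(const std::vector<double>& src) {
    std::vector<GLfloat> dst(src.size());
    for (std::size_t i = 0; i < src.size(); ++i)
        dst[i] = static_cast<GLfloat>(src[i]);  // explicit narrowing, done once
    return dst;
}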
I have a short question. Why does OpenGL come with its own data types for standard types like int, unsigned int, char, and so on? And do I have to use them instead of the built-in C++ data types?
For example, the OpenGL equivalent to unsigned int is GLuint, and for a C string there is GLchar* instead of char*.
For example the OpenGL equivalent to unsigned int is GLuint
No it isn't, and that's exactly why you should use OpenGL's data types when interfacing with OpenGL.
GLuint is not "equivalent" to unsigned int. GLuint is required to be 32 bits in size. It is always 32 bits in size. unsigned int might be 32 bits in size. It might be 64 bits. You don't know, and C isn't going to tell you (outside of sizeof).
These datatypes will be defined for each platform, and they may be defined differently for different platforms. You use them because, even if they are defined differently, they will always come out to the same sizes. The sizes that OpenGL APIs expect and require.
I'm not an expert on OpenGL, but usually frameworks/platforms such as OpenGL, Qt, etc. define their own data types so that the meaning and capacity of the underlying type remain the same across different OSes. Usually this behavior is obtained using C/C++ preprocessor macros, but as far as GLuint is concerned, it seems to be just a typedef in gl.h:
typedef unsigned int GLuint;
So the answer is yes: you should use the framework's data types to ensure good portability of your code within that framework across OSes.
So, I am using OpenGL, which typedefs unsigned int to GLuint.
For some reason it feels wrong to sprinkle my program with GLuint instead of the more generic unsigned int or uint32_t.
Any thoughts on negative/positive aspects of ignoring the typedefs?
The typedefs are there to make your code more portable. If you ever wanted to move to a platform where GLuint has a different underlying type (for whatever reason), it would be wise to use the typedef.
There is always the chance that your code gets ported to a platform where GLuint != unsigned int. If you are going to ignore the typedefs, then at least add some compile-time checks that result in a compilation error if they are different from what is expected.
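A sketch of such a check (assuming the GL header is included); it fails to compile on any platform where the assumption breaks:
#include <cstdint>
#include <type_traits>
// Sketch: the GL header providing GLuint is assumed to be included.

static_assert(sizeof(GLuint) == sizeof(std::uint32_t),
              "This code assumes GLuint is exactly 32 bits");
static_assert(std::is_unsigned<GLuint>::value,
              "This code assumes GLuint is an unsigned type");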
In general, see the above answers by K-ballo and Chad La Guardia; that's the intent behind such typedefs. That, and in some cases to hide the actual data type in case the API changes in a future revision (not likely to happen with OpenGL, but I've seen it happen). If the data type does change, this requires a recompilation, but no code changes.
Still, one has to say that library developers often overdo this particular aspect of portability to the point of silliness.
In this particular case, the OpenGL specification is very clear about what a GLuint is (chapter 2.4). It is an unsigned integer of at least 32 bits in length. They don't leave much room for interpretation or change.
As such, there is no chance it could ever be anything other than a uint32_t (as that is the very definition of uint32_t), and there is no good reason why you couldn't use uint32_t in its stead if you prefer to do so (other than that using GLuint makes it explicit that a variable is meant to be used with OpenGL, but meh).
It might in principle still be something different from an unsigned int, of course, since not much is said about the precise size of an int (other than sizeof(long) >= sizeof(int) >= sizeof(short)).