I'm querying a GL_TYPE in OpenGL and it's reporting back the hexadecimal value as an integer, as it should.
For example: 0x1406 is #define'd as GL_FLOAT but is being given to me from OpenGL in integer form as 5126.
Unfortunately, OpenGL doesn't just return the type, and it also doesn't just accept the integer (read: hex) value back; it apparently needs it to be prefixed with 0x before being used.
I'm trying to save myself a switch/case and instead cast/convert on the fly but I don't see another way. Do I have any other options? No boost please.
It's unclear what sort of "conversion" you have in mind: 0x1406 equals 5126. They're just different ways of writing the same number in source code, and the compiler translates them both into the binary form that's used at runtime.
You should be able to just use the == operator to compare the value you get back from glGetProgramResourceiv against a constant like GL_FLOAT, regardless of whether that constant is defined as 0x1406 or 5126 in the source code.
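For instance, a minimal sketch along those lines, assuming the type is being queried through glGetProgramResourceiv (the program and index variables are placeholders):

GLenum props[] = { GL_TYPE };
GLint type = 0;
glGetProgramResourceiv(program, GL_PROGRAM_INPUT, index,
                       1, props, 1, nullptr, &type);
if (type == GL_FLOAT) // true whether the header spells it 0x1406 or 5126
{
    // handle the float-typed resource
}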
I'm coming from a Java background and am making my first steps in C++ graphics programming. I worked through the ogl-dev tutorials (http://ogldev.atspace.co.uk/) and noticed that a macro with value 0xffffffff is defined. I understand that it encodes -1, but what I do not understand is why I should prefer this encoding over just writing -1. Is it for (backwards) compatibility? Does it have to do with some idiosyncrasy of C? Is it an idiom?
An example:
#define ERROR_VALUE 0xffffffff
and subsequently
GLuint location = glGetUniformLocation(m_shaderProg, uniformName);
if (location == ERROR_VALUE)
throw new exception("error happened");
why wouldn't I write
if (location == -1)
or define my macro as
#define ERROR_VALUE -1
Thank you :)
If you check the OpenGL specification (particularly section 7.6, page 134), you will find that glGetUniformLocation is actually specified to return a GLint, which is a 32-bit signed integer type. Calling glGetUniformLocation is equivalent to a call to glGetProgramResourceLocation, which has a return type of GLint as well and is specified to return the value -1 upon error.

The comparison of location to the 0xFFFFFFFF put there via replacement of the ERROR_VALUE macro just happens to work in the tutorial code because location is a GLuint rather than a GLint. If glGetUniformLocation actually returns -1 there, the -1 will first be implicitly converted to GLuint. This implicit conversion follows modulo arithmetic, so the -1 will wrap around to become 0xFFFFFFFF, since GLuint is a 32-bit unsigned integer type. If location were of a signed type instead, this would not work correctly.

As has been pointed out by Nicol Bolas, if you want to compare the result to some constant to check for success of this function, compare it to GL_INVALID_INDEX, which is there for exactly this purpose. Contrary to the macro defined in the tutorial code, GL_INVALID_INDEX is specified to be an unsigned integer of value 0xFFFFFFFF, which will cause any comparison to work out correctly because of the usual arithmetic conversions…
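A two-line sketch of that conversion in isolation:

GLuint location = static_cast<GLuint>(-1); // -1 wraps modulo 2^32
// location now holds 0xFFFFFFFF, so (location == ERROR_VALUE) is true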
As others have also noted in the comments above, I would not recommend that you consider the code presented in these tutorials to be representative of good C++. Using macros to define constants in particular is anything but great (see, e.g., here for more on that). We also don't normally use new to allocate an exception object to throw like here:
throw new exception("error happened");
In general, you'll want to avoid new in C++ unless you really need it (see, e.g., here for more on that). And if dynamic memory allocation is indeed what you need, then you'd use RAII (smart pointers) to take care of correctly handling the resource allocation…
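For illustration, a minimal sketch of how that snippet could look in more idiomatic C++, assuming you just want to detect the error (std::runtime_error stands in for whatever exception type you prefer):

#include <stdexcept>

GLint location = glGetUniformLocation(m_shaderProg, uniformName);
if (location == -1) // -1 is the documented error value for a GLint location
    throw std::runtime_error("uniform not found"); // thrown by value, no new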
As far as I could find, the width of the bool type is implementation-defined. But are there any fixed-width boolean types, or should I stick to, e.g., a uint8_t to represent a fixed-width bool?
[EDIT]
I made this Python script that auto-generates a C++ class which can hold the variables I want to be able to send between a microcontroller and my computer. The way it works is that it also keeps two arrays holding a pointer to each of these variables and the sizeof each of them. This gives me the necessary information to easily serialize and deserialize each of these variables. For this to work, however, the sizeof, endianness, etc. of the variable types have to be the same on both sides, since I'm using the same generated code on both sides.
I don't know if this will be a problem yet, but I don't expect it to be. I have already worked with this (32bit ARM) chip before and haven't had problems sending integer and float types in the past. However it will be a few days until I'm back and can try booleans out on the chip. This might be a bigger issue later, since this code might be reused on other chips later.
So my question is: is there a fixed-width bool type defined in the standard library, or should I just use a uint8_t to represent the boolean?
There is not. Just use uint8_t if you need to be sure of the size. Any integer type can easily be treated as boolean in C-related languages. See https://stackoverflow.com/a/4897859/1105015 for a lengthy discussion of how bool's size is not guaranteed by the standard to be any specific value.
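If you go that route, a minimal sketch of how the on-wire representation could be pinned down (the names wire_bool, to_wire and from_wire are made up for illustration):

#include <cstdint>

using wire_bool = std::uint8_t; // 0 = false, anything else = true
static_assert(sizeof(wire_bool) == 1, "wire_bool must be exactly one byte");

inline wire_bool to_wire(bool b) { return b ? 1 : 0; }
inline bool from_wire(wire_bool w) { return w != 0; }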
OpenGL buffer objects support various data types of well-defined width (GL_FLOAT is 32 bits, GL_HALF_FLOAT is 16 bits, GL_INT is 32 bits, ...).
How would one go about ensuring cross platform and futureproof types for OpenGL?
For example, feeding float data from a C++ array to a buffer object and saying its type is GL_FLOAT will not work on platforms where float isn't 32 bits.
While doing some research on this, I noticed a subtle but interesting change in how these types are defined in the GL specs. The change happened between OpenGL 4.1 and 4.2.
Up to OpenGL 4.1, the table that lists the data types (Table 2.2 in the recent spec documents) has the header Minimum Bit Width for the size column, and the table caption says (emphasis added by me):
GL types are not C types. Thus, for example, GL type int is referred to as GLint outside this document, and is not necessarily equivalent to the C type int. An implementation may use more bits than the number indicated in the table to represent a GL type. Correct interpretation of integer values outside the minimum range is not required, however.
Starting with the OpenGL 4.2 spec, the table header changes to Bit Width, and the table caption to:
GL types are not C types. Thus, for example, GL type int is referred to as GLint outside this document, and is not necessarily equivalent to the C type int. An implementation must use exactly the number of bits indicated in the table to represent a GL type.
This influenced the answer to the question. If you go with the latest definition, you can use standard sized type definitions instead of the GL types in your code, and safely assume that they match. For example, you can use int32_t from <cstdint> instead of GLint.
Using the GL types is still the most straightforward solution. Depending on your code architecture and preferences, it might be undesirable, though. If you like to divide your software into components, and want to have OpenGL rendering isolated in a single component while providing a certain level of abstraction, you probably don't want to use GL types all over your code. Yet, once the data reaches the rendering code, it has to match the corresponding GL types.
As a typical example, say you have computational code that produces data you want to render. You may not want to have GLfloat types all over your computational code, because it can be used independent of OpenGL. Yet, once you're ready to display the result of the computation, and want to drop the data into a VBO for OpenGL rendering, the type has to be the same as GLfloat.
There are various approaches you can use. One is what I mentioned above, using sized types from standard C++ header files in your non-rendering code. Similarly, you can define your own typedefs that match the types used by OpenGL. Or, less desirable for performance reasons, you can convert the data where necessary, possibly based on comparing the sizeof() values between the incoming types and the GL types.
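As an example of the last two approaches, a small sketch of compile-time checks you could place next to the rendering code (assuming the non-rendering code uses the <cstdint> types):

#include <cstdint>
// plus whatever header provides the GL typedefs (GL/gl.h, glad, glew, ...)

static_assert(sizeof(GLfloat) == sizeof(float) && sizeof(float) == 4,
              "GLfloat is expected to be a 32-bit float");
static_assert(sizeof(GLint) == sizeof(std::int32_t),
              "GLint is expected to be 32 bits wide");
// If one of these checks fails, convert the data before uploading it to a VBO.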
In the OpenGL specification there are certain parameters which take a set of values of the form GL_OBJECTENUMERATIONi, with i ranging from 0 to some number indicated by something like GL_MAX_OBJECT. (Lights being an 'object', as one example.) It seems obvious that the number indicating the upper range is meant to be obtained through the glGet function, providing some indirection.
However, according to a literal interpretation of the OpenGL specification, the "texture" parameter for glActiveTexture must be one of GL_TEXTUREi, where i ranges from 0 to (GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS - 1). That would mean the set of accepted constants must be GL_TEXTURE0 to GL_TEXTURE35660, because the constant GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS itself has the value 35661.
Language-lawyering aside, this setup means that the subtype can be not only disjoint, but out of order as well, such that the following C-ish mapping would be valid:
#define GL_TEXTURE0 0x84C0
#define GL_TEXTURE1 0x84C1
#define GL_TEXTURE2 0x84C2
#define GL_TEXTURE3 0x84A0
#define GL_TEXTURE4 0x84A4
#define GL_TEXTURE5 0x84A5
#define GL_TEXTURE6 0x84A8
#define GL_TEXTURE7 0x84A2
First, is this actually an issue, or are the constants always laid out as if GL_OBJECTi = GL_OBJECTi-1 + 1?
If that relationship holds true then there is the possibility of using Ada's subtype feature to avoid passing in invalid parameters...
Ideally, something like:
-- This is an old [and incorrect] declaration using constants.
-- It's just here for an example.
SubType Texture_Number is Enum Range
GL_TEXTURE0..Enum'Max(
GL_MAX_TEXTURE_COORDS - 1,
GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS - 1);
But, if the maximum is dynamically determined then we have to do some monkeying about:
With GL_Constants;
Generic
GL_MAX_TEXTURE : Integer;
-- ...and one of those for EACH maximum for the ranges.
Package Types is
Use GL_Constants;
SubType Texture_Number is Enum Range
GL_TEXTURE0..GL_MAX_TEXTURE;
End Types;
with an instantiation of Package GL_TYPES is new Types( GL_MAX_TEXTURE => glGet(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS) ); and then using this new GL_TYPES package... a little more work, and a little more cumbersome than straight-out subtyping.
Most of this comes from being utterly new to OpenGL and not fully knowing/understanding it; but it does raise interesting questions as to the best way to proceed in building a good, thick Ada binding.
That would mean the set of accepted constants must be GL_TEXTURE0 to GL_TEXTURE35660, because the constant GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS itself has the value 35661.
No, it doesn't mean this. GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS is an implementation-dependent value that is to be queried at runtime using glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, out).
Regarding the rest: the OpenGL specification states that GL_TEXTUREi = GL_TEXTURE0 + i, and similarly for all other object types, with i < n, where n is some reasonable number.
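In C terms, the query and the derived range look roughly like this (a sketch; error handling omitted):

GLint maxUnits = 0;
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxUnits);

// Valid selectors are GL_TEXTURE0 + i for 0 <= i < maxUnits
glActiveTexture(GL_TEXTURE0 + 2); // texture unit 2, assuming maxUnits > 2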
This is one of those situations where I don't think getting extra-sexy with the types buys you a whole lot.
If you were to just make a special integer type for GL textures (type GL_Texture is range 16#84C0# .. 16#8B4C#;), and use that type for all parameters looking for GL textures, the compiler would prevent the user from doing math between those and other integer objects. That would probably be plenty. It is certainly way better than what the poor C/C++ coders are stuck with!
Then again, I've never been a proponent of super-thick Ada bindings. Ada bindings should be used to make the types more Ada-like, and to convert C error codes into exceptions. If there are other ways to save the user a bit of work, go ahead and do it. However, do not abstract away any of the power of the API!
There were multiple questions in the comments about my choice of using a separate numeric type rather than an Integer subtype.
It is in fact a common Ada noob mistake to start making yourself custom numeric types when integer subtypes will do, and then getting annoyed at all the type conversions you have to do. The classic example is someone making a type for velocity, then another type for distance, then another for force, and then finding they have to do a type conversion on every single damn math operation.
However, there are times when custom numeric types are called for. In particular, you want to use a custom numeric type whenever objects of that type should live in a separate type universe from normal integers. The most common occurrence of this happens in API bindings, where the number in question is actually a C-ish designation for some resource. That is the exact situation we have here. The only math you will ever want to do on GL_Textures is comparison with the type's bounds, and simple addition and subtraction by a literal amount. (Most likely GL_Texture'Succ will be sufficient.)
As a huge bonus, making it a custom type will prevent the common error of plugging a GL_Texture value into the wrong parameter in the API call. C API calls do love their ints...
In fact, if it were reasonable to sit and type them all in, I suspect you'd be tempted to just make the thing an enumeration. That'd be even less compatible with Integer without conversions, but nobody here would think twice about it.
OK, first rule you need to know about OpenGL: whenever you see something that says, "goes from X to Y", and one of those values is a GL_THINGY, they are not talking about the numeric value of GL_THINGY. They are talking about an implementation-dependent value that you query with GL_THINGY. This is typically an integer, so you use some form of glGetIntegerv to query it.
Next:
this setup means that the subtype can be not only disjoint, but out of order as well, such that the following C-ish mapping would be valid:
No, it wouldn't.
Every actual enumerator in OpenGL is assigned a specific value by the ARB. And the ARB-assigned values for the named GL_TEXTUREi enumerators are:
#define GL_TEXTURE0 0x84C0
#define GL_TEXTURE1 0x84C1
#define GL_TEXTURE2 0x84C2
#define GL_TEXTURE3 0x84C3
#define GL_TEXTURE4 0x84C4
#define GL_TEXTURE5 0x84C5
#define GL_TEXTURE6 0x84C6
#define GL_TEXTURE7 0x84C7
#define GL_TEXTURE8 0x84C8
Notice how they are all in a sequential ordering.
As for the rest, let me quote you from the OpenGL 4.3 specification on glActiveTexture:
An INVALID_ENUM error is generated if an invalid texture is specified. texture is a symbolic constant of the form TEXTUREi, indicating that texture unit i is to be modified. The constants obey TEXTUREi = TEXTURE0 + i, where i is in the range 0 to k - 1, and k is the value of MAX_COMBINED_TEXTURE_IMAGE_UNITS.
If you're creating a binding in some language, the general idea is this: don't strongly type certain values. This one in particular. Just take whatever the user gives you and pass it along. If the user gets an error, they get an error.
Better yet, expose a more reasonable version of glActiveTexture that takes an integer instead of an enumerator and do the addition yourself.
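For instance, a minimal sketch of such a wrapper (the name active_texture_unit is made up):

void active_texture_unit(int unit)
{
    // The caller passes 0, 1, 2, ...; the enumerator arithmetic stays in one place.
    glActiveTexture(GL_TEXTURE0 + unit);
}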
I'm using boost::any in combination with boost::any_cast<> to write some framework code which should take a set of arguments, almost like a function call, and convert them into an array of boost::any types.
So far everything has been working great, except in places where it is hard to predict whether the number the caller gives me is going to be signed or unsigned. A lot of code in our existing product (Windows-based) uses DWORD and BYTE data types for local variables, so if one of those variables is used, I get an unsigned type. However, if a constant is hardcoded, it will most likely be a plain number, in which case it will be signed.
Since I can't predict if I should do any_cast<int> or any_cast<unsigned int>, 50% of the time my code that reads the boost::any array will fail.
Does anyone know if there's a way to just get a number out of boost::any, regardless of whether the original type was signed or unsigned?
There isn't a way; boost::any does the simplest form of type-erasure, where the type must match exactly. You can write your own boost::any-like class that supports the additional features you want. I've previously demonstrated how this can be done.
Failing that, you can:
Have two code paths, one for each sign. (Switch to signed path if any_cast<unsigned T> throws.)
Try unsigned, and if that throws, try signed and cast, use a single code path.
Just let the unsigned any_cast throw if it's signed, and force the user to cope.
However, each of these isn't really that good. Do you really need boost::any? Perhaps you want boost::variant instead, if you're expecting a certain list of types.
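If you do stay with boost::any, a small helper along the lines of the second option might look like this (a sketch; it only covers int and unsigned int, and the name as_number is made up):

#include <boost/any.hpp>

long long as_number(const boost::any& a)
{
    // The pointer form of any_cast returns a null pointer instead of throwing on a mismatch.
    if (const unsigned int* u = boost::any_cast<unsigned int>(&a))
        return *u;
    if (const int* i = boost::any_cast<int>(&a))
        return *i;
    throw boost::bad_any_cast(); // neither a signed nor an unsigned int
}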