Why use 0xffffffff instead of -1? - c++

I'm coming from a Java background and am making my first steps in C++ graphics programming. I worked through the ogl-dev tutorials (http://ogldev.atspace.co.uk/) and noticed that a macro with the value 0xffffffff is defined. I understand that it encodes -1, but what I do not understand is why I should prefer this encoding over just writing -1. Is it for (backwards) compatibility? Does it have to do with some idiosyncrasy of C? Is it an idiom?
An example:
#define ERROR_VALUE 0xffffffff
and subsequently
GLuint location = glGetUniformLocation(m_shaderProg, uniformName);
if (location == ERROR_VALUE)
throw new exception("error happened");
why wouldn't I write
if (location == -1)
or define my macro as
#define ERROR_VALUE -1
Thank you :)

If you check the OpenGL specification (particularly section 7.6, page 134), you will find that glGetUniformLocation is actually specified to return a GLint, which is a 32-bit signed integer type. Calling glGetUniformLocation is equivalent to a call to glGetProgramResourceLocation, which has a return type of GLint as well and is specified to return the value -1 upon error.
The comparison of location to the 0xFFFFFFFF put there via replacement of the ERROR_VALUE macro just happens to work in the tutorial code because location is a GLuint rather than a GLint. If glGetUniformLocation actually returns -1 there, the -1 will first be implicitly converted to GLuint. This implicit conversion follows modulo arithmetic, so the -1 wraps around to become 0xFFFFFFFF, since GLuint is a 32-bit unsigned integer type. If location were of a signed type instead, this would not work correctly.
As Nicol Bolas has pointed out, if you want to compare the result of this function against some constant to check for success, compare it to GL_INVALID_INDEX, which is there for exactly this purpose. Contrary to the macro defined in the tutorial code, GL_INVALID_INDEX is specified to be an unsigned integer of value 0xFFFFFFFF, which will cause any comparison to work out correctly because of the usual arithmetic conversions…
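To see the wrap-around concretely, here is a minimal sketch of that implicit conversion (my illustration, not the tutorial's code), using fixed-width types in place of GLint/GLuint, which the spec defines as 32-bit signed and unsigned integers:
#include <cstdint>
#include <iostream>
int main() {
    std::int32_t  error_result = -1;           // what glGetUniformLocation reports on error
    std::uint32_t location     = error_result; // modulo conversion: -1 wraps to 0xFFFFFFFF
    std::cout << std::hex << location << '\n';      // prints ffffffff
    std::cout << (location == 0xFFFFFFFFu) << '\n'; // prints 1: the macro comparison matches
}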
As others have also noted in the comments above, I would not recommend that you consider the code presented in these tutorials to be representative of good C++. Using macros to define constants in particular is anything but great (see, e.g., here for more on that). We also don't normally use new to allocate an exception object to throw like here:
throw new exception("error happened");
In general, you'll want to avoid new in C++ unless you really need it (see, e.g., here for more on that). And if dynamic memory allocation is indeed what you need, then you'd use RAII (smart pointers) to take care of correctly handling the resource allocation…
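For instance, the idiomatic pattern is to throw by value and catch by (const) reference; here is a sketch using std::runtime_error instead of the tutorial's exception type, with a hypothetical helper name of my own (it assumes the GL headers are included):
#include <stdexcept>
// Hypothetical helper, shown only to illustrate throw-by-value:
GLuint get_uniform_location_or_throw(GLuint program, const char* name) {
    GLint location = glGetUniformLocation(program, name);
    if (location == -1)
        throw std::runtime_error("uniform not found");   // no 'new' here
    return static_cast<GLuint>(location);
}
// At the call site: try { ... } catch (const std::exception& e) { /* use e.what() */ }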

Related

Prefixing hex value with 0x

I'm querying a GL_TYPE in OpenGL and it's reporting back the hexadecimal value as an integer, as it should.
For example: 0x1406 is #define'd as GL_FLOAT but is being given to me from OpenGL in integer form as 5126.
Unfortunately OpenGL doesn't just return the type and it also doesn't just accept the integer (read: hex) value back. It apparently needs it to be prefixed with 0x before being used.
I'm trying to save myself a switch/case and instead cast/convert on the fly but I don't see another way. Do I have any other options? No boost please.
It's unclear what sort of "conversion" you have in mind: 0x1406 equals 5126. They're just different ways of writing the same number in source code, and the compiler translates them both into the binary form that's used at runtime.
You should be able to just use the == operator to compare the result of glGetProgramResource against a constant like GL_FLOAT, regardless of whether that constant is defined as 0x1406 or 5126 in the source code.
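As a quick illustration (a sketch; it assumes the GL headers and that the type value came from whatever query you are using, e.g. glGetActiveUniform):
GLenum type = 0;
// ... type filled in by the query ...
if (type == GL_FLOAT) {   // GL_FLOAT is #define'd as 0x1406, i.e. 5126
    // handle float data
}
// 0x1406 == 5126 evaluates to true: they are the same number written two ways.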

Bug in std::basic_string in special case of allocator

I use g++ and I have defined a custom allocator where the size_type is byte.
I am using it with basic_string to create custom strings.
The "basic_string.tcc" code behaves erroneously because in the code of
_S_create(size_type __capacity, size_type __old_capacity, const _Alloc& __alloc)
the code checks for
const size_type __extra = __pagesize - __adj_size % __pagesize;
But all the arithmetic is byte arithmetic, and so __pagesize, which should have the value 4096, becomes 0 (because 4096 is a multiple of 256), and we get a "division by 0" exception (the code hangs).
The question isn't what I should do, but how I could request a correction to the above code, and from whom? (I may implement those corrections myself.)
Before you can request or suggest a change to something like that, you have to establish a strong case that there is indeed a problem that needs to be fixed. In my view there probably is not.
The question is: under which circumstances would it be legitimate (or useful) to define a size_type as unsigned char? I am not aware of anything in the standard that specifically disallows this choice. It is defined as
unsigned integer type - a type that can represent the size of the largest object in the allocation model.
And unsigned char is definitely an unsigned integer type as per s3.9.1. Interesting.
So is it useful? Clearly you seem to think so, but I'm not sure your case is strongly made out. You could work on providing evidence that this is an issue worth resolving.
So it seems to me the process is:
Establish whether unsigned char is intended to be included as a valid choice in the standard, or whether it should be excluded, or was just overlooked.
Raise a 'standards non-compliance' issue with the team for each compiler that has the problem, providing good reasoning and a repro case.
Consider submitting a patch, if this is something within your ability to fix.
Or you could just use short unsigned int instead. I would.
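To make the failure mode concrete, here is a minimal sketch (my illustration, not the libstdc++ code) of what happens when a page-size constant is squeezed through an 8-bit size_type:
#include <iostream>
int main() {
    using size_type = unsigned char;                    // the allocator's 8-bit size_type
    size_type pagesize = static_cast<size_type>(4096);  // 4096 % 256 == 0, so this becomes 0
    std::cout << static_cast<int>(pagesize) << '\n';    // prints 0
    // _S_create then computes __adj_size % __pagesize, i.e. a modulo by zero:
    // undefined behaviour, which shows up as the crash/hang described above.
}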

Trouble reading line of code with reference & dereference operators

I'm having trouble reading through a series of * and & operators in order to understand two lines of code within a method. The lines are:
int dummy = 1;
if (*(char*)&dummy) { //Do stuff
}
As best I can determine:
dummy is allocated on the stack and its value is set to 1
&dummy returns the memory location being used by dummy (i.e. where the 1 is)
(char*)&dummy casts &dummy into a pointer to a char, instead of a pointer to an int
*(char*)&dummy dereferences (char*)&dummy, returning whatever char has a numeric value of 1
This seems like an awfully confusing way to say:
if (1) { //Do stuff }
Am I understanding these lines correctly? If so, why would someone do this (other than to confuse me)?
The code is certainly not portable but is apparently intended to detect the endianness of the system: where the byte holding the non-zero bit of int(1) is located depends on whether the system is big- or little-endian. In one case the result of the expression is assumed to be 0, in the other case it is assumed to be non-zero. I think it is undefined behavior anyway, though. Also, in theory there is DS9k endianness, which garbles the bytes up entirely (although I don't think there is any system which actually does that).
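For comparison, here is a sketch of the same little-endian test written so the intent is visible (my illustration, not the code from the question):
#include <cstring>
#include <cstdint>
bool is_little_endian() {
    std::uint32_t value = 1;
    unsigned char first_byte;
    std::memcpy(&first_byte, &value, 1);   // inspect the lowest-addressed byte
    return first_byte == 1;                // 1 on little-endian, 0 on big-endian
}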

What is the best way to subtype numeric parameters for OpenGL?

In the OpenGL specification there are certain parameters which take a set of values of the form GL_OBJECTENUMERATIONi, with i ranging from 0 to some number indicated by something like GL_MAX_OBJECT. (Lights being an 'object', as one example.) It seems obvious that the number indicated as the upper range is to be obtained through the glGet function, providing some indirection.
However, according to a literal interpretation of the OpenGL specification, the "texture" parameter for glActiveTexture must be one of GL_TEXTUREi, where i ranges from 0 to (GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS - 1). Read literally, this would mean that the set of accepted constants is GL_TEXTURE0 to GL_TEXTURE35660, because the GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS constant itself has the value 35661.
Language-lawyering aside, this setup means that the subtype can be not only disjoint, but out of order as well, such that the following C-ish mapping would be valid:
#define GL_TEXTURE0 0x84C0
#define GL_TEXTURE1 0x84C1
#define GL_TEXTURE2 0x84C2
#define GL_TEXTURE3 0x84A0
#define GL_TEXTURE4 0x84A4
#define GL_TEXTURE5 0x84A5
#define GL_TEXTURE6 0x84A8
#define GL_TEXTURE7 0x84A2
First, is this actually an issue, or are the constants always laid out such that GL_OBJECT(i) = GL_OBJECT(i-1) + 1?
If that relationship holds true then there is the possibility of using Ada's subtype feature to avoid passing in invalid parameters...
Ideally, something like:
-- This is an old [and incorrect] declaration using constants.
-- It's just here for an example.
SubType Texture_Number is Enum Range
GL_TEXTURE0..Enum'Max(
GL_MAX_TEXTURE_COORDS - 1,
GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS - 1);
But, if the maximum is dynamically determined then we have to do some monkeying about:
With GL_Constants;
Generic
GL_MAX_TEXTURE : Integer;
-- ...and one of those for EACH maximum for the ranges.
Package Types is
Use GL_Constants;
SubType Texture_Number is Enum Range
GL_TEXTURE0..GL_MAX_TEXTURE;
End Types;
with an instantiation of Package GL_TYPES is new Types( GL_MAX_TEXTURE => glGet(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS) ); and then using this new GL_TYPES package... a little more work, and a little more cumbersome than straight-out subtyping.
Most of this comes from being utterly new to OpenGL and not fully knowing/understanding it; but it does raise interesting questions as to the best way to proceed in building a good, thick Ada binding.
Read literally, this would mean that the set of accepted constants is GL_TEXTURE0 to GL_TEXTURE35660, because the GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS constant itself has the value 35661.
No, it doesn't mean that. GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS is an implementation-dependent value that is to be queried at runtime using glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, out).
Regarding the rest: the OpenGL specification states that GL_TEXTUREi = GL_TEXTURE0 + i, and similarly for all other object types, with i < n, where n is some reasonable number.
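In C-ish terms (my sketch, not part of the original answer), the distinction between the token and the queried limit looks like this:
GLint max_units = 0;
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &max_units);
// max_units is the implementation-dependent count of texture units;
// it is NOT 35661, which is merely the numeric value of the
// GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS token itself (0x8B4D).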
This is one of those situations where I don't think getting extra-sexy with the types buys you a whole lot.
If you were to just make a special integer type for GL textures (type GL_Texture is range 16#84C0# .. 16#8B4C#;), and use that type for all parameters looking for GL textures, the compiler would prevent the user from doing math between those and other integer objects. That would probably be plenty. It is certainly way better than what the poor C/C++ coders are stuck with!
Then again, I've never been a proponent of super-thick Ada bindings. Ada bindings should be used to make the types more Ada-like, and to convert C error codes into exceptions. If there are other ways to save the user a bit of work, go ahead and do it. However, do not abstract away any of the power of the API!
There were multiple questions in the comments about my choice of using a separate numeric type rather than an Integer subtype.
It is in fact a common Ada noob mistake to start making yourself custom numeric types when integer subtypes will do, and then getting annoyed at all the type conversions you have to do. The classic example is someone making a type for velocity, then another type for distance, then another for force, and then finding they have to do a type conversion on every single damn math operation.
However, there are times when custom numeric types are called for. In particular, you want to use a custom numeric type whenever objects of that type should live in a separate type universe from normal integers. The most common occurrence of this happens in API bindings, where the number in question is actually a C-ish designation for some resource. This is the exact situation we have here. The only math you will ever want to do on GL_Textures is comparison with the type's bounds, and simple addition and subtraction by a literal amount. (Most likely GL_Texture'Succ will be sufficient.)
As a huge bonus, making it a custom type will prevent the common error of plugging a GL_Texture value into the wrong parameter in the API call. C API calls do love their ints...
In fact, if it were reasonable to sit and type them all in, I suspect you'd be tempted to just make the thing an enumeration. That'd be even less compatible with Integer without conversions, but nobody here would think twice about it.
OK, first rule you need to know about OpenGL: whenever you see something that says, "goes from X to Y", and one of those values is a GL_THINGY, they are not talking about the numeric value of GL_THINGY. They are talking about an implementation-dependent value that you query with GL_THINGY. This is typically an integer, so you use some form of glGetIntegerv to query it.
Next:
this setup means that the subtype can be not only disjoint, but out of order as well, such that the following C-ish mapping would be valid:
No, it wouldn't.
Every actual enumerator in OpenGL is assigned a specific value by the ARB. And the ARB-assigned values for the named GL_TEXTUREi enumerators are:
#define GL_TEXTURE0 0x84C0
#define GL_TEXTURE1 0x84C1
#define GL_TEXTURE2 0x84C2
#define GL_TEXTURE3 0x84C3
#define GL_TEXTURE4 0x84C4
#define GL_TEXTURE5 0x84C5
#define GL_TEXTURE6 0x84C6
#define GL_TEXTURE7 0x84C7
#define GL_TEXTURE8 0x84C8
Notice how they are all in a sequential ordering.
As for the rest, let me quote you from the OpenGL 4.3 specification on glActiveTexture:
An INVALID_ENUM error is generated if an invalid texture is specified. texture is a symbolic constant of the form TEXTUREi, indicating that texture unit i is to be modified. The constants obey TEXTUREi = TEXTURE0 + i, where i is in the range 0 to k - 1, and k is the value of MAX_COMBINED_TEXTURE_IMAGE_UNITS.
If you're creating a binding in some language, the general idea is this: don't strongly type certain values. This one in particular. Just take whatever the user gives you and pass it along. If the user gets an error, they get an error.
Better yet, expose a more reasonable version of glActiveTexture that takes an integer instead of an enumerator and does the addition yourself.
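For example, a thin wrapper along those lines might look like this in C++ (a sketch; the name active_texture_unit is mine, not part of any binding):
#include <stdexcept>
// Hypothetical wrapper: callers pass a plain unit index; the wrapper does the
// GL_TEXTURE0 + i addition and the range check internally.
void active_texture_unit(GLint unit) {
    GLint max_units = 0;
    glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &max_units);
    if (unit < 0 || unit >= max_units)
        throw std::out_of_range("texture unit index out of range");
    glActiveTexture(GL_TEXTURE0 + unit);
}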

What is the impact of ignoring OpenGL typedefs?

So, I am using OpenGL which typedefs unsigned integer -> GLuint.
For some reason it feels wrong to sprinkle my program with GLuint, instead of the more generic unsigned integer or uint32_t.
Any thoughts on negative/positive aspects of ignoring the typedefs?
The typedefs are there to make your code more portable. If you ever wanted to move to a platform in which a GLuint may have a different underlying type (For whatever reason), it would be wise to use the typedef.
There is always the chance that your code gets ported to a platform where GLuint != unsigned int. If you are going to ignore the typedefs, then at least add some compile-time checks that result in a compilation error if the types are different from what is expected.
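For instance, such a compile-time check could look like this (a sketch; it assumes the GL headers are already included):
#include <cstdint>
#include <type_traits>
// Fail the build if GLuint is not what the rest of the code assumes.
static_assert(std::is_same<GLuint, unsigned int>::value,
              "GLuint is not unsigned int on this platform");
static_assert(sizeof(GLuint) == sizeof(std::uint32_t),
              "GLuint is not 32 bits wide on this platform");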
In general, see the above answers by K-ballo and Chad La Guardia; that's the intent behind such typedefs. That, and in some cases to hide the actual datatype in case the API changes in a future revision (not likely to happen with OpenGL, but I've seen it happen). In case the datatype changes, this requires a recompilation, but no code changes.
Still, one has to say that library developers often overdo this particular aspect of portability to the point of silliness.
In this particular case, the OpenGL specification is very clear about what a GLuint is (chapter 2.4). It is an unsigned integer of at least 32 bits length. They don't leave much room for interpretation or change.
As such, there is no chance it could ever be anything other than a uint32_t (that is the very definition of uint32_t), and there is no good reason why you couldn't use uint32_t in its stead if you prefer to do so (other than that using GLuint makes it explicit that a variable is meant to be used with OpenGL, but meh).
It might in principle still be something different from an unsigned int, of course, since not much is said about the precise size of an int (other than sizeof(long) >= sizeof(int) >= sizeof(short)).