The glTexImage2D function accepts, among others, the so-called "base internal formats" listed below:
GL_DEPTH_COMPONENT
GL_DEPTH_STENCIL
GL_RED
GL_RG
GL_RGB
GL_RGBA
These appear to lack any type or size information. Quoting the documentation:
If an application wants to store the texture at a certain resolution or in a certain format, it can request the resolution and format with internalFormat. The GL will choose an internal representation that closely approximates that requested by internalFormat, but it may not match exactly. (The representations specified by GL_RED, GL_RG, GL_RGB, and GL_RGBA must match exactly.)
These formats are not listed in the documentation for glTexStorage2D, and I cannot find anything more precise about them. I've apparently made GL_RGBA work by dumb luck, but I would like to understand better what I'm doing here.
What exactly happens when I use a base internal format?
How are the type, size and layout of storage chosen?
Am I expected to query the chosen format afterwards and adjust the rest of my program, such as GLSL sampler types?
The reason glTexStorage* doesn't allow "base internal formats" is because you should not be using them. It only supports internal formats with well-defined behavior.
What exactly happens when I use a base internal format?
You get an image format that is whatever the driver wants to give you, within the limits listed below.
How are the type, size and layout of storage chosen?
The "type" will for such textures always be unsigned, normalized integers. It will have at least the channels in the base internal format.
Everything else is entirely up to the driver. If the driver wants to add more channels than you specified, it can do so. If the driver wants to give you a 16-bit-per-channel format, it can do so.
Am I expected to query the chosen format afterwards and adjust the rest of my program, such as GLSL sampler types?
There's nothing to query, since they're always unsigned, normalized images. So you use sampler*, not usampler* or isampler* in shaders.
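For illustration, here is a minimal C++ sketch, assuming GLEW, a texture bound to GL_TEXTURE_2D, and caller-supplied width, height and pixels (all placeholder names), that contrasts a base internal format with a sized one:

#include <GL/glew.h>

// Base internal format: the driver chooses the exact storage, but the result
// always behaves as an unsigned normalized RGBA texture.
void upload_with_base_format(int width, int height, const unsigned char* pixels)
{
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}

// Sized internal format: the storage is fully specified (8 bits per channel),
// which is the kind of format glTexStorage2D (GL 4.2+) requires.
void allocate_with_sized_format(int width, int height)
{
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height);
}

// In GLSL, both variants are sampled with a plain float sampler:
//   uniform sampler2D tex;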
Related
The OpenGL specification requires that a framebuffer supports at least 8 color attachments. Now, OpenGL uses compile-time constants (at least on my system) for things like GL_COLOR_ATTACHMENTi, and GL_DEPTH_ATTACHMENT comes 32 units after GL_COLOR_ATTACHMENT0. Doesn't this mean that regardless of how beefy the hardware is, it will never be possible to use more than 32 color attachments? To clarify, this compiles perfectly with GLEW on Ubuntu 16.04:
static_assert(GL_COLOR_ATTACHMENT0 + 32==GL_DEPTH_ATTACHMENT,"");
and since it is a static_assert, this would be true for any hardware configuration (unless the driver installer modifies the header files, which would result in non-portable binaries). Wouldn't separate functions for different attachment classes have been better, as that removes the possibility of colliding constants?
It is important to note the difference in spec language. glActiveTexture says this about its parameter:
An INVALID_ENUM error is generated if an invalid texture is specified.
texture is a symbolic constant of the form TEXTUREi, indicating that texture unit i is to be modified. Each TEXTUREi adheres to TEXTUREi = TEXTURE0 + i, where i is in the range zero to k−1, and k is the value of MAX_COMBINED_TEXTURE_IMAGE_UNITS
This text explicitly allows you to compute the enum value, explaining exactly how to do so and what the limits are.
Compare this to what it says about glFramebufferTexture:
An INVALID_ENUM error is generated if attachment is not one of the attachments in table 9.2, and attachment is not COLOR_ATTACHMENTm where m is greater than or equal to the value of MAX_COLOR_ATTACHMENTS.
It looks similar. But note that it doesn't have the language about the value of those enumerators. There's nothing in that description about COLOR_ATTACHMENTm = COLOR_ATTACHMENT0 + m.
As such, it is illegal to use any value other than those specific enums. Now yes, the spec does guarantee elsewhere that COLOR_ATTACHMENTm = COLOR_ATTACHMENT0 + m. But because the guarantee isn't in that section, that section explicitly prohibits the use of any value other than an actual enumerator. Regardless of how you compute it, the result must be an actual enumerator.
So to answer your question: at present, there are only 32 color attachment enumerators. Therefore, MAX_COLOR_ATTACHMENTS has an effective maximum value of 32.
The OpenGL 4.5 spec states in Section 9.2:
... by the framebuffer attachment points named COLOR_ATTACHMENT0 through COLOR_ATTACHMENTn. Each COLOR_ATTACHMENTi adheres to COLOR_ATTACHMENTi = COLOR_ATTACHMENT0 + i
and as a footnote
The header files define tokens COLOR_ATTACHMENTi for i in the range [0, 31]. Most implementations support fewer than 32 color attachments, and it is an INVALID_OPERATION error to pass an unsupported attachment name to a command accepting color attachment names.
My interpretation of this is that it is (as long as the hardware supports it) perfectly fine to use COLOR_ATTACHMENT0 + 32 and so on to address more than 32 attachment points. So there is no real limit on the number of supported color attachments; the constants beyond 31 are just not defined directly. Why it was designed that way can only be answered by people from the Khronos Group.
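Either way, the practical pattern is the same; here is a hedged C++ sketch (GLEW assumed; fbo and textures are placeholder names for objects created elsewhere) that queries the runtime limit and computes attachment names only within it:

#include <GL/glew.h>

void attach_all_color_targets(GLuint fbo, const GLuint* textures)
{
    GLint max_attachments = 0;
    glGetIntegerv(GL_MAX_COLOR_ATTACHMENTS, &max_attachments);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    for (GLint i = 0; i < max_attachments; ++i)
    {
        // Computed attachment name; only valid while i stays below the
        // queried GL_MAX_COLOR_ATTACHMENTS limit.
        glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                             textures[i], 0);
    }
}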
The documentation for glDrawElementsIndirect, glDrawArraysIndirect, glMultiDrawElementsIndirect, etc. says things like this about the structure of the commands that must be given to them:
The parameters addressed by indirect are packed into a structure that takes the form (in C):
typedef struct {
    uint count;
    uint instanceCount;
    uint firstIndex;
    uint baseVertex;
    uint baseInstance;
} DrawElementsIndirectCommand;
When a struct representing a vertex is uploaded to OpenGL, it's not just sent there as a block of data--there are also calls like glVertexAttribFormat() that tell OpenGL where to find attribute data within the struct. But as far as I can tell from reading documentation and such, nothing like that happens with these indirect drawing commands. Instead, I gather, you just write your drawing-command struct in C++, like the above, and then send it over via glBufferData or the like.
The OpenGL headers I'm using declare types such as GLuint, so I guess I can be confident that the ints in my command struct will be the right size and have the right format. But what about the alignment of the fields and the size of the struct? It appears that I just have to trust OpenGL to expect exactly what I happen to send--and from what I read, that could in theory vary depending on what compiler I use. Does that mean that, technically, I just have to expect that I will get lucky and have my C++ compiler choose just the struct format that OpenGL and/or my graphics driver and/or my graphics hardware expects? Or is there some guarantee of success here that I'm not grasping?
(Mind you, I'm not truly worried about this. I'm using a perfectly ordinary compiler, and planning to target commonplace hardware, and so I expect that it'll probably "just work" in practice. I'm mainly only curious about what would be considered strictly correct here.)
It is a buffer object (DRAW_INDIRECT_BUFFER to be precise); it is expected to contain a contiguous array of that struct. The correct type is, as you mentioned, GLuint. This is always a 32-bit unsigned integer type. You may see it referred to as uint in the OpenGL specification or in extensions, but understand that in the C language bindings you are expected to add GL to any such type name.
You generally are not going to run into alignment issues on desktop platforms on this data structure since each field is a 32-bit scalar. The GPU can fetch those on any 4-byte boundary, which is what a compiler would align each of the fields in this structure to. If you threw a ubyte somewhere in there, then you would need to worry, but of course you would then be using the wrong data structure.
As such there is only one requirement on the GL side of things, which stipulates that the beginning of this struct has to begin on a word-aligned boundary. That means only addresses (offsets) that are multiples of 4 will work when calling glDrawElementsIndirect (...). Any other address will yield GL_INVALID_OPERATION.
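To make that concrete, here is a hedged C++ sketch (GLEW assumed; indirectBuffer and indexCount are placeholders, and a suitable VAO, element buffer and program are assumed to be bound) that fills one command, uploads it at offset 0, and issues the draw:

#include <GL/glew.h>

struct DrawElementsIndirectCommand {
    GLuint count;
    GLuint instanceCount;
    GLuint firstIndex;
    GLuint baseVertex;
    GLuint baseInstance;
};

// Five tightly packed 32-bit fields; a compile-time sanity check costs nothing.
static_assert(sizeof(DrawElementsIndirectCommand) == 5 * sizeof(GLuint),
              "unexpected padding in indirect command struct");

void draw_one_indirect(GLuint indirectBuffer, GLuint indexCount)
{
    DrawElementsIndirectCommand cmd = { indexCount, 1, 0, 0, 0 };

    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
    glBufferData(GL_DRAW_INDIRECT_BUFFER, sizeof(cmd), &cmd, GL_STATIC_DRAW);

    // The last argument is an offset into the bound indirect buffer;
    // it must be a multiple of 4 (here 0).
    glDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, nullptr);
}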
OpenGL buffer objects support various data types of well defined width (GL_FLOAT is 32 bit, GL_HALF_FLOAT is 16 bit, GL_INT is 32 bit ...)
How would one go about ensuring cross platform and futureproof types for OpenGL?
For example, feeding float data from a C++ array to a buffer object and saying its type is GL_FLOAT will not work on platforms where float isn't 32 bit.
While doing some research on this, I noticed a subtle but interesting change in how these types are defined in the GL specs. The change happened between OpenGL 4.1 and 4.2.
Up to OpenGL 4.1, the table that lists the data types (Table 2.2 in the recent spec documents) has the header Minimum Bit Width for the size column, and the table caption says (emphasis added by me):
GL types are not C types. Thus, for example, GL type int is referred to as GLint outside this document, and is not necessarily equivalent to the C type int. An implementation may use more bits than the number indicated in the table to represent a GL type. Correct interpretation of integer values outside the minimum range is not required, however.
Starting with the OpenGL 4.2 spec, the table header changes to Bit Width, and the table caption to:
GL types are not C types. Thus, for example, GL type int is referred to as GLint outside this document, and is not necessarily equivalent to the C type int. An implementation must use exactly the number of bits indicated in the table to represent a GL type.
This influenced the answer to the question. If you go with the latest definition, you can use standard sized type definitions instead of the GL types in your code, and safely assume that they match. For example, you can use int32_t from <cstdint> instead of GLint.
Using the GL types is still the most straightforward solution. Depending on your code architecture and preferences, it might be undesirable, though. If you like to divide your software into components, and want to have OpenGL rendering isolated in a single component while providing a certain level of abstraction, you probably don't want to use GL types all over your code. Yet, once the data reaches the rendering code, it has to match the corresponding GL types.
As a typical example, say you have computational code that produces data you want to render. You may not want to have GLfloat types all over your computational code, because it can be used independent of OpenGL. Yet, once you're ready to display the result of the computation, and want to drop the data into a VBO for OpenGL rendering, the type has to be the same as GLfloat.
There are various approaches you can use. One is what I mentioned above, using sized types from standard C++ header files in your non-rendering code. Similarly, you can define your own typedefs that match the types used by OpenGL. Or, less desirable for performance reasons, you can convert the data where necessary, possibly based on comparing the sizeof() values between the incoming types and the GL types.
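As a small sketch of those approaches (the mymath namespace and its aliases are purely hypothetical names), compile-time checks make the size assumptions explicit:

#include <GL/glew.h>
#include <cstdint>

// Approach 1: use standard sized types in non-rendering code and verify at
// compile time that they match the GL types used by the rendering component.
static_assert(sizeof(std::int32_t) == sizeof(GLint), "GLint is not 32 bit");
static_assert(sizeof(float) == sizeof(GLfloat), "GLfloat is not 32 bit");

// Approach 2: project-local aliases that the computational code can use
// without pulling an OpenGL header into every translation unit.
namespace mymath {
    using scalar = float;          // must stay layout-compatible with GLfloat
    using index  = std::uint32_t;  // must stay layout-compatible with GLuint
}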
In the OpenGL specification there are certain parameters which take a set of values of the form GL_OBJECTENUMERATIONi, with i ranging from 0 to some number indicated by something like GL_MAX_OBJECT. (Lights being an 'object', as one example.) It seems obvious that the number indicating the upper range is to be obtained through the glGet function, providing some indirection.
However, according to a literal interpretation of the OpenGL specification, the "texture" parameter for glActiveTexture must be one of GL_TEXTUREi, where i ranges from 0 to (GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS - 1). That would mean the set of accepted constants must be GL_TEXTURE0 to GL_TEXTURE35660, because the constant GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS has the value 35661.
Language-lawyering aside, this setup means that the subtype can be not only disjoint, but out of order as well, such that the following C-ish mapping would be valid:
#define GL_TEXTURE0 0x84C0
#define GL_TEXTURE1 0x84C1
#define GL_TEXTURE2 0x84C2
#define GL_TEXTURE3 0x84A0
#define GL_TEXTURE4 0x84A4
#define GL_TEXTURE5 0x84A5
#define GL_TEXTURE6 0x84A8
#define GL_TEXTURE7 0x84A2
First, is this actually an issue, or are the constants always laid out such that GL_OBJECTi = GL_OBJECT(i-1) + 1?
If that relationship holds true then there is the possibility of using Ada's subtype feature to avoid passing in invalid parameters...
Ideally, something like:
-- This is an old [and incorrect] declaration using constants.
-- It's just here for an example.
SubType Texture_Number is Enum Range
   GL_TEXTURE0..Enum'Max(
      GL_MAX_TEXTURE_COORDS - 1,
      GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS - 1);
But, if the maximum is dynamically determined then we have to do some monkeying about:
With GL_Constants;
Generic
   GL_MAX_TEXTURE : Integer;
   -- ...and one of those for EACH maximum for the ranges.
Package Types is
   Use GL_Constants;
   SubType Texture_Number is Enum Range
      GL_TEXTURE0..GL_MAX_TEXTURE;
End Types;
with an instantiation of Package GL_TYPES is new Types( GL_MAX_TEXTURE => glGet(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS) ); and then using this new GL_TYPES package... a little more work, and a little more cumbersome than straight-out subtyping.
Most of this comes from being utterly new to OpenGL and not fully knowing/understanding it; but it does raise interesting questions as to the best way to proceed in building a good, thick Ada binding.
That would mean the set of accepted constants must be GL_TEXTURE0 to GL_TEXTURE35660, because the constant GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS has the value 35661.
No, it doesn't mean this. GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS is an implementation-dependent value that is to be queried at runtime using glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &out).
Regarding the rest: the OpenGL specification states that GL_TEXTUREi = GL_TEXTURE0 + i, and similarly for all other such object types, with i < n, where n is some reasonable, implementation-dependent number.
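A minimal C++ sketch of that runtime query (GLEW and a current context assumed):

#include <GL/glew.h>

void walk_texture_units()
{
    GLint max_units = 0;
    glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &max_units);

    // Valid selectors are GL_TEXTURE0 .. GL_TEXTURE0 + (max_units - 1).
    for (GLint i = 0; i < max_units; ++i)
        glActiveTexture(GL_TEXTURE0 + i);
}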
This is one of those situations where I don't think getting extra-sexy with the types buys you a whole lot.
If you were to just make a special integer type for GL_Texture (type GL_Texture is range 16#84C0# .. 16#8B4C#;), and use that type for all parameters looking for GL textures, the compiler would prevent the user from doing math between those and other integer objects. That would probably be plenty. It is certainly way better than what the poor C/C++ coders are stuck with!
Then again, I've never been a proponent of super-thick Ada bindings. Ada bindings should be used to make the types more Ada-like, and to convert C error codes into exceptions. If there are other ways to save the user a bit of work, go ahead and do it. However, do not abstract away any of the power of the API!
There were multiple questions in the comments about my choice of using a separate numeric type rather than an Integer subtype.
It is in fact a common Ada noob mistake to start making yourself custom numeric types when integer subtypes will do, and then getting annoyed at all the type conversions you have to do. The classic example is someone making a type for velocity, then another type for distance, then another for force, and then finding they have to do a type conversion on every single damn math operation.
However, there are times when custom numeric types are called for. In particular, you want to use a custom numeric type whenever objects of that type should live in a separate type universe from normal integers. The most common occurrence of this happens in API bindings, where the number in question is actually a C-ish designation for some resource. That is the exact situation we have here. The only math you will ever want to do on GL_Textures is comparison with the type's bounds, and simple addition and subtraction by a literal amount. (Most likely GL_Texture'Succ will be sufficient.)
As a huge bonus, making it a custom type will prevent the common error of plugging a GL_Texture value into the wrong parameter in the API call. C API calls do love their ints...
In fact, if it were reasonable to sit and type them all in, I suspect you'd be tempted to just make the thing an enumeration. That'd be even less compatible with Integer without conversions, but nobody here would think twice about it.
OK, first rule you need to know about OpenGL: whenever you see something that says, "goes from X to Y", and one of those values is a GL_THINGY, they are not talking about the numeric value of GL_THINGY. They are talking about an implementation-dependent value that you query with GL_THINGY. This is typically an integer, so you use some form of glGetIntegerv to query it.
Next:
this setup means that the subtype can be not only disjoint, but out of order as well, such that the following C-ish mapping would be valid:
No, it wouldn't.
Every actual enumerator in OpenGL is assigned a specific value by the ARB. And the ARB-assigned values for the named GL_TEXTUREi enumerators are:
#define GL_TEXTURE0 0x84C0
#define GL_TEXTURE1 0x84C1
#define GL_TEXTURE2 0x84C2
#define GL_TEXTURE3 0x84C3
#define GL_TEXTURE4 0x84C4
#define GL_TEXTURE5 0x84C5
#define GL_TEXTURE6 0x84C6
#define GL_TEXTURE7 0x84C7
#define GL_TEXTURE8 0x84C8
Notice how they are all in a sequential ordering.
As for the rest, let me quote you from the OpenGL 4.3 specification on glActiveTexture:
An INVALID_ENUM error is generated if an invalid texture is specified. texture is a symbolic constant of the form TEXTUREi, indicating that texture unit i is to be modified. The constants obey TEXTUREi = TEXTURE0 + i where i is in the range 0 to k - 1, and k is the value of MAX_COMBINED_TEXTURE_IMAGE_UNITS.
If you're creating a binding in some language, the general idea is this: don't strongly type certain values. This one in particular. Just take whatever the user gives you and pass it along. If the user gets an error, they get an error.
Better yet, expose a more reasonable version of glActiveTexture that takes an integer instead of an enumerator and do the addition yourself.
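In C++ terms, a hedged sketch of such a wrapper might look like this (ActiveTextureUnit is a made-up name; error reporting is left to the GL):

#include <GL/glew.h>

// Take a plain unit index and do the TEXTURE0 + i addition inside the binding.
void ActiveTextureUnit(GLuint unit)
{
    glActiveTexture(GL_TEXTURE0 + unit);
}

// Usage: ActiveTextureUnit(3); instead of glActiveTexture(GL_TEXTURE3);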
I'm digging deeper into OpenGL texture mipmapping.
I noticed in the specification that mipmap levels less than zero and greater than log2(maxSize) + 1 are allowed.
Effectively, glTexImage2D doesn't specify errors for the level parameter. So... probably those mipmap levels are not accessed automatically by the standard texture access routines...
How could this feature be used effectively?
For the negative case, glTexImage2D's man page says:
GL_INVALID_VALUE is generated if level is less than 0.
For the greater-than-log2(maxSize) case, the specification says what happens to those levels under Rasterization/Texturing/Texture Completeness. The short of it is that, yes, they are ignored.
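For reference, a small sketch (hypothetical helper name) of how the largest meaningful level relates to the texture size:

#include <algorithm>
#include <cmath>

// Levels 0 .. floor(log2(max(width, height))) are the meaningful ones;
// a complete mipmap chain therefore has that many levels plus one.
int full_mip_levels(int width, int height)
{
    int max_dim = std::max(width, height);
    return 1 + static_cast<int>(std::floor(std::log2(max_dim)));
}

// e.g. a 1024x512 texture has levels 0..10, so full_mip_levels(1024, 512) == 11;
// levels beyond that are ignored for completeness, and negative levels raise
// GL_INVALID_VALUE.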