How many mipmaps does a texture have in OpenGL?

Nevermind that I'm the one who created the texture in the first place and I should know perfectly well how many mipmaps I loaded/generated for it. I'm doing this for a unit test. There doesn't seem to be a glGetTexParameter parameter to find this out. The closest I've come is something like this:
int max_level;
glGetTexParameteriv( GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, &max_level );
int max_mipmap = -1;
for ( int i = 0; i <= max_level; ++i )
{
    int width;
    glGetTexLevelParameteriv( GL_TEXTURE_2D, i, GL_TEXTURE_WIDTH, &width );
    if ( 0 == width )
    {
        max_mipmap = i - 1;
        break;
    }
}
Anyhow, glGetTexLevelParameteriv() will return a width of 0 for a nonexistent mipmap on an NVIDIA GPU, but with Mesa it generates GL_INVALID_VALUE, which leads me to believe that this is very much the Wrong Thing To Do.
How do I find out which mipmap levels I've populated a texture with?

The spec is kinda fuzzy on this. It says that you will get GL_INVALID_VALUE if the level parameter is "larger than the maximum allowable level-of-detail". Exactly how this is defined is not stated.
The documentation for the function clears it up a bit, saying that it is the maximum possible number of LODs for the largest possible texture (GL_MAX_TEXTURE_SIZE). Other similar functions, like the glFramebufferTexture family, explicitly state this as the limit for GL_INVALID_VALUE. So I would expect that limit here as well.
Therefore, Mesa has a bug. However, you could work around this by assuming that either 0 or a GL_INVALID_VALUE error means you've walked off the end of the mipmap array.
That being said, I would suggest employing immutable texture storage (the glTexStorage2D family) and never having to ask the question again. This will forcibly prevent someone from setting MAX_LEVEL to a value that's too large. It's fairly new, from GL 4.2, but it's implemented (or will be very soon) across all non-Intel hardware that's still being supported.

It looks like there is currently no way to query how many mipmap levels a texture has, short of the OP's trial-and-error with Nicol Bolas's invalid-value check. In most cases the cost of that probe shouldn't matter, as long as the level-0 size doesn't change often.
However, assuming the texture does not have a limited number of levels, the specs give the preferred calculation (note the use of floor, and not ceiling as some examples give):
numLevels = 1 + floor(log2(max(w, h, d)))
What is the dimension reduction rule for each successively smaller mipmap level?
Each successively smaller mipmap level is half the size of the previous level, but if this half value is a fractional value, you should round down to the next largest integer.
...
Note that this extension is compatible with supporting other rules because it merely relaxes the error and completeness conditions for mipmaps. At the same time, it makes sense to provide developers a single consistent rule since developers are unlikely to want to generate mipmaps for different rules unnecessarily. One reasonable rule is sufficient and preferable, and the "floor" convention is the best choice.
[ARB_texture_non_power_of_two]
This can of course be verified with the OP's method, or, in my case, by receiving GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT from glFramebufferTexture2D(..., numLevels).

Assuming you're building mipmaps in the standard way, the number of unique images will be floor(log_2(max(width,height))) + 1 (note floor, not ceil; the two only agree for power-of-two dimensions). This can be derived by noticing that mipmaps halve the image size, rounding down, until a single pixel remains.


Does OpenGL have a default value for glStencilMask?

For interest's sake, I'm curious whether glStencilMask and glStencilMaskSeparate (and similar functions) have a default value, whether it's implementation-defined, or whether it's undefined.
I assume the wise thing to do is always set them from the get go, but I'm curious if they just "work" by coincidence or whether there is in fact a default value set.
Slightly related: I recall reading somewhere that on NVIDIA cards you don't have to set the active texture because it's zero by default, but AMD cards require you to set it or you can get junk results. This makes me wonder if the same applies here: whether the stencil state just happens to work for me by chance, and by not setting it I've been playing a dangerous game, or whether that isn't the case.
I looked through the OpenGL spec [section 17.4.2] for the definitions of these functions, but couldn't resolve the answer to my question.
The initial state of glStencilMask is clearly specified. Initially, the mask is all 1's.
OpenGL 4.6 API Core Profile Specification - 17.4.2 Fine Control of Buffer Updates; page 522:
void StencilMask( uint mask );
void StencilMaskSeparate( enum face, uint mask );
control the writing of particular bits into the stencil planes.
The least significant s bits of mask, where s is the number of bits in the stencil buffer, specify an integer mask. Where a 1 appears in this mask, the corresponding bit in the stencil buffer is written; where a 0 appears, the bit is not written.
[...]
In the initial state, the integer masks are all ones, as are the bits controlling depth value and RGBA component writing.

Storing OpenGL color attachments in constexpr GLenum array

I am using the following constexpr GLenum array to represent GL_COLOR_ATTACHMENTx (where x is an unsigned integer between 0 and 7):
constexpr std::array<GLenum, 8> opengl_color_attachment{
    GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2, GL_COLOR_ATTACHMENT3,
    GL_COLOR_ATTACHMENT4, GL_COLOR_ATTACHMENT5, GL_COLOR_ATTACHMENT6, GL_COLOR_ATTACHMENT7};
This works fine for only the first eight available color attachments (which the OpenGL specification states to be the minimum). However, there is a possibility of more attachments, which is implementation defined. As the macro GL_MAX_COLOR_ATTACHMENTS represents the number of attachments available, I wanted to edit this constexpr array to include ALL available attachments to the limit, instead of the minimum of 8.
I created the following macro in an attempt to solve this issue myself:
#define OPENGL_COLOR_ATTACHMENT(x) GL_COLOR_ATTACHMENT##x
Although I wanted to use this in a constexpr function to create the array in compile-time, it failed because preprocessor macros are obviously processed before compilation. Although the OpenGL standard guarantees that GL_TEXTURE1 == GL_TEXTURE0 + 1, I could not find such a reference for this macro, so I am unsure whether they are sequential in this case.
Is there a way for me to create the constexpr array fully from GL_COLOR_ATTACHMENT0 to GL_COLOR_ATTACHMENTx where x = GL_MAX_COLOR_ATTACHMENTS, with or without preprocessor macros?
As has been established, you cannot effectively use more than 32 attachments, because glFramebufferTexture doesn't accept anything except an enumerator. GL_COLOR_ATTACHMENT0 + 32 just so happens to be equal to GL_DEPTH_ATTACHMENT, so obviously the implementation cannot tell the difference between using a texture as the 33rd color attachment and as a depth attachment. It will assume the latter.
So really, just make an array of 32 attachments and move on. Or just use GL_COLOR_ATTACHMENT0 + i, where i is less than 32. The enumerators in the specification are indeed sequential; it's just that, unlike texture unit enums, nobody left any space for more than 32. You can even make a constexpr function to generate such values if you want.

Is the number of color attachments bounded by API

The OpenGL specification requires that a framebuffer support at least 8 color attachments. Now, OpenGL uses compile-time constants (at least on my system) for stuff like GL_COLOR_ATTACHMENTi, and GL_DEPTH_ATTACHMENT follows 32 units after GL_COLOR_ATTACHMENT0. Doesn't this mean that regardless of how beefy the hardware is, it will never be possible to use more than 32 color attachments? To clarify, this compiles perfectly with GLEW on Ubuntu 16.04:
static_assert(GL_COLOR_ATTACHMENT0 + 32==GL_DEPTH_ATTACHMENT,"");
and since it is a static_assert, this would be true for any hardware configuration (unless the driver installer modifies the header files, which would result in non-portable binaries). Wouldn't separate functions for different attachment classes have been better, as that would remove the possibility of colliding constants?
It is important to note the difference in spec language. glActiveTexture says this about its parameter:
An INVALID_ENUM error is generated if an invalid texture is specified.
texture is a symbolic constant of the form TEXTUREi, indicating that texture unit i is to be modified. Each TEXTUREi adheres to TEXTUREi = TEXTURE0 + i, where i is in the range zero to k−1, and k is the value of MAX_COMBINED_TEXTURE_IMAGE_UNITS
This text explicitly allows you to compute the enum value, explaining exactly how to do so and what the limits are.
Compare this to what it says about glFramebufferTexture:
An INVALID_ENUM error is generated if attachment is not one of the attachments in table 9.2, and attachment is not COLOR_ATTACHMENTm where m is greater than or equal to the value of MAX_COLOR_ATTACHMENTS.
It looks similar. But note that it doesn't have the language about the value of those enumerators. There's nothing in that description about COLOR_ATTACHMENTm = COLOR_ATTACHMENT0 + m.
As such, it is illegal to use any value other than those specific enums. Now yes, the spec does guarantee elsewhere that COLOR_ATTACHMENTm = COLOR_ATTACHMENT0 + m. But because the guarantee isn't in that section, that section explicitly prohibits the use of any value other than an actual enumerator. Regardless of how you compute it, the result must be an actual enumerator.
So to answer your question: at present, there are only 32 color attachment enumerators. Therefore, MAX_COLOR_ATTACHMENTS has an effective maximum value of 32.
The OpenGL 4.5 spec states in Section 9.2:
... by the framebuffer attachment points named COLOR_ATTACHMENT0 through COLOR_ATTACHMENTn. Each COLOR_ATTACHMENTi adheres to COLOR_ATTACHMENTi = COLOR_ATTACHMENT0 + i
and as a footnote
The header files define tokens COLOR_ATTACHMENTi for i in the range [0, 31]. Most implementations support fewer than 32 color attachments, and it is an INVALID_OPERATION error to pass an unsupported attachment name to a command accepting color attachment names.
My interpretation of this is that it is (as long as the hardware supports it) perfectly fine to use COLOR_ATTACHMENT0 + 32 and so on to address more than 32 attachment points. So there is no real limit on supported color attachments; it's just that the constants beyond 31 are not defined directly. Why it was designed that way can only be answered by people from the Khronos Group.

OpenGL mipmapping: level outside the range?

I'm going deeper on OpenGL texture mipmapping.
I noticed in the specification that mipmap levels less than zero or greater than log2(maxSize) + 1 appear to be allowed.
Effectively, TexImage2D doesn't specify errors for the level parameter. So presumably those mipmaps are not accessed automatically by the standard texture access routines...
How could this feature be used effectively?
For the negative case, glTexImage2D's man page says:
GL_INVALID_VALUE is generated if level is less than 0.
For the greater-than-log2(maxsize) case, the specification says what happens to those levels under Rasterization/Texturing/Texture Completeness. The short of it is that, yes, they are ignored.

GLSL break command

Currently I am learning how to create shaders in GLSL for a game engine I am working on, and I have a question regarding the language which puzzles me. I have learned that in shader versions lower than 3.0 you cannot use uniform variables in the condition of a loop. For example the following code would not work in shader versions older than 3.0.
for (int i = 0; i < uNumLights; i++)
{
    ...............
}
But isn't it possible to replace this with a loop over a fixed number of iterations, containing a conditional statement that breaks out of the loop once i reaches uNumLights? For example:
for (int i = 0; i < MAX_LIGHTS; i++)
{
    if (i >= uNumLights)
        break;
    ..............
}
Aren't these equivalent? Shouldn't the latter work in older versions of GLSL? And if so, isn't this more efficient and easier to implement than other techniques I have read about, like using a different version of the shader for each number of lights?
I know this might be a silly question, but I am a beginner and I cannot find a reason why this shouldn't work.
GLSL can be confusing insofar as for() suggests there must be a conditional branch, even when there isn't one because the hardware can't branch at all (the same applies to if()).
What really happens on pre-SM3 hardware is that the HAL inside your OpenGL implementation will completely unroll your loop, so there is no jump any more. That also explains why it has difficulty doing so with non-constants.
While it would be technically possible to do this with non-constants anyway, the implementation would have to recompile the shader every time you change that uniform, and it might run against the maximum instruction count if you're allowed to supply any haphazard number.
That is a problem because... what then? If you supply a too-big constant, you get a "too many instructions" compiler error when you build the shader. But if you supply a silly number in a uniform, and the HAL thus has to produce new code and runs against this limit, what can OpenGL do?
You most probably validated your program after compiling and linking, you most probably queried the shader info log, and OpenGL kept telling you that everything was fine. This is, in some way, a binding promise; it cannot just decide otherwise all of a sudden. Therefore, it must make sure that this situation cannot arise, and the only workable solution is to not allow uniforms in conditions on hardware generations that don't support dynamic branching.
Otherwise, there would need to be some form of validation inside glUniform that rejects bad values. But since that depends on successful (or unsuccessful) shader recompilation, it would have to run synchronously, which makes it a "no go" approach. Also, consider that GL_ARB_uniform_buffer_object is exposed on some SM2 hardware (for example, GeForce FX), which means you could throw a buffer object with unpredictable content at OpenGL and still expect it to work somehow! The implementation would have to scan the buffer's memory for invalid values after you unmap it, which is insane.
Similar to a loop, an if() statement does not branch on SM2 hardware, even though it looks like it. Instead, it will calculate both branches and do a conditional move.
(I'm assuming you are talking about pixel shaders).
The second variant is going to work only on a GPU that supports shader model >= 3, because dynamic branching (such as putting the variable uNumLights into an if condition) is likewise not supported below shader model 3.
Here you can compare what is and isn't supported between different shader models.
There is a fun workaround I just figured out. It seems stupid and I can't promise you that it's a healthy choice, but it appears to work for me right now:
Set your for loop to the maximum you allow, and put a condition inside the loop that skips the heavy routines once the counter goes beyond your uniform value.
uniform int iterations;

for (int i = 0; i < 10; i++) {
    if (i < iterations) {
        // do your thing...
    }
}