Does OpenGL have a default value for glStencilMask? - opengl

For interests sake, I'm curious if glStencilMask and glStencilMaskSeparate (and similar ones) have a default value, or if they're implementation defined, or if they're undefined.
I assume the wise thing to do is always set them from the get go, but I'm curious if they just "work" by coincidence or whether there is in fact a default value set.
Slightly related: I recall reading somewhere that on NVIDIA cards you don't have to set the active texture because it defaults to zero, whereas AMD cards require you to set it or you can get junk results. This makes me wonder whether the stencil state is the same kind of thing: has it just happened to work for me by chance, and have I been playing a dangerous game by not setting it, or is that not the case?
I looked through the OpenGL spec [section 17.4.2] for the definitions of these functions, but couldn't resolve the answer to my question.

The initial state of glStencilMask is clearly specified. Initially, the mask is all 1's.
OpenGL 4.6 API Core Profile Specification - 17.4.2 Fine Control of Buffer Updates; page 522:
void StencilMask( uint mask );
void StencilMaskSeparate( enum face, uint mask );
control the writing of particular bits into the stencil planes.
The least significant s bits of mask, where s is the number of bits in the stencil buffer, specify an integer mask. Where a 1 appears in this mask, the corresponding bit in the stencil buffer is written; where a 0 appears, the bit is not written.
[...]
In the initial state, the integer masks are all ones, as are the bits controlling depth value and RGBA component writing.
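If you want to confirm this at runtime, the current write masks can be queried. A minimal sketch, assuming a current context with an 8-bit stencil buffer:

GLint frontMask = 0, backMask = 0;
glGetIntegerv(GL_STENCIL_WRITEMASK, &frontMask);       // mask set by StencilMask / StencilMaskSeparate(FRONT, ...)
glGetIntegerv(GL_STENCIL_BACK_WRITEMASK, &backMask);   // mask set by StencilMaskSeparate(BACK, ...)
// On a freshly created context, both values should have all stencil bits set (e.g. 0xFF for 8 bits).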

Related

Is the number of color attachments bounded by API

The OpenGL specification requires that a framebuffer supports at least 8 color attachments. Now, OpenGL uses compile-time constants (at least on my system) for things like GL_COLOR_ATTACHMENTi, and GL_DEPTH_ATTACHMENT follows 32 units after GL_COLOR_ATTACHMENT0. Doesn't this mean that regardless of how beefy the hardware is, it will never be possible to use more than 32 color attachments? To clarify, this compiles perfectly with GLEW on Ubuntu 16.04:
static_assert(GL_COLOR_ATTACHMENT0 + 32 == GL_DEPTH_ATTACHMENT, "");
and since it is a static_assert, this would be true for any hardware configuration (unless the driver installer modifies the header files, which would result in non-portable binaries). Wouldn't separate functions for different attachment classes have been better, since that removes the possibility of colliding constants?
It is important to note the difference in spec language. glActiveTexture says this about its parameter:
An INVALID_ENUM error is generated if an invalid texture is specified.
texture is a symbolic constant of the form TEXTUREi, indicating that texture unit i is to be modified. Each TEXTUREi adheres to TEXTUREi = TEXTURE0 + i, where i is in the range zero to k−1, and k is the value of MAX_COMBINED_TEXTURE_IMAGE_UNITS
This text explicitly allows you to compute the enum value, explaining exactly how to do so and what the limits are.
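For example, the computed form is explicitly permitted here; a brief sketch where i and textures[] are just illustrative names:

glActiveTexture(GL_TEXTURE0 + i);            // valid as long as i is below the value of MAX_COMBINED_TEXTURE_IMAGE_UNITS
glBindTexture(GL_TEXTURE_2D, textures[i]);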
Compare this to what it says about glFramebufferTexture:
An INVALID_ENUM error is generated if attachment is not one of the attachments in table 9.2, and attachment is not COLOR_ATTACHMENTm where m is greater than or equal to the value of MAX_COLOR_ATTACHMENTS.
It looks similar. But note that it doesn't have the language about the value of those enumerators. There's nothing in that description about COLOR_ATTACHMENTm = COLOR_ATTACHMENT0 + m.
As such, it is illegal to use any value other than those specific enums. Now yes, the spec does guarantee elsewhere that COLOR_ATTACHMENTm = COLOR_ATTACHMENT0 + m. But because the guarantee isn't in that section, that section explicitly prohibits the use of any value other than an actual enumerator. Regardless of how you compute it, the result must be an actual enumerator.
So to answer your question: at present, there are only 32 color attachment enumerators. Therefore, MAX_COLOR_ATTACHMENTS has an effective maximum value of 32.
The OpenGL 4.5 spec states in Section 9.2:
... by the framebuffer attachment points named COLOR_ATTACHMENT0 through COLOR_ATTACHMENTn. Each COLOR_ATTACHMENTi adheres to COLOR_ATTACHMENTi = COLOR_ATTACHMENT0 + i
and as a footnote
The header files define tokens COLOR_ATTACHMENTi for i in the range [0, 31]. Most implementations support fewer than 32 color attachments, and it is an INVALID_OPERATION error to pass an unsupported attachment name to a command accepting color attachment names.
My interpretation of this is that it is perfectly fine (as long as the hardware supports it) to use COLOR_ATTACHMENT0 + 32 and so on to address more than 32 attachment points. So there is no real limit on the number of supported color attachments; the constants are just not defined directly. Why it was designed that way can only be answered by people from the Khronos Group.
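As a rough sketch of the portable approach either way: query the implementation limit and compute the attachment names by offset (textures[] here is an assumed array of previously created texture objects):

GLint maxAttachments = 0;
glGetIntegerv(GL_MAX_COLOR_ATTACHMENTS, &maxAttachments);
for (GLint i = 0; i < maxAttachments; ++i)
{
    // COLOR_ATTACHMENTi = COLOR_ATTACHMENT0 + i is guaranteed by section 9.2
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                           GL_TEXTURE_2D, textures[i], 0);
}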

What exactly does glStencilMask() do? [duplicate]

I'm a beginner at OpenGL and while learning about stenciling this one function has been troubling me (glStencilMask).
I've been told that it can be used to enable or disable stenciling. How is this?
Why are hexadecimal values passed into this function?
Why are the hex values 0xff and 0x00 often passed specifically?
Does this function prevent drawing to the color buffer and/or the stencil buffer?
Would you kindly explain what it's doing in simple terms?
Do you know how bitmasks work? That is what this is.
0xff is 11111111 in binary. That means GL can write to all 8 of the stencil bits.
0x00 is 00000000 in binary, and GL is not allowed to write to any bits when this mask is used.
Since the stencil buffer is effectively one large bitwise machine, it would serve you well to brush up on or learn these concepts in detail. If you are having trouble understanding why you would want to mask off certain bits, you may not be able to make effective use of the stencil buffer.
Masking off certain bits between passes will let you preserve the results stored in parts of the stencil buffer. Why you would want this is entirely application-specific, but this is how the stencil buffer works.
The stencil mask never disables the stencil buffer completely, you'd actually have to call glDisable (GL_STENCIL_TEST) for that. It simply enables or disables writes to portions of it.
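As a minimal sketch of how the mask is typically used across two passes (the reference value 1 and the draw calls are illustrative):

glEnable(GL_STENCIL_TEST);

// Pass 1: write 1s into the stencil buffer wherever the "mask" geometry is drawn.
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glStencilMask(0xFF);          // all 8 stencil bits are writable
// ... draw the mask geometry ...

// Pass 2: test against the stored values, but leave the stencil buffer untouched.
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilMask(0x00);          // stencil writes are disabled
// ... draw the geometry that should be clipped by the mask ...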
On a final note, if you disable GL_STENCIL_TEST or GL_DEPTH_TEST that actually does two things:
Disables the test
Disables writing stencil / depth values
So, if for some reason, you ever wanted to write a constant depth or stencil value and you assumed that disabling the test would accomplish that -- it won't. Use GL_ALWAYS for the test function instead of disabling the test if that is your intention.
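For instance, a sketch of writing a constant stencil value everywhere a primitive covers (the value 0x01 is just an example):

glEnable(GL_STENCIL_TEST);                 // the test must stay enabled or nothing is written
glStencilFunc(GL_ALWAYS, 0x01, 0xFF);      // GL_ALWAYS: every fragment passes
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE); // passing fragments replace the stored value
glStencilMask(0xFF);                       // all bits writable
// ... draw the geometry that should stamp 0x01 into the stencil buffer ...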

OpenGL - How is GLenum an unsigned 32-bit integer?

To begin there are 8 types of Buffer Objects in OpenGL:
GL_ARRAY_BUFFER​
GL_ELEMENT_ARRAY_BUFFER​
GL_COPY_READ_BUFFER
...
They are enums, or more specifically GLenums, where a GLenum is an unsigned 32-bit integer whose values can go up to about 4,294,967,295, so to say.
Most uses of buffer objects involve binding them to a certain target, e.g.:
glBindBuffer (GL_ARRAY_BUFFER, Buffers [size]);
[void glBindBuffer (GLenum target, GLuint buffer)] documentation
My question is: if it's an enum, its only values should be 0, 1, 2, 3, 4, ..., 7 respectively, so why go all the way and make it a 32-bit integer if it only has values up to 7? Pardon my knowledge of CS and OpenGL, it just seems unethical.
Enums aren't used just for the buffers, but everywhere a symbolic constant is needed. Currently, several thousand enum values are assigned (look into your GL.h and the latest glext.h). Note that vendors get allocated their official enum ranges so they can implement vendor-specific extensions without interfering with others, so a 32-bit enum space is not a bad idea. Furthermore, on modern CPU architectures, using less than 32 bits won't be any more efficient, so this is not a problem performance-wise.
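For illustration, a typical GL header defines the type and a few of these constants roughly like this (exact contents vary between headers):

typedef unsigned int GLenum;                  // 32-bit unsigned on common platforms
#define GL_ARRAY_BUFFER           0x8892
#define GL_ELEMENT_ARRAY_BUFFER   0x8893
#define GL_COPY_READ_BUFFER       0x8F36
// The buffer targets are not 0..7; they are symbolic constants scattered across the shared enum space.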
UPDATE:
As Andon M. Coleman pointed out, currently only 16-bit enumerant ranges are being allocated. It might be useful to link to the OpenGL Enumerant Allocation Policies, which also have the following remark:
Historically, enumerant values for some single-vendor extensions were allocated in blocks of 1000, beginning with the block [102000,102999] and progressing upward. Values in this range cannot be represented as 16-bit unsigned integers. This imposes a significant and unnecessary performance penalty on some implementations. Such blocks that have already been allocated to vendors will remain allocated unless and until the vendor voluntarily releases the entire block, but no further blocks in this range will be allocated.
Most of these seem to have been removed in favor of 16-bit values, but 32-bit values have been in use. In the current glext.h, one can still find some (obsolete) enumerants above 0xffff, like
#ifndef GL_PGI_misc_hints
#define GL_PGI_misc_hints 1
#define GL_PREFER_DOUBLEBUFFER_HINT_PGI 0x1A1F8
#define GL_CONSERVE_MEMORY_HINT_PGI 0x1A1FD
#define GL_RECLAIM_MEMORY_HINT_PGI 0x1A1FE
...
Why would you use a short anyway? What situation would you ever be in where you would even save more than 8 KB of RAM (if the reports of nearly a thousand GLenums are correct) by using a short or uint8_t instead of GLuint for enums and const declarations? Considering the potential hardware incompatibilities and cross-platform bugs you would introduce, it's odd to try to save something like 8 KB of RAM even in the context of the original 2 MB Voodoo 3D graphics hardware, much less the SGI supercomputer farms OpenGL was created for.
Besides, modern x86 and GPU hardware aligns on 32 or 64 bits at a time; you would actually stall the operation of the CPU/GPU, since 24 or 56 bits of the register would have to be zeroed out and THEN read/written, whereas it could operate on a standard int as soon as it was copied in. From the start of OpenGL, compute resources have tended to be more valuable than memory: while you might do billions of state changes during a program's life, you'd be saving about 10 KB of RAM at most if you replaced every 32-bit GLuint enum with a uint8_t one. I'm trying so hard not to be extra-cynical right now, heh.
For example, one valid reason for things like uint8_t and the like is for large data buffers/algorithms where the data fits in that bit depth. 1024 ints vs 1024 uint8_t variables on the stack is a difference of a few kilobytes; are we going to split hairs over that? Now consider a raw 4K bitmap image of 4000*2500 pixels at 32 bits per pixel: that's roughly 40 MB, and it would be 8 times the size if we used 64-bit-per-channel RGBA buffers in place of standard 8-bit-per-channel RGBA8 buffers, or quadruple the size with 32-bit-per-channel RGBA encoding. Multiply that by the number of textures open or pictures saved, and trading a bit of CPU work for all that extra memory makes sense, especially in the context of that type of work.
That is where using a non-standard integer type makes sense. Unless you're on a 64 KB machine or something (like an old-school beeper; good luck running OpenGL on that), trying to save a few bytes of memory on something like a const declaration or a reference counter is just wasting everyone's time.

How many mipmaps does a texture have in OpenGL

Nevermind that I'm the one who created the texture in the first place and I should know perfectly well how many mipmaps I loaded/generated for it. I'm doing this for a unit test. There doesn't seem to be a glGetTexParameter parameter to find this out. The closest I've come is something like this:
int max_level;
glGetTexParameteriv( GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, &max_level );
int max_mipmap = -1;
for ( int i = 0; i < max_level; ++i )
{
    int width;
    glGetTexLevelParameteriv( GL_TEXTURE_2D, i, GL_TEXTURE_WIDTH, &width );
    if ( 0 == width )
    {
        max_mipmap = i - 1;
        break;
    }
}
Anyhow, glGetTexLevelParameter() will return 0 width for a nonexistent mipmap if I'm using an NVidia GPU, but with Mesa, it returns GL_INVALID_VALUE, which leads me to believe that this is very much the Wrong Thing To Do.
How do I find out which mipmap levels I've populated a texture with?
The spec is kinda fuzzy on this. It says that you will get GL_INVALID_VALUE if the level parameter is "larger than the maximum allowable level-of-detail". Exactly how this is defined is not stated.
The documentation for the function clears it up a bit, saying that it is the maximum possible number of LODs for the largest possible texture (GL_MAX_TEXTURE_SIZE). Other similar functions like the glFramebufferTexture family explicitly state this as the limit for GL_INVALID_VALUE. So I would expect that.
Therefore, Mesa has a bug. However, you could work around this by assuming that either 0 or a GL_INVALID_VALUE error means you've walked off the end of the mipmap array.
That being said, I would suggest employing glTexStorage and never having to even ask the question again. This will forcibly prevent someone from setting MAX_LEVEL to a value that's too large. It's pretty new, from GL 4.2, but it's implemented (or will be very soon) across all non-Intel hardware that's still being supported.
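A short sketch of that approach, assuming GL 4.2+ (or ARB_texture_storage); the 256 x 256 size and 5-level count are just example values:

GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexStorage2D(GL_TEXTURE_2D, 5, GL_RGBA8, 256, 256);   // exactly 5 immutable levels: 256, 128, 64, 32, 16
// With GL 4.3+ the fixed level count can even be read back:
GLint levels = 0;
glGetTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_IMMUTABLE_LEVELS, &levels);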
It looks like there is currently no way to query how many mipmap levels a texture has, short of the OP's trial-and-error with @NicolBolas' invalid-value check. In most cases I guess the performance wouldn't matter, as long as the level 0 size doesn't change often.
However, assuming the texture does not have a limited number of levels, the specs give the preferred calculation (note the use of floor, and not ceiling as some examples give):
numLevels = 1 + floor(log2(max(w, h, d)))
What is the dimension reduction rule for each successively smaller mipmap level?
Each successively smaller mipmap level is half the size of the previous level, but if this half value is a fractional value, you should round down to the next largest integer.
...
Note that this extension is compatible with supporting other rules because it merely relaxes the error and completeness conditions for mipmaps. At the same time, it makes sense to provide developers a single consistent rule since developers are unlikely to want to generate mipmaps for different rules unnecessarily. One reasonable rule is sufficient and preferable, and the "floor" convention is the best choice.
[ARB_texture_non_power_of_two]
This can of course be verified with the OP's method, or in my case when I received a GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT with glFramebufferTexture2D(..., numLevels).
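A small helper implementing the "floor" rule above; the name numMipLevels is just illustrative:

#include <algorithm>
#include <cmath>

int numMipLevels(int w, int h, int d = 1)
{
    int largest = std::max({ w, h, d });
    return 1 + static_cast<int>(std::floor(std::log2(largest)));
}
// e.g. numMipLevels(4000, 2500) == 12, matching the chain 4000, 2000, 1000, ..., 3, 1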
Assuming you're building mipmaps in a standard way, the number of unique images will be something like ceil(log_2(max(width,height)))+1. This can be easily derived by noticing that mipmaps reduce image size by a factor of two each time until there is a single pixel.

OpenGL mipmapping: level outside the range?

I'm digging deeper into OpenGL texture mipmapping.
I noticed in the specification that mipmap levels less than zero and greater than log2(maxSize) + 1 seem to be allowed.
Effectively, TexImage2D doesn't specify errors for the level parameter. So... those mipmaps are probably not accessed automatically by the standard texture access routines...
How could this feature be used effectively?
For the negative case, the glTexImage2D's man page says:
GL_INVALID_VALUE is generated if level is less than 0.
For the greater-than-log2(maxsize) case, the specification says what happens to those levels in Rasterization/Texturing/Texture Completeness. The short of it is that, yes, they are ignored.