What exactly does glStencilMask() do? [duplicate] - c++

I'm a beginner at OpenGL and while learning about stenciling this one function has been troubling me (glStencilMask).
I've been told that it can be used to enable or disable stenciling; how is that so?
Why are hexadecimal values passed into this function?
Why are the hex values 0xff and 0x00 so often passed specifically?
Does this function prevent drawing to the color buffer and/or the stencil buffer?
Would you kindly explain what it's doing in simple terms?

Do you know how bitmasks work? That is what this is.
0xff is 11111111 in binary. That means GL can write to all 8 of the stencil bits.
0x00 is 00000000 in binary, and GL is not allowed to write to any bits when this mask is used.
Since the stencil buffer is effectively one large bitwise machine, it would serve you well to brush up on or learn these concepts in detail. If you are having trouble understanding why you would want to mask off certain bits, you may not be able to make effective use of the stencil buffer.
Masking off certain bits between passes will let you preserve the results stored in parts of the stencil buffer. Why you would want this is entirely application-specific, but this is how the stencil buffer works.
The stencil mask never disables the stencil buffer completely; you'd have to call glDisable (GL_STENCIL_TEST) for that. It simply enables or disables writes to portions of it.
On a final note, disabling GL_STENCIL_TEST or GL_DEPTH_TEST actually does two things:
Disables the test
Disables writing stencil / depth values
So, if for some reason, you ever wanted to write a constant depth or stencil value and you assumed that disabling the test would accomplish that -- it won't. Use GL_ALWAYS for the test function instead of disabling the test if that is your intention.
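To tie the pieces together, here is a minimal sketch of the classic two-pass pattern (e.g. drawing an outline) that this answer describes. drawScene () and drawOutline () are hypothetical placeholders for your own draw calls, and the context is assumed to have been created with a stencil buffer:
glEnable (GL_STENCIL_TEST);

// Pass 1: draw the object and write 1s into the stencil buffer.
glStencilFunc (GL_ALWAYS, 1, 0xFF);           // test always passes, reference value = 1
glStencilOp   (GL_KEEP, GL_KEEP, GL_REPLACE); // where we draw, replace the stencil value
glStencilMask (0xFF);                         // writes allowed to all 8 stencil bits
drawScene ();

// Pass 2: draw only where the stencil buffer is NOT 1, while preserving
// the values written in pass 1.
glStencilFunc (GL_NOTEQUAL, 1, 0xFF);
glStencilMask (0x00);                         // writes disabled, the test still runs
drawOutline ();

glStencilMask (0xFF);                         // restore the default write mask
Note that the write mask also applies to glClear (GL_STENCIL_BUFFER_BIT), which is another reason to restore it to 0xFF when you are done.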

Related

Does OpenGL have a default value for glStencilMask?

For interest's sake, I'm curious whether glStencilMask and glStencilMaskSeparate (and similar functions) have a default value, whether it's implementation-defined, or whether it's undefined.
I assume the wise thing to do is always set them from the get-go, but I'm curious whether they just "work" by coincidence or whether there is in fact a default value set.
Slightly related: I recall reading somewhere that on NVIDIA cards you don't have to set the active texture because it's zero by default, but AMD cards require you to set it or else you can get junk results. That makes me wonder whether this is the same kind of thing: does the stencil state just happen to work for me by chance, so that by not setting it I've been playing a dangerous game, or is that not the case?
I looked through the OpenGL spec [section 17.4.2] for the definitions of these functions, but couldn't resolve the answer to my question.
The initial state of glStencilMask is clearly specified. Initially, the mask is all 1's.
OpenGL 4.6 API Core Profile Specification - 17.4.2 Fine Control of Buffer Updates; page 522:
void StencilMask( uint mask );
void StencilMaskSeparate( enum face, uint mask );
control the writing of particular bits into the stencil planes.
The least significant s bits of mask, where s is the number of bits in the stencil buffer, specify an integer mask. Where a 1 appears in this mask, the corresponding bit in the stencil buffer is written; where a 0 appears, the bit is not written.
[...]
In the initial state, the integer masks are all ones, as are the bits controlling depth value and RGBA component writing.
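As a sanity check, you can also query the masks at run time. A small sketch, assuming a current GL context and <cstdio>:
GLint frontMask = 0, backMask = 0;
glGetIntegerv (GL_STENCIL_WRITEMASK, &frontMask);      // front-face stencil write mask
glGetIntegerv (GL_STENCIL_BACK_WRITEMASK, &backMask);  // back-face stencil write mask
printf ("stencil write masks: front = 0x%X, back = 0x%X\n", frontMask, backMask);
// On a freshly created context both masks report all stencil bits set (all 1s).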

OpenGL - How is GLenum an unsigned 32-bit integer?

To begin, there are 8 types of Buffer Objects in OpenGL:
GL_ARRAY_BUFFER
GL_ELEMENT_ARRAY_BUFFER
GL_COPY_READ_BUFFER
...
They are enums, or more specifically GLenums, where GLenum is an unsigned 32-bit integer that can hold values up to roughly 4,294,967,295.
Most uses of buffer objects involve binding them to a certain target, e.g.:
glBindBuffer (GL_ARRAY_BUFFER, Buffers [size]);
[void glBindBuffer (GLenum target, GLuint buffer)] documentation
My question is: if it's an enum whose only values are 0, 1, 2, 3 ... 7, why go all the way and make it a 32-bit integer when it only needs values up to 7? Pardon my knowledge of CS and OpenGL; it just seems wasteful.
Enums aren't used just for the buffers, but everywhere a symbolic constant is needed. Currently, several thousand enum values are assigned (look into your GL.h and the latest glext.h). Note that vendors get allocated their own official enum ranges so they can implement vendor-specific extensions without interfering with others, so a 32-bit enum space is not a bad idea. Furthermore, on modern CPU architectures, using fewer than 32 bits won't be any more efficient, so this is not a problem performance-wise.
UPDATE:
As Andon M. Coleman pointed out, currently only 16-bit enumerant ranges are being allocated. It might be useful to link to the OpenGL Enumerant Allocation Policies, which also contain the following remark:
Historically, enumerant values for some single-vendor extensions were allocated in blocks of 1000, beginning with the block [102000,102999] and progressing upward. Values in this range cannot be represented as 16-bit unsigned integers. This imposes a significant and unnecessary performance penalty on some implementations. Such blocks that have already been allocated to vendors will remain allocated unless and until the vendor voluntarily releases the entire block, but no further blocks in this range will be allocated.
Most of these seem to have been removed in favor of 16-bit values, but 32-bit values have been in use. In the current glext.h, one can still find some (obsolete) enumerants above 0xffff, like
#ifndef GL_PGI_misc_hints
#define GL_PGI_misc_hints 1
#define GL_PREFER_DOUBLEBUFFER_HINT_PGI 0x1A1F8
#define GL_CONSERVE_MEMORY_HINT_PGI 0x1A1FD
#define GL_RECLAIM_MEMORY_HINT_PGI 0x1A1FE
...
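To make the earlier point concrete: the constants are arbitrary symbolic values rather than a tight 0..7 range, and GLenum itself stays a 4-byte type. A small sketch, assuming a loader header such as glad or GLEW (or the platform GL headers) plus <cstdio> is included:
printf ("GL_ARRAY_BUFFER         = 0x%X\n", GL_ARRAY_BUFFER);         // 0x8892
printf ("GL_ELEMENT_ARRAY_BUFFER = 0x%X\n", GL_ELEMENT_ARRAY_BUFFER); // 0x8893
printf ("GL_COPY_READ_BUFFER     = 0x%X\n", GL_COPY_READ_BUFFER);
printf ("sizeof (GLenum)         = %zu bytes\n", sizeof (GLenum));    // 4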
Why would you use a short anyway? What situation would you ever be in where you would even save more than a few kilobytes of RAM (if the reports of nearly a thousand GLenums are correct) by using a short or uint8_t instead of GLuint for enums and const declarations? Considering the potential hardware incompatibilities and cross-platform bugs you would introduce, it's odd to try to save that little memory even in the context of the original 2 MB Voodoo 3D graphics hardware, much less the SGI supercomputer farms OpenGL was created for.
Besides, modern x86 and GPU hardware operates on 32 or 64 bits at a time; a narrower type can actually stall the CPU/GPU, because the remaining 24 or 56 bits of the register have to be zeroed out before the value is read or written, whereas a standard int can be operated on as soon as it is copied in. From the start of OpenGL, compute resources have tended to be more valuable than memory: you might perform billions of state changes over a program's life, yet you'd save perhaps 10 KB of RAM at most by replacing every 32-bit GLuint enum with a uint8_t one. I'm trying hard not to be extra-cynical right now, heh.
For example, one valid reason for types like uint8_t is large data buffers/algorithms where the data fits in that bit depth. 1024 ints vs. 1024 uint8_t variables on the stack is a difference of about 3 KB; are we going to split hairs over that? Now consider a raw 4000x2500 bitmap at 32 bits per pixel: that's roughly 40 MB, and it would be 8 times the size if we used 64-bit-per-channel RGBA buffers in place of standard 8-bit RGBA8 buffers, or quadruple the size with 32-bit-per-channel RGBA encoding (rough numbers in the sketch below). Multiply that by the number of textures open or pictures saved, and trading a few CPU operations for all that extra memory makes sense, especially for that type of work.
That is where using a non-standard integer type makes sense. Unless you're on a 64 KB machine or something (like an old-school beeper; good luck running OpenGL on that), trying to save a few bytes of memory on something like a const declaration or reference counter is just wasting everyone's time.
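Rough numbers for that image example (plain arithmetic, no GL involved; sizes are for a single image):
#include <cstdio>

int main ()
{
    const unsigned long long pixels = 4000ULL * 2500ULL;              // 10,000,000 pixels
    printf ("RGBA8  (8 bits/channel) : %llu bytes (~40 MB)\n",  pixels * 4 * 1);
    printf ("RGBA32 (32 bits/channel): %llu bytes (~160 MB)\n", pixels * 4 * 4);
    printf ("RGBA64 (64 bits/channel): %llu bytes (~320 MB)\n", pixels * 4 * 8);
    return 0;
}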

What is the purpose of the bit depths for the several components of the framebuffer in the glfwWindowHint function of GLFW3?

I would like to know what the following "framebuffer related hints" of the GLFW3 function glfwWindowHint do:
GLFW_RED_BITS
GLFW_GREEN_BITS
GLFW_BLUE_BITS
GLFW_ALPHA_BITS
GLFW_DEPTH_BITS
GLFW_STENCIL_BITS
What is the purpose of these? Are their default values usually enough?
You can use those to request that the OS give you a GL context with at least that many bits of r/g/b/alpha/depth/stencil. It may give you more.
If you don't set explicit values for each hint, GLFW falls back to its documented defaults.
Some (most? all?) GL implementations won't give you any alpha/depth/stencil bits unless you specifically ask for them.
You'll almost certainly get some color bits though.
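For illustration, a minimal sketch of requesting particular bit depths before creating the window with GLFW3 (the numbers are just one common configuration: 32-bit color with a 24-bit depth and 8-bit stencil buffer):
#include <GLFW/glfw3.h>

int main ()
{
    if (!glfwInit ())
        return -1;

    // Request at least this many bits per component; the driver may provide more.
    glfwWindowHint (GLFW_RED_BITS,      8);
    glfwWindowHint (GLFW_GREEN_BITS,    8);
    glfwWindowHint (GLFW_BLUE_BITS,     8);
    glfwWindowHint (GLFW_ALPHA_BITS,    8);
    glfwWindowHint (GLFW_DEPTH_BITS,   24);
    glfwWindowHint (GLFW_STENCIL_BITS,  8);

    GLFWwindow* window = glfwCreateWindow (640, 480, "Hints demo", nullptr, nullptr);
    if (!window)                      // no matching framebuffer configuration was available
    {
        glfwTerminate ();
        return -1;
    }
    glfwMakeContextCurrent (window);

    // ... render loop ...

    glfwTerminate ();
    return 0;
}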

OpenGL ES 2 glGetActiveAttrib and non-floats

I'm porting an engine from DX9/10/11 over to OpenGL ES 2. I'm having a bit of a problem with glGetActiveAttrib though.
According to the docs the type returned can only be one of the following:
The symbolic constants GL_FLOAT, GL_FLOAT_VEC2, GL_FLOAT_VEC3,
GL_FLOAT_VEC4, GL_FLOAT_MAT2, GL_FLOAT_MAT3, or GL_FLOAT_MAT4 may be
returned.
This seems to imply that you cannot have an integer vertex attribute? Am I missing something? Does this really mean you HAVE to implement everything as floats? Does this mean I can't implement a colour as 4 byte values?
If so, this seems very strange as this would be a horrific waste of memory ... if not, can someone explain where I'm going wrong?
Cheers!
Attributes must be declared as floats in the GLSL ES shader, but you can pass them shorts, bytes, or the other supported types listed in the glVertexAttribPointer documentation; the conversion happens automatically.
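For example, a colour can be stored as 4 unsigned bytes per vertex and fed to a float attribute like this (a sketch; "program" and the attribute name "a_color" are hypothetical, and a VBO holding tightly packed byte colours is assumed to be bound to GL_ARRAY_BUFFER):
// In the GLSL ES 2 shader the attribute is still declared as a float type:
//   attribute vec4 a_color;
GLint colorLoc = glGetAttribLocation (program, "a_color");
glEnableVertexAttribArray (colorLoc);
glVertexAttribPointer (colorLoc,
                       4,                    // R, G, B, A
                       GL_UNSIGNED_BYTE,     // stored as 4 bytes per vertex, not floats
                       GL_TRUE,              // normalized: 0..255 becomes 0.0..1.0 in the shader
                       4 * sizeof (GLubyte), // stride for tightly packed colours
                       (const void*) 0);     // offset into the bound buffer
Packing colours this way costs 4 bytes per vertex instead of 16, which addresses the memory concern in the question.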

What defines those OpenGL render dimension limits?

On my system, anything I draw with OpenGL outside the range of around (-32700, 32700) is not rendered (or is folded back into the range; I can't figure out which).
What defines those limits? Can they be modified?
Edit: Thanks all for pointing the right direction. It turned out my drawing code was using GLshort values. I replaced those by GLint values and I don't see those limits anymore.
I don't know what exactly you are doing, but this looks like a numeric overflow of a signed 16-bit integer (-32768..32767).
Are you calling glVertex3s to draw your vertices? As Malte Clasen pointed out, your vertices would overflow at 2^15-1.
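For illustration, with the legacy immediate-mode calls the question implies (the coordinates are made up):
glBegin (GL_POINTS);
// GLshort is a signed 16-bit integer, so 40000 is out of range; on typical
// two's-complement systems it wraps to 40000 - 65536 = -25536 before GL sees it.
glVertex3s ((GLshort) 40000, 0, 0);   // ends up near x = -25536
glVertex3i (40000, 0, 0);             // the 32-bit variant keeps the intended coordinate
glEnd ();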