OpenGL: Clear the Stencil buffer, except for certain bits? - c++

I'm using the stencil buffer for two jobs. The first is to allow masking to happen, and the second is to write masks for objects that can be 'seen' through. In this particular case, the reserved bit is 0x80, the highest bit in the byte, with the rest left for regular masking.
The first purpose requires that the buffer be cleared after around 127 uses, or else past stencils will become "valid" again when testing, since the value must wrap back to 1. The second purpose requires the reserved bits in the buffer to stay alive through the entire frame.
Is there any way to clear the entire stencil buffer, while keeping the reserved bits set?

Your theory in the comment is correct. glStencilMask() is applied to the values written by glClear() and glClearBuffer().
From section "17.4.3 Clearing the Buffers" in the OpenGL 4.5 spec (emphasis added):
When Clear is called, the only per-fragment operations that are applied (if enabled) are the pixel ownership test, the scissor test, sRGB conversion (see section 17.3.9), and dithering. The masking operations described in section 17.4.2 are also applied.
Where section 17.4.2 is titled "Fine Control of Buffer Updates", and includes the documentation of glStencilMask(). For glStencilMaskSeparate(), which is a more general version of glStencilMask(), it even says explicitly:
Fragments generated by front-facing primitives use the front mask and fragments generated by back-facing primitives use the back mask (see section 17.3.5). The clear operation always uses the front stencil write mask when clearing the stencil buffer.
So to clear the bottom 7 bits of the stencil buffer, you can use:
glStencilMask(0x7f);
glClear(GL_STENCIL_BUFFER_BIT);

Related

Clearing FBO depth buffer without considering glDepthMask

I'm currently writing a render pass which renders to a framebuffer with attached color and depth attachments. Before the pass starts, it needs to clear the depth attachment to a set value. I'm trying to accomplish this with glClearNamedFramebufferfv(fbo, GL_DEPTH, 0, &depth_value). However, this only seems to work if glDepthMask was set to true beforehand (i.e. not set to false).
I find it a bit weird to have the clear operation depend on global pipeline state (or perhaps I have worked with Vulkan a bit too long before this task), so I'd first like to ask whether this is indeed how it works (the spec says "enable or disable writing into the depth buffer", not just rendering, so it seems to be intended).
The second question would then be whether there are alternatives that don't rely on/respect this global flag. Since the underlying FBO attachment is a texture and not a renderbuffer, could I use glClearTexImage instead, or does this also respect glDepthMask? Are there performance costs when clearing a texture like this instead of via the framebuffer?
Thank you in advance
Yes, the OpenGL standard explicitly requires that masks are applied to framebuffer clearing operations. This is still true for the DSA-style functions. This is mainly because they inherit their functionality from the non-DSA equivalents. glClearNamedFramebufferfv is conceptually equivalent to glClearBufferfv, and that function uses masks. Therefore, the "named" equivalent uses masks.
Texture clearing does not use the mask state. However, it is also not a framebuffer operation. As such, it is possible that this clear will be performed in a more inefficient manner than using a proper framebuffer clear.
So it'd be better to just reset the mask first.

Why sometimes we use specific bits to do stencil test?

I am not clear on why specific bits of the stencil buffer are sometimes chosen for the stencil test. I cannot find examples that test only, say, bits 1, 3, and 5 of a stencil buffer.
The reason is not especially interesting. The stencil buffer usually contains 8 bits per sample, and you're free to use those 8 bits however you like in your application. So the meaning of those bits is up to you.
Often they're used for doing volume intersection tests, such as shadow volumes for stencil shadows (a technique popular circa 2005), where you might use the stencil buffer as a counter. Another example is deferred lighting, where you use a single bit in the stencil buffer to track which pixels are affected by a particular light.
So if you store "this pixel is affected by light #3" in bit 1, then you test bit 1 when you're rendering light #3. It's all up to the application developer.

Do we need to clear the buffer if we use double buffering?

Let's say we use double buffering. We first write the frame into the back buffer, then it is swapped into the front buffer to be displayed.
There are two scenarios here, which I assume have the same outcome.
Assume we clear the back buffer, then write a new frame to the back buffer and swap it into the front buffer.
Now assume we didn't clear the back buffer; the back buffer will be overwritten with a new frame anyway. Lastly, both buffers will be swapped.
Thus, assuming I was right, and provided we use double buffering, clearing or not clearing the buffer will end up with the same display. Is that true?
Will there be any possible rendering artifacts, if we didn't clear the buffer?
The key to the second approach is this assumption:
the back buffer will be overwritten with a new frame
I assume we are talking about an OpenGL framebuffer, which contains color values, depth, stencil, etc. How exactly will they be overwritten in the next frame?
Rendering code constantly does depth comparisons to see which objects need to be drawn. With the previous frame's depth data still present, the results will be all messed up. The same happens if you render any semi-transparent items with blending enabled.
Clearing the buffer is the fastest way to reset everything to ground zero (or any other specific value you need).
There are techniques that rely on the buffer not being cleared (assuming clearing has been verified to be a costly operation on a given platform). For example: never drawing transparent geometry without opaque geometry behind it, and toggling the depth test between less/greater over the 0-0.5 / 0.5-1.0 ranges so that new values always overwrite the old frame's values.
During rendering you depend on the fact that at least the depth buffer is cleared.
When double buffering, the contents of the back buffer will (possibly) be what you rendered 2 frames ago.
If the depth buffer is not cleared then that wall you planted your face on will never go away.
The depth buffer can be cleared by for example rendering a full screen quad textured with your skybox while the depth test is disabled.
Clearing of the buffers is absolutely essential if you like performance on modern hardware. Clearing buffers doesn't necessarily write to memory. It instead does some cache magic such that, whenever the system tries to read from the memory (if it hasn't been written to since it was cleared), it will read the clear color. So it won't even really access that memory.
This is very important for things like the depth buffer. Depth tests/writes are a read/modify/write operation. The first read will essentially be free.
So while you do not technically need to clear the back buffers if you're going to overwrite every pixel, you really should.
After buffer swap the contents of the back buffer are undefined, i.e. they could be anything. Since many OpenGL rendering operations depend on a well known state of the destination frame buffer to work properly (depth testing, stencil testing, blending) the back buffer has to be brought into a well known state before doing anything else.
Hence, unless you take careful measures to make sure your rendering operations do not depend on the destination buffer's contents, you'll have to clear the back buffer after a swap before doing anything else.

OpenGL: Acquiring only a stencil buffer and no depth buffer?

I would like to acquire a stencil buffer but not suffer the overhead of an attached depth buffer, if that's possible, since I wouldn't be using it. Most of the resources I've found suggest that the stencil buffer is optional (excluded in favour of more depth buffer precision, for example), but I have not seen any code that requests, and successfully gets, only an 8-bit stencil buffer. The most common configuration I've seen is a 24-bit depth buffer with an 8-bit stencil buffer.
Is it possible to request only a stencil buffer with a color buffer?
If it is possible, Is it likely the request would be granted by most OpenGL implementations?
The OpenGL version I'm using is 2.0
edit:
The API I'm using to call OpenGL is SFML, which normally doesn't support stencil allocation for its FBO wrapper objects, though it allows it for the display surface's framebuffer. I edited the functionality in myself, though that's where I'm stuck.
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH24_STENCIL8_EXT, width, height);
This line decides the storage type I assume. However, GL_DEPTH24_STENCIL8_EXT is the only define I've found that specifies a stencil buffer's creation. (there's no GL_STENCIL8 or anything similar at least)
Researching GL_STENCIL_INDEX8, which was mentioned in the comments, I came across the following line in the official OpenGL wiki, http://www.opengl.org/wiki/Framebuffer_Object_Examples#Stencil
NEVER EVER MAKE A STENCIL buffer. All GPUs and all drivers do not support an independent stencil buffer. If you need a stencil buffer, then you need to make a Depth=24, Stencil=8 buffer, also called D24S8.
Stress testing the two different allocation schemes, GL_STENCIL_INDEX8_EXT vs GL_DEPTH24_STENCIL8_EXT, the results were roughly equal, both in terms of memory usage and performance. I suspect that the stencil buffer was padded out with 24 bits anyway. So for the sake of portability, I'm going to just use the packed depth-and-stencil scheme.

Stencil Buffer & Stencil Test

According to the books I've read, the stencil test is performed by comparing a reference value with the stencil buffer value corresponding to a pixel. However, one of the books states that:
A mask is bitwise AND-ed with the value in the stencil planes and with the reference value before the comparison is applied
Here I see a third parameter, the mask. Is this mask related to the stencil buffer, or is it another parameter generated by OpenGL itself?
Can someone explain the comparison process and the values that play a role in it?
glStencilMask (...) is used to enable or disable writing to individual bits in the stencil buffer. To make the number of parameters manageable and accommodate stencil buffers of varying bit-depth, it takes a GLuint instead of individual GLbooleans like glColorMask (...) and glDepthMask (...).
Typically the stencil buffer is 8-bits wide, though it need not be. The default stencil mask is such that every bit-plane is enabled. In an 8-bit stencil buffer, this means the default mask is 0xff (11111111b). Additionally, stenciling can be done separately for front/back facing polygons in OpenGL 2.0+, so there are technically two stencil masks.
In your question, you are likely referring to glStencilFunc (...), which also has a mask. This mask is not associated with the stencil buffer itself, but with the actual stencil test. The principle is the same, however; the glStencilFunc documentation details how this mask is AND'd together with the ref. value during testing.
A mask is an optional extra that sits between what you render and what actually gets written.
Imagine you have a scene being rendered and you suddenly decide that you don't want any red being used by a certain object. You can use the mask to apply a bitwise operation to every pixel that object affects to remove the red values.
r:150 b:50 g:47 becomes r:0 b:50 g:47
r:13 b:255 g:255 becomes r:0 b:255 g:255
etc.
http://www.opengl.org/sdk/docs/man3/xhtml/glColorMask.xml should help explain it a bit more.