Non-monotonic depth buffer; cyclical overlapping in OpenGL

Is it possible to implement custom depth buffer values so that I can implement a non-transitive ordering? E.g.
Red > Green
Green > Blue
Blue > Red
I imagine this would be implemented in the fragment shader or something similar, only writing the pixel if it is on top of the existing one according to the scheme above. Note that since the ordering is non-transitive, it is impossible to assign a single numeric value to each of the colors and retain the ordering.
Put another way, I would like a custom z-buffer where each element is of a non-transitive type. The z-buffer would be a large sheet full of Red, Green or Blue values; assign a number to each and then compare according to the scheme above to determine which one is on top. If all three colors land on the same fragment, I don't care which one ends up on top.
The image below illustrates what I mean (this is what I want to be able to do):
(image from Wikipedia)
Is what I want here possible in OpenGL or am I going too much against the grain? If it is possible, how severe would the reduction in performance be?

What you want to do is not only possible, it's relatively fast and simple. I've implemented something like this in my game in order to render the Penrose triangle.
The trick is to use a stencil buffer instead of the depth buffer. There are actually several different ways you can use the stencil buffer to achieve a circular ordering effect, depending on your exact needs. I'll describe one possible algorithm here:
Draw the red layer with no stencil test, while writing 1s to the stencil buffer.
Draw the green layer with stencil test not equal to 1, while writing 2s to the stencil buffer. This will prevent it from being drawn where red has already been drawn, effectively putting green "behind" red.
Draw the blue layer with stencil test not equal to 2. This will allow it to be drawn on top of red, but not green, effectively placing it between them.
The same technique generalizes to N layers like so:
- for each layer X going front-to-back, from `N - 1` down to `0`
- draw layer X with stencil test `not equal to X + 1`, while writing `X` to the stencil buffer
Because GL_REPLACE writes the reference value passed to glStencilFunc, the "test against X + 1 but write X" step takes two draws per layer: a color pass that applies the test, and a color-masked pass that marks the layer's footprint in the stencil buffer. With the stencil test enabled and the stencil buffer cleared to zero, code for this might look something like:
for (int X = N - 1; X >= 0; --X) {
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glStencilFunc(GL_NOTEQUAL, X + 1, 0xFF);           // hidden wherever layer X + 1 wrote X + 1
    // drawLayer(X);                                    // color pass
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);          // GL_REPLACE writes the reference value...
    glStencilFunc(GL_ALWAYS, X, 0xFF);                  // ...so mark this layer's footprint with X
    // drawLayer(X);                                     // stencil-only pass
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
}
This algorithm only guarantees that layer X will correctly interact with layers (X - 1) % N and (X + 1) % N. If you want more sophisticated rules, you will need a more sophisticated algorithm.
Note that with an 8-bit stencil buffer, you will be limited to 255 layers.
Also note that because this algorithm doesn't use the depth buffer, it works for 3D scenes as well - just enable the depth test like normal, and you're off to the races. That's how I'm able to render the Penrose triangle example in 3D.
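For the 3D case, the only extra state is the ordinary depth test. A minimal sketch (assuming the per-layer loop above follows) might be:
glEnable(GL_DEPTH_TEST);      // fragments within each layer still depth-test normally
glDepthFunc(GL_LESS);
glEnable(GL_STENCIL_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
// ...then run the per-layer stencil loop shown earlier.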

Related

OpenGL trim/inline contour of stencil

I have created a shape in my stencil buffer (black in the picture below). Now I would like to render to the backbuffer. I would like one texture on the outer pixels (say 4 pixels) of my stencil (red), and another texture on the remaining pixels.
I have read several solutions that involve scaling, but that will not work when there is no obvious center of the shape.
How do I acquire the desired effect?
The stencil buffer works well for operations that only involve the pixel a given fragment is being rendered onto. However, it's not so great for operations that require looking at pixels other than the one corresponding to the fragment being rendered. In order to do outlining, you have to ask about the values of neighboring pixels, which stencil operations don't allow.
So, if it is possible to put the stencil data you want to test against into a non-stencil-format image (i.e. a color image, perhaps with an integer texture format), that would make things much simpler. You can get the effect of stencil discarding by using discard directly in the fragment shader. Since you can fetch arbitrarily from the texture (as long as you're not trying to modify it), you can fetch neighboring pixels and test their values. You can use that to identify when a fragment is near a border.
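As a rough illustration of that idea (the texture names, the R8UI mask format and the 3x3 radius are assumptions, not something from the original answer), a fragment shader along these lines could do both the discard and the neighbor test:
#version 330 core
uniform usampler2D uMask;       // hypothetical integer mask: non-zero inside the shape
uniform sampler2D uFillTex;     // texture for the interior
uniform sampler2D uBorderTex;   // texture for the border pixels
in vec2 vUV;
out vec4 fragColor;
void main()
{
    ivec2 p = ivec2(gl_FragCoord.xy);
    if (texelFetch(uMask, p, 0).r == 0u)
        discard;                                    // emulates the stencil discard
    bool border = false;
    for (int dy = -1; dy <= 1; ++dy)                // widen the radius for a thicker border
        for (int dx = -1; dx <= 1; ++dx)
            if (texelFetch(uMask, p + ivec2(dx, dy), 0).r == 0u)
                border = true;
    fragColor = border ? texture(uBorderTex, vUV) : texture(uFillTex, vUV);
}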
However, if you're relying on specialized stencil operations to build the stencil data itself (like bitwise operations), then that's more complicated. You will have to employ stencil texturing operations, so you're going to have to render to an FBO texture that has a depth/stencil format. And you'll have to set it up to allow you to read from the stencil aspect of the texture. This is an OpenGL 4.3 feature.
This effectively converts it into an 8-bit unsigned integer texture, which allows you to play whatever games you need to. But if you want to use stencil tests to discard fragments while also sampling that image, you will need texture barrier functionality, since you would be reading from an image that's attached to the current FBO. You don't need to actually issue the barrier, because you should mask off stencil writing; you just need GL 4.5 or the NV/ARB_texture_barrier extension to be available, which is widely the case.
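For illustration, the GL 4.3 setup might look roughly like this (the texture name and the width/height variables are placeholders):
GLuint depthStencilTex;
glGenTextures(1, &depthStencilTex);
glBindTexture(GL_TEXTURE_2D, depthStencilTex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH24_STENCIL8, width, height);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_STENCIL_TEXTURE_MODE, GL_STENCIL_INDEX);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, depthStencilTex, 0);
// Bind it to a usampler2D in the fragment shader and texelFetch() the 8-bit stencil
// values; keep glStencilMask(0x00) set while sampling so the stencil stays read-only.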
Either way, the biggest difficulty is going to be varying the size of the border. It is easy to test the 3x3 neighborhood of a pixel to see whether it sits at a border, but the larger the border, the larger the neighborhood each fragment has to test. At that point, I would suggest looking for a different solution, one based on some knowledge of what pattern is being written into the stencil buffer.
That is, if the rendering operation that lays down the stencil has some knowledge of the shape, then it could compute a distance to the edge of the shape in some way. This might require constructing the geometry in a way that it has distance information in it.

Using depth buffer for layering 2D sprites

I'm making a 2D game using OpenGL. I want to do the drawing like this: first I copy the vertex data of all objects I want to draw into VBOs (one VBO per texture/shader), then draw each VBO in a separate draw call. It seemed like a good idea until I realized it would mess up the drawing order - the draw calls won't necessarily happen in the order the objects were loaded into the VBOs. I thought of using the depth buffer to sort items - every new object to draw would get a slightly higher Z position. The question is, how much should I increment it by to avoid problems? AFAIK there can be two kinds of problems: if I make it too large, I will have a limited number of objects I can draw in a single frame; and if I make it too small, the precision loss of the depth buffer might make overlapping images be drawn in the wrong order. To summarize:
1) What should the front and back values of my orthographic projection be? 0 to 1? -1 to 1? 1 to 2? Does it matter?
2) If I use nextafter() for incrementing the Z position, what kind of trouble can I run into? How do OpenGL and the depth buffer react to subnormal floats? If I started with std::numeric_limits<float>::min() and ended at 1, is there anything else I should worry about?
First and foremost, you need to know the bit-depth of your depth buffer. Generally the depth buffer is fixed-point, either 16-, 24- or 32-bit.
Given a fixed-point depth buffer and the default depth range [0,1] you can make every integer value represent a uniquely distinguishable depth by using an orthographic projection matrix with 0.0 for nearVal and:
16-bit: farVal = 65535.0
24-bit: farVal = 16777215.0 // Most Common Configuration
32-bit: farVal = 4294967295.0
Then, you can assign your layered sprites up to farVal + 1 different depths (always use an integer value for the sprite depth and begin with 0) without worrying about the depth buffer being unable to distinguish between the layers. In other words, the precision of your depth buffer dictates the maximum number of layers you can have.
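A minimal fixed-function-style sketch for the 24-bit case (the window size and the sprite-drawing call are placeholders) might be:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, windowWidth, windowHeight, 0.0, 0.0, 16777215.0);   // nearVal = 0, farVal = 2^24 - 1
glEnable(GL_DEPTH_TEST);
// Give each sprite an integer layer index and place it at z = -layer;
// with this projection, layer k maps to depth value k.
// drawSprite(x, y, /* z = */ -(float)layer);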

How to avoid glitches on superposed objects with OpenGL?

I would like, for example, to stack two cubes A and B.
The problem is that the top face of A is in exactly the same position as B's bottom face.
This causes some visual glitches, as you can see:
Note that the pink grid can sometimes be seen through either cube at certain angles as well (which is not expected).
Is there any way to fix this without offsetting all my objects?
This is called depth fighting or Z-fighting. After projection, the depth values are subject to rounding, so when depth testing happens, the winner of the depth test depends on how the depth values of the participating fragments were rounded.
Is there any way to fix this without offsetting all my objects?
Yes, there are some techniques using the stencil buffer, with the caveat that they only work for convex geometry. First you render your overlapping objects with depth testing and depth writes enabled, but without color writes, while writing a stencil mask. In the next pass you enable back-face culling and draw with the depth test disabled, the stencil test enabled (passing on the stencil value you wrote), and color writes enabled. Within the region of the stencil mask, things will overdraw according to the painter's algorithm (i.e. the layers appear in the order they are drawn).
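A rough sketch of those two passes for the two stacked cubes (the draw calls and the reference value 1 are placeholders) could look like:
// Pass 1: depth only, lay down the stencil mask over the overlapping objects.
glEnable(GL_DEPTH_TEST);
glEnable(GL_STENCIL_TEST);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
// drawCubeA(); drawCubeB();
// Pass 2: color only, painter's algorithm inside the masked region.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glEnable(GL_CULL_FACE);                    // back-face culling, required for the convex objects
glDisable(GL_DEPTH_TEST);
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
// drawCubeA(); drawCubeB();               // the later draw wins where faces coincide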

Off-screen multiple render targets using Frame Buffer Object (FBO) or?

Situation: generating N samples of a shape and their corresponding edges (using a Sobel filter or my own) with different transformations and rotations, while the viewport (size = 600*600) and the camera remain constant. I.e., there will be N samples + N corresponding edges.
I am thinking of doing it like this:
Use one FBO with 2 renderbuffers [i.e. the size of each buffer will be (N * 600) * 600] - the 1st for the N shapes and the 2nd for the edges of the corresponding shapes.
Questions:
What is the best way to achieve the above?
Though the viewport size is 600*600 pixels, each shape will only occupy around 50*50 pixels. Is there an efficient way to apply edge detection only to the bounding box/AABB region in the 2nd buffer? And is there an efficient way to read back only the 2N bounding boxes (N samples + N corresponding edges)?
1: I'm not sure what you'd call the "best way". Use Multiple Render Targets: create two (N * 600) * 600 textures, attach them both to the FBO, select them with glDrawBuffers, and in your fragment shader do something like this:
layout(location = 0) out vec3 color;
layout(location = 1) out vec3 edges;
When writing to "color" and "edges", you'll effectively write in your textures.
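For illustration, the host-side setup matching those two outputs might look roughly like this ("texColor" and "texEdges" are hypothetical names for the two (N * 600) * 600 textures):
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texColor, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, texEdges, 0);
const GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs);   // location 0 -> texColor, location 1 -> texEdges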
2: You shouldn't do this. Compute your bounding boxes on the CPU and project them (i.e. multiply each corner by your ModelViewProjection matrix) to get the bounding boxes in 2D.
By the way: compute your bounding boxes first, so that you won't need 600*600 textures but only 50*50 ones...
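A rough sketch of that CPU-side projection (bmin/bmax and the column-major mvp array are hypothetical variables; the 600x600 viewport comes from the question):
float xmin = 1e30f, ymin = 1e30f, xmax = -1e30f, ymax = -1e30f;
for (int i = 0; i < 8; ++i) {
    float v[4] = { (i & 1) ? bmax[0] : bmin[0],
                   (i & 2) ? bmax[1] : bmin[1],
                   (i & 4) ? bmax[2] : bmin[2], 1.0f };
    float c[4];
    for (int r = 0; r < 4; ++r)                     // column-major mat4 * vec4
        c[r] = mvp[0*4+r]*v[0] + mvp[1*4+r]*v[1] + mvp[2*4+r]*v[2] + mvp[3*4+r]*v[3];
    float sx = (c[0] / c[3] * 0.5f + 0.5f) * 600.0f;
    float sy = (c[1] / c[3] * 0.5f + 0.5f) * 600.0f;
    if (sx < xmin) xmin = sx;  if (sx > xmax) xmax = sx;
    if (sy < ymin) ymin = sy;  if (sy > ymax) ymax = sy;
}
// (xmin, ymin)-(xmax, ymax) is the screen-space AABB to read back with glReadPixels.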
EDIT: You usually restrict the drawn zone with glViewport. But there is only one viewport, and you need several. You can try the viewport array extension and live on the bleeding edge, or pass the AABBs in a texture, or not worry about it until performance matters...
Oh, and you can't use Sobel just like that... Sobel requires reading all the surrounding texels, which is not possible while you're still rendering those texels. Either make a two-pass algorithm without MRTs (first color, then edges), or don't use Sobel and guess your edges in the shader (I don't really see how).
Like Calvin said, you have to first render your object into the first framebuffer and then bind this as a texture (use a texture attachment rather than a renderbuffer) for the second pass that finds the edges, as edge detection usually needs access to a pixel's surrounding pixels.
Regarding your second question, you could probably use the stencil buffer. Just draw your shapes in the first pass and let them write a reference value into the stencil buffer. Then do the edge detection (usually by rendering a screen-sized quad with the corresponding fragment shader) and configure the stencil test to only pass where the stencil buffer contains the reference value. This way (assuming early-z hardware, which is quite common now) the fragment shader will only be executed on the pixels the shape has actually been drawn onto.
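A sketch of that stencil configuration (the reference value 1 and the draw calls are placeholders) might look like:
// Pass 1: draw the shapes, writing 1 into the stencil buffer wherever they cover a pixel.
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
// drawShapes();
// Pass 2: run the edge-detection shader on a screen-sized quad, but only where
// the stencil buffer holds the reference value.
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glStencilMask(0x00);   // don't modify the stencil during this pass
// drawFullscreenQuad();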

How does one use clip() to perform alpha testing?

This is an HLSL question, although I'm using XNA if you want to reference that framework in your answer.
In XNA 4.0 we no longer have access to DX9's AlphaTest functionality.
I want to:
Render a texture to the backbuffer, only drawing the opaque pixels of the texture.
Render a texture, whose texels are only drawn in places where no opaque pixels from step 1 were drawn.
How can I accomplish this? If I need to use clip() in HLSL, how do I check the stencil buffer that was drawn to in step 1 from within my HLSL code?
So far I have done the following:
_sparkStencil = new DepthStencilState
{
    StencilEnable = true,
    StencilFunction = CompareFunction.GreaterEqual,
    ReferenceStencil = 254,
    DepthBufferEnable = true
};
DepthStencilState old = gd.DepthStencilState;
gd.DepthStencilState = _sparkStencil;
// Only opaque texels should be drawn.
DrawTexture1();
gd.DepthStencilState = old;
// Texels that were rendered from texture1 should
// prevent texels in texture 2 from appearing.
DrawTexture2();
Sounds like you want to draw only the pixels that are within epsilon of full alpha (1.0, or 255) the first time, and the second time draw only where no such near-opaque pixels were drawn.
I'm not a graphics expert and I'm operating on too little sleep, but you should be able to get there from here through an effect script file.
To write to the stencil buffer you must create a DepthStencilState that writes to the buffer, then draw any geometry that is to be drawn to the stencil buffer, then switch to a different DepthStencilState that uses the relevant CompareFunction.
If there is some limit on which alpha values are to be drawn to the stencil buffer, then use a shader in the first pass that calls the clip() intrinsic on floor(alpha - val), where val is a number in (0, 1) that limits the alpha values drawn; floor(alpha - val) evaluates to 0 when alpha >= val and to -1 otherwise, so clip() discards the fragments whose alpha falls below val.
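For illustration, a hypothetical first-pass pixel shader along those lines (the sampler name and the threshold value are not from the original answer) might be:
sampler TextureSampler : register(s0);
float Threshold = 0.95f;                  // "val": texels with alpha below this are discarded

float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR0
{
    float4 color = tex2D(TextureSampler, texCoord);
    // floor(a - Threshold) is 0 when a >= Threshold and -1 otherwise,
    // so clip() discards the non-opaque texels.
    clip(floor(color.a - Threshold));
    return color;
}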
I have written a more detailed answer here:
Stencil testing in XNA 4