Stencil buffer test settings DirectX9 - hlsl

I have a function that renders a triangle of a desired color. I am trying to render a yellow triangle and then a red triangle over it with the stencil test enabled. I am using a circle as my stencil window. What should my stencil test comparisons and operations be to get the output below? All rendering is in DirectX 9 only.
Desired output
Kindly point me to a few good and simple examples for the APIs below:
SetRenderState(D3DRS_STENCILFUNC,
SetRenderState(D3DRS_STENCILREF,
SetRenderState(D3DRS_STENCILMASK,
SetRenderState(D3DRS_STENCILWRITEMASK,
SetRenderState(D3DRS_STENCILZFAIL,
SetRenderState(D3DRS_STENCILFAIL,
SetRenderState(D3DRS_STENCILPASS,
How do we use stencil operations in a DirectX 9 shader effect file (vs_3_0 and ps_3_0)?

The documentation of the render states should answer most of your related questions.
To create the stencil mask, you need the calls
SetRenderState(D3DRS_STENCILZFAIL,D3DSTENCILOP_KEEP)
SetRenderState(D3DRS_STENCILFAIL,D3DSTENCILOP_INCRSAT)
SetRenderState(D3DRS_STENCILPASS,D3DSTENCILOP_INCRSAT)
SetRenderState(D3DRS_STENCILFUNC,D3DCMP_ALWAYS)
because they increment the stencil buffer while rendering your circle. Then you draw your yellow triangle without using the stencil buffer. After that you draw the red triangle with
SetRenderState(D3DRS_STENCILZFAIL,D3DSTENCILOP_KEEP)
SetRenderState(D3DRS_STENCILFAIL,D3DSTENCILOP_KEEP)
SetRenderState(D3DRS_STENCILPASS,D3DSTENCILOP_KEEP)
SetRenderState(D3DRS_STENCILFUNC,D3DCMP_LESS)
SetRenderState(D3DRS_STENCILREF,0)
so that your stencil test passes only where your circle was drawn before (there the stencil value should be greater than 0; D3DCMP_LESS passes when the reference value 0 is less than the buffered value). If nothing is drawn properly after that, try deactivating the Z-test; maybe the order of your triangles isn't right.
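Putting those steps together, a minimal sketch of the three passes could look like this (assuming `device` is a valid `IDirect3DDevice9*` and `DrawCircle`/`DrawTriangle` are your own drawing helpers):

```cpp
// Pass 1: write the circle into the stencil buffer only.
device->SetRenderState(D3DRS_STENCILENABLE, TRUE);
device->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_ALWAYS);
device->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_INCRSAT);
device->SetRenderState(D3DRS_STENCILFAIL, D3DSTENCILOP_KEEP);
device->SetRenderState(D3DRS_STENCILZFAIL, D3DSTENCILOP_KEEP);
// Disable color writes so the circle itself stays invisible.
device->SetRenderState(D3DRS_COLORWRITEENABLE, 0);
DrawCircle();

// Pass 2: the yellow triangle ignores the stencil buffer.
device->SetRenderState(D3DRS_COLORWRITEENABLE, 0x0000000F);
device->SetRenderState(D3DRS_STENCILENABLE, FALSE);
DrawTriangle(yellow);

// Pass 3: the red triangle passes the stencil test only inside the circle.
device->SetRenderState(D3DRS_STENCILENABLE, TRUE);
device->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_LESS);
device->SetRenderState(D3DRS_STENCILREF, 0);
device->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_KEEP);
device->SetRenderState(D3DRS_STENCILFAIL, D3DSTENCILOP_KEEP);
device->SetRenderState(D3DRS_STENCILZFAIL, D3DSTENCILOP_KEEP);
DrawTriangle(red);
```

Remember to request a stencil-capable depth format (e.g. D3DFMT_D24S8) when creating the device, and to clear the stencil buffer with D3DCLEAR_STENCIL each frame.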
How do we use stencil operations in a DirectX 9 shader effect file (vs_3_0 and ps_3_0)?
Stencil operations are configured from your main program code (or as render states in an effect pass), not inside vertex or pixel shader code; the shaders themselves cannot affect the stencil test.
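That said, a D3DX effect file can set these render states per pass, even though the stencil test itself still runs in the fixed pipeline. A hedged sketch, assuming `VS`/`PS` are your existing shader entry points and using D3DX effect-state naming:

```hlsl
technique DrawRedClipped
{
    pass P0
    {
        // Stencil render states set from the .fx pass, not from shader code.
        StencilEnable = true;
        StencilFunc   = Less;
        StencilRef    = 0;
        StencilPass   = Keep;
        StencilFail   = Keep;
        StencilZFail  = Keep;

        VertexShader = compile vs_3_0 VS();
        PixelShader  = compile ps_3_0 PS();
    }
}
```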

Related

Fragments with different depth are blended

I am wondering about some rendering behavior. I am rendering a text mesh in orange and a rectangle mesh in blue, using the OpenGL pipeline.
The depth function is set to GL_LESS, and the meshes are positioned such that all fragments of the rectangle have slightly larger depth values than the text fragments (due to resolution and tolerance errors).
I assumed that all text fragments should be discarded (given the rectangle's depth values), but the final image contains a blend of rectangle and text (blending is also disabled):
By the way, RenderDoc's Texture view shows a rectangle without the text overlay, as expected, but in the preview the text is visible. This is also something I would like to understand. Am I missing some pipeline post-processing stages that are in action?
Blending may not work as you expect when the depth test is enabled. The depth test discards fragments before they can be blended. If both the depth test and blending are enabled, only a fragment that survives the depth test is blended, so the result depends on the order in which the objects are drawn.
To achieve correct blending of objects at different depths, draw all objects in order from back to front (depth sorting). Since the objects are then drawn from back to front, the depth test is not needed at all.
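The back-to-front ordering mentioned above can be done on the CPU before issuing draw calls. A minimal sketch, where the `Mesh` struct and its precomputed camera distance are illustrative assumptions:

```cpp
#include <algorithm>
#include <vector>

struct Mesh {
    const char* name;
    float depth;  // distance from the camera; larger = farther away
};

// Sort farthest-first so blending composites back to front.
void SortBackToFront(std::vector<Mesh>& meshes) {
    std::sort(meshes.begin(), meshes.end(),
              [](const Mesh& a, const Mesh& b) { return a.depth > b.depth; });
}
```

After sorting, draw the meshes in order with blending enabled and the depth test (or at least depth writes) disabled.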

Get pixel behind the current pixel

I'm writing a program in C++ with GLUT, rendering a 3D model in a window.
I'm using glReadPixels to get the image of the scene displayed in the window.
I would like to know how I can get, for a specific pixel (x, y), not its own color directly but the color of the next object behind it.
If I render a blue triangle, and a red triangle in front of it, glReadPixels gives me red colors from the red triangle.
I would like to know how I can get the colors from the blue triangle, the one I would get from glReadPixels if the red triangle wasn't here.
The default framebuffer only retains the topmost color. To get what you're suggesting would require a specific rendering pipeline.
For instance you could:
Create an offscreen framebuffer of the same dimensions as your target viewport
Render a depth-only pass to the offscreen framebuffer, storing the depth values in an attached texture
Re-render the scene with a special shader that only draws pixels whose post-transformation Z value is GREATER than the value recorded in the depth texture from the previous pass
The final result of the last render should be the original scene with the top layer stripped off.
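The third step could be sketched as a fragment shader along these lines (the uniform names are assumptions; `gl_FragCoord.z` is compared against the window-space depth recorded in the depth-only pass):

```glsl
#version 330 core

uniform sampler2D firstPassDepth;  // depth texture from the depth-only pass
uniform vec2 viewportSize;

out vec4 fragColor;

void main() {
    vec2 uv = gl_FragCoord.xy / viewportSize;
    float nearestDepth = texture(firstPassDepth, uv).r;
    // Peel off the front-most layer: keep only fragments strictly behind
    // the recorded depth (a small bias avoids precision self-comparison).
    if (gl_FragCoord.z <= nearestDepth + 0.0001)
        discard;
    fragColor = vec4(0.0, 0.0, 1.0, 1.0);  // shade as usual here
}
```

With the normal GL_LESS depth test still enabled, the nearest surviving fragment is exactly the second layer. This is the classic "depth peeling" technique.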
Edit:
It would require only a small amount of new code to create the offscreen framebuffer and render a depth only version of the scene to it, and you could use your existing rendering pipeline in combination with that to execute steps 1 and 2.
However, I can't think of any way you could then re-render the scene to get the information you want in step 3 without a shader, because it needs both the standard depth test and a test against the provided depth texture. That doesn't mean there isn't one, just that I'm not well versed enough in GL tricks to think of it.
I can think of other ways of trying to accomplish the same task for specific points on the screen by fiddling with the rendering system, but they're all far more convoluted than just writing a shader.

Drawing to FrameBuffer ignores depth test

I'm drawing my scene to a texture using an FBO and reading back a pixel in order to pick my object.
The problem is that the drawing into the texture ignores the depth test.
On the left is the scene and on the right is the texture (saved to a file for debugging).
As you can see, there are two planes, one on top of the other, and the one in front is tilted further up, although on the texture it's the other way around. This makes the user pick the plane in the background while he sees the other plane.
I've tried to enable everything I could think of, but I guess I'm missing something.
Thanks to ratchet freak, I've realized that I skipped the depth buffer :)
I just had to create a renderbuffer for the depth component and attach it to the framebuffer object.
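For reference, attaching a depth renderbuffer to an existing FBO could look roughly like this (assuming `fbo`, `width`, and `height` already exist; GL 3.0+ framebuffer-object naming):

```cpp
GLuint depthRbo = 0;
glGenRenderbuffers(1, &depthRbo);
glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRbo);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle an incomplete framebuffer here
}

// Keep the depth test on and clear the new depth attachment each frame.
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
```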

Drawing a Circle on a plane, Boolean Subtraction - OpenGL

I'm hoping to draw a plane in OpenGL, using C++, with a hole in the center, much like the green of a golf course for example.
I was wondering what the easiest way to achieve this is?
It's fairly simple to draw a circle and a plane (tutorials all over google will show this for those curious), but I was wondering if there is a boolean subtraction technique like you can get when modelling in 3Ds Max or similar software? Where you create both objects, then take the intersection/union etc to leave a new object/shape? In this case subtract the circle from the plane, creating a hole.
Another way I thought of doing it is giving the circle alpha values and making it transparent, but then of course it still leaves the planes surface visible anyway.
Any help or points in the right direction?
I would avoid messing around with transparency, blending mode, and the like. Just create a mesh with the shape you need and draw it. Remember OpenGL is for graphics, not modelling.
There are a couple ways you could do this. The first way is the one you already stated which is to draw the circle as transparent. The caveat is that you must draw the circle first before you draw the plane so that the alpha blending will blend the circle with the background. Then when you render the plane the parts that are covered by the circle will be discarded in the depth test.
The second method you could try is with texture mapping. You could create a texture that is basically a mask with everything set to opaque white except the circle portion which is set to have an alpha value of 0. In your shader you would then multiply your fragment color by this mask texture color so that the portions where the circle is located are now transparent.
Both of these methods would work with shapes other than a circle as well.
I suggest the stencil buffer. Mark the area where you want the hole by masking the color and depth buffers and drawing only to the stencil buffer; then unmask color and depth, stop writing to the stencil buffer, and draw your plane with a stencil function that tells OpenGL to discard all pixels where the stencil "markings" are.

Rendering 3D Models With Textures That Have Alpha In OpenGL

So I'm trying to figure out the best way to render a 3D model in OpenGL when some of the textures applied to it have alpha channels.
When I have the depth buffer enabled and start drawing all the triangles in a 3D model, if it draws a triangle that is in front of another triangle in the model, it will simply not render the back triangle when it gets to it. The problem is when the front triangle has alpha transparency and should be seen through to the triangle behind it, but the triangle behind is still not rendered.
Disabling the depth buffer eliminates that problem, but creates the obvious issue that if the front triangle IS opaque, triangles behind it will still be rendered on top if drawn afterwards.
For example, I am trying to render a pine tree that is basically some cones stacked on top of each other that have a transparent base. The following picture shows the problem that arises when the depth buffer is enabled:
You can see how you can still see the outline of the transparent triangles.
The next picture shows what it looks like when the depth buffer is disabled.
Here you can see how some of the triangles on the back of the tree are being rendered in front of the rest of the tree.
Any ideas how to address this issue, and render the pine tree properly?
P.S. I am using shaders to render everything.
If you're not using any partial transparency (every alpha value is either 0 or 255), you can glEnable(GL_ALPHA_TEST) and that should help you. The problem is that if you render the top cone first, it deposits the whole quad into the z-buffer (even the transparent parts), so the lower branches underneath get z-rejected when it's their turn to be drawn. With alpha testing enabled, pixels that fail the alpha test (set with glAlphaFunc) are not written to the z-buffer.
If you want to use partial transparency, you'll need to sort the order of rendering objects from back to front, or bottom to top in your case.
You'll need to leave z-buffer enabled as well.
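In the fixed-function pipeline, the alpha-test setup mentioned above is just two calls (the 0.5 threshold is an arbitrary choice):

```cpp
// Reject fragments whose alpha is at or below the threshold;
// rejected fragments touch neither the color nor the depth buffer.
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.5f);
```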
[edit] Whoops, I realized that those functions don't work, I believe, when you're using shaders. In the shader case you want to use discard in the fragment shader if the alpha value is close to zero.
if(color.a < 0.01) {
discard;
} else {
outcolor = color;
}
You need to implement a two-pass algorithm.
The first pass renders only the back faces, while the second pass renders only the front faces.
This way you don't need to sort the triangles, but some artifacts may occur depending on whether your geometry is convex or not.
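Sketched with OpenGL face culling (`DrawTree` is a placeholder for your existing draw call):

```cpp
glEnable(GL_CULL_FACE);

// Pass 1: back faces only (cull the front-facing triangles).
glCullFace(GL_FRONT);
DrawTree();

// Pass 2: front faces only (cull the back-facing triangles).
glCullFace(GL_BACK);
DrawTree();
```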
I may be wrong, but this could be because, with Direct3D's default settings, the back side of triangles is not rendered. With the Z-buffer off, triangles are simply drawn in draw order; with it on, the back side of the triangles is still not drawn.
It is possible to show both sides of the triangles, even with Z enabled; however, I suspect there is a reason culling is normally enabled, such as speed.
Device->SetRenderState(D3DRS_CULLMODE, Value);
where Value can be:
D3DCULL_NONE - shows both sides of the triangle (no culling)
D3DCULL_CW - culls triangles with clockwise winding
D3DCULL_CCW - culls triangles with counterclockwise winding (the default state)