I've had zero luck getting depth buffering to work. Everything renders, but only in the order I draw it. When I look at the depth stencil in the VS Graphics Debugger, it is NEVER written to. As far as I can tell it should be, but it remains all red in the debugger (which just means it's still at its clear value).
Both the rendertarget and the depthstencil are created for me by DXUT, and I've inspected them closely enough to be fairly certain they look good. I've walked the DXUT creation code with MSDN open and checked every flag, but thousands of people have used that code before me, so I suspect the DXUT end is fine.
My rendertarget is a Texture2D, R8G8B8A8_UNORM_SRGB, 1280x1024, Sample count 1, Quality 0, Mip levels 1, set to bind to render target, no flags.
My depthstencil is a Texture2D, D24_UNORM_S8_UINT, 1280x1024, Sample count 1, Quality 0, Mip levels 1, set to bind as depth stencil, no other flags.
At the beginning of my render, I clear the rendertargetview. Hard to mess that up. Next I clear the DepthStencil:
pd3dImmediateContext->ClearRenderTargetView(DXUTGetD3D11RenderTargetView(), Colors::Black);
pd3dImmediateContext->ClearDepthStencilView(DXUTGetD3D11DepthStencilView(), D3D11_CLEAR_DEPTH, 1.0f, 0);
My depth stencil state for ALL draws is (see the sketch after this list):
DepthEnable TRUE
DepthFunc LESS_EQUAL
DepthWriteMask ALL
StencilEnable FALSE
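For reference, here is a minimal sketch of creating and binding that state explicitly. pd3dDevice and pd3dImmediateContext stand in for the DXUT-provided device and context, pDepthState is an illustrative name, and the CD3D11_DEPTH_STENCIL_DESC helper comes from d3d11.h.
// Depth-stencil state as listed above: depth on, write ALL, LESS_EQUAL, stencil off.
CD3D11_DEPTH_STENCIL_DESC dsDesc(D3D11_DEFAULT);   // defaults: DepthEnable TRUE, write ALL, stencil FALSE
dsDesc.DepthFunc = D3D11_COMPARISON_LESS_EQUAL;    // the only change from the defaults
ID3D11DepthStencilState* pDepthState = nullptr;
if (SUCCEEDED(pd3dDevice->CreateDepthStencilState(&dsDesc, &pDepthState)))
    pd3dImmediateContext->OMSetDepthStencilState(pDepthState, 0);
Note that the description only takes effect once the state object is actually bound with OMSetDepthStencilState, and the depth test only runs against whatever DepthStencilView is bound alongside the render target via OMSetRenderTargets.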
The DepthStencilView shows up as
Texture2D, D24_UNORM_S8_UINT, no flags, MipSlice 0
in the VS Graphics debugger. I have blending set to solid.
I've confirmed there is no mismatch (for example, in multisampling settings) between the rendertarget texture and the depthstencil texture, unless those two DXGI formats are for some reason incompatible.
Everything looks right other than depth, so I'm going to presume my transformation matrices are all fine.
The only other hint I have at this point is that if I clear the depth buffer with 0.999999, everything disappears. With 1.0, everything draws (though with incorrect/absent depth).
The viewport is set to window size, then 0.0 as min depth and 1.0 as max. That's the first thing I checked.
My rasterizer state is (see the sketch after this list):
FillMode SOLID
CullMode BACK
FrontCounterClockwise FALSE
DepthBias 0
DepthBiasClamp 0.000f
SlopeScaleDepthBias 0.000f
DepthClipEnable TRUE
ScissorEnable FALSE
MultisampleEnable TRUE
AntialiasedLineEnable FALSE
ForcedSampleCount 0
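The matching rasterizer state, as a sketch (same assumed device/context names as above; ForcedSampleCount only exists on the D3D11.1 D3D11_RASTERIZER_DESC1, so it is omitted here):
D3D11_RASTERIZER_DESC rsDesc = {};
rsDesc.FillMode              = D3D11_FILL_SOLID;
rsDesc.CullMode              = D3D11_CULL_BACK;
rsDesc.FrontCounterClockwise = FALSE;
rsDesc.DepthBias             = 0;
rsDesc.DepthBiasClamp        = 0.0f;
rsDesc.SlopeScaledDepthBias  = 0.0f;
rsDesc.DepthClipEnable       = TRUE;
rsDesc.ScissorEnable         = FALSE;
rsDesc.MultisampleEnable     = TRUE;
rsDesc.AntialiasedLineEnable = FALSE;
ID3D11RasterizerState* pRasterState = nullptr;
if (SUCCEEDED(pd3dDevice->CreateRasterizerState(&rsDesc, &pRasterState)))
    pd3dImmediateContext->RSSetState(pRasterState);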
If there's nothing blatantly wrong above, what's the next logical thing to check? Since I'm using the DXUT framework, I'm not doing much of the creation plumbing myself, and it has always worked before. In the debugger it all looks fine.
The only thing I can't check is the output of the vertex shader, because the graphics debugger crashes whenever I try.
Quick question:
In my OpenGL code (3.3), I'm using the line
glEnable(GL_ALPHA_TEST);
I've been using my code for weeks now and never checked for errors (via glGetError()) because it worked perfectly. Now that I did (because something else isn't working), this line gives me an invalid enum error. Google revealed that glEnable(GL_ALPHA_TEST) seems to have been deprecated since OpenGL 3 (core profile?), and I guess that is the reason for the error.
But that part of the code still does exactly what I want. Some more code:
glDisable(GL_CULL_FACE);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_ALPHA_TEST);
// buffer-stuff
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 9, NumParticles);
So, did I put something redundant in there? I'm drawing particles (instanced) on screen using 2 triangles each (to give a quad), and in the alpha channel of the particle color I'm basically encoding a circle (1.0f inside the circle, 0.0f outside). Depth testing is there so that particles in the back aren't drawn in front of particles closer to the camera, and glBlendFunc() (and, as I understood it, glEnable(GL_ALPHA_TEST)) removes the bits outside the circle. I'm still learning OpenGL and am trying to understand why this code actually works (for once) and why I apparently don't need glEnable(GL_ALPHA_TEST)...
Yes, I'm using discard in the fragment shader. Otherwise I just used the code above, so I guess it's the default depth setup.
discard is the replacement for glEnable(GL_ALPHA_TEST);.
So, did I put something redundant in there?
Yes, discard and glEnable(GL_ALPHA_TEST); would be redundant if you were using a profile in which glEnable(GL_ALPHA_TEST); still exists and if you called discard for every fragment whose alpha would also be rejected by the configured glAlphaFunc.
Since you are using a profile in which GL_ALPHA_TEST no longer exists, the glEnable(GL_ALPHA_TEST); call has no effect in your code and can be removed.
Alpha testing is a long-deprecated method of drawing a fragment only when it passes some alpha comparison. Nowadays this can easily be done inside a shader by simply discarding the fragments. Alpha testing is also quite limited in itself, because all it can decide is whether or not to draw a fragment.
In general, enabling GL_ALPHA_TEST without setting a proper glAlphaFunc will do nothing since the default comparison function is GL_ALWAYS which means that all fragments will pass the test.
Your code doesn't seem to rely on alpha testing, but on blending (I assume that since you are setting the glBlendFunc). Somewhere in your code there's probably also a glEnable(GL_BLEND).
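To make the relationship concrete, here is a minimal core-profile sketch: blending and depth testing on the API side, and discard in the fragment shader taking the place of the old alpha test. The shader is illustrative only (particleColor and fragColor are assumed names, and 0.5 is an arbitrary threshold corresponding to a glAlphaFunc(GL_GEQUAL, 0.5) style test); it is essentially what your existing discard already does.
// API side: no GL_ALPHA_TEST in core profiles; blending + depth test only.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
// Fragment shader (GLSL 330 core): discard replaces the fixed-function alpha test.
const char* fragmentSrc = R"(
    #version 330 core
    in  vec4 particleColor;   // alpha: 1.0 inside the circle, 0.0 outside
    out vec4 fragColor;
    void main()
    {
        if (particleColor.a < 0.5)   // roughly glAlphaFunc(GL_GEQUAL, 0.5)
            discard;                 // drop fragments outside the circle
        fragColor = particleColor;
    }
)";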
I've enabled face culling with glEnable(GL_CULL_FACE), and I'm trying to cull the back faces, but whenever I do glCullFace(GL_BACK) nothing gets rendered.
If I do glCullFace(GL_FRONT) it works as expected (that is, renders the inside of my cubes, but not the outside).
I've tried to change the winding, but it doesn't seem to be that since GL_FRONT works.
What could be the reason for this?
It is rendered to a framebuffer with a depth renderbuffer enabled, if that matters. Disabling culling makes everything render as expected.
Edit
The winding used is counter-clockwise. For example, the nearest side:
x, y, z
0, 0, 0
1, 0, 0
1, 1, 0
0, 0, 0
1, 1, 0
0, 1, 0
Here is an image of what it looks like with GL_FRONT:
(without the back of the cubes, so you can see the effect). Again, this is what I expected it to look like.
And what it looks like without culling:
I would like to share my experience since I had the same problem:
I was able to render something with glCullFace(GL_FRONT), but got only the clear color with glCullFace(GL_BACK). It turns out OpenGL was working perfectly fine (of course); the problem came from the shading technique I was using, deferred shading.
I was using a quad in normalized device coordinates to display the result of the lighting calculations, and that quad was wound clockwise! Swapping the vertex order of the quad made everything work.
This applies to anyone who projects something onto the screen using a quad: either disable culling before drawing it and re-enable it afterwards (not recommended, because of the extra API calls), or simply make sure the quad is defined in CCW order.
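For reference, a counter-clockwise fullscreen quad in normalized device coordinates might look like the sketch below (two triangles, positions only; the array name is illustrative):
// Fullscreen quad in NDC, counter-clockwise winding, so it survives
// glCullFace(GL_BACK) with the default glFrontFace(GL_CCW).
const float quadNDC[] = {
    // x      y
    -1.0f, -1.0f,
     1.0f, -1.0f,
     1.0f,  1.0f,
    -1.0f, -1.0f,
     1.0f,  1.0f,
    -1.0f,  1.0f,
};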
I'm trying to get early fragment culling to work, based on the stencil test.
My scenario is the following: I have a fragment shader that does a lot of work, but needs to be run only on very few fragments when I render my scene. These fragments can be located pretty much anywhere on the screen (I can't use a scissor to quickly filter out these fragments).
In rendering pass 1, I generate a stencil buffer with two possible values. Values will have the following meaning for pass 2:
0: do not do anything
1: ok to proceed, (eg. enter the fragment shader, and render)
Pass 2 renders the scene properly speaking. The stencil buffer is configured this way:
glStencilMask(1);
glStencilFunc(GL_EQUAL, 1, 1); // if the value is NOT 1, please early cull!
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP); // never write to stencil buffer
Now I run my app. The color of selected pixels is altered based on the stencil value, which means the stencil test works fine.
However, I should see a huge, spectacular performance boost with early stencil culling... but nothing happens. My guess is that the stencil test either happens after the depth test, or even after the fragment shader has been called. Why?
nVidia apparently has a patent on early stencil culling:
http://www.freepatentsonline.com/7184040.html
Is this the right way to get it enabled?
I'm using an nVidia GeForce GTS 450 graphics card.
Is early stencil culling supposed to work with this card?
Running Windows 7 with latest drivers.
Like early Z, early stencil is often done using hierarchical stencil buffering.
There are a number of factors that can prevent hierarchical tiling from working properly, including rendering into an FBO on older hardware. However, the biggest obstacle to getting early stencil testing to work in your example is that you've left stencil writes enabled for 1 of the 8 stencil bits in the second pass (your glStencilMask(1)).
I would suggest using glStencilMask (0x00) at the beginning of the second pass to let the GPU know you are not going to write anything to the stencil buffer.
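In other words, the second pass would set up the stencil unit roughly like this (a sketch based on your pass-2 code above):
// Pass 2: test the stencil, but make it explicit that nothing is written to it.
glEnable(GL_STENCIL_TEST);
glStencilMask(0x00);                     // disable all stencil writes
glStencilFunc(GL_EQUAL, 1, 1);           // only fragments where stencil == 1 pass
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);  // keep the stencil buffer untouched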
There is an interesting read on early fragment testing as it is implemented in current generation hardware here. That entire blog is well worth reading if you have the time.
I'm not sure why this is happening; I'm only rendering a few simple QUADS.
The red is meant to be in front of the yellow.
The yellow always goes in front of the red, even when it's behind it.
Is this a bug or simply me seeing the cube wrongly?
Turn the depth buffer and the depth test on; otherwise OpenGL draws whatever comes last on top.
Your application needs to do at least the following to get depth buffering to work (a minimal sketch follows the list):
Ask for a depth buffer when you create your window.
Place a call to glEnable (GL_DEPTH_TEST) in your program's initialization routine, after a context is created and made current.
Ensure that your zNear and zFar clipping planes are set correctly and in a way that provides adequate depth buffer precision.
Pass GL_DEPTH_BUFFER_BIT as a parameter to glClear, typically bitwise OR'd with other values such as GL_COLOR_BUFFER_BIT.
See here http://www.opengl.org/resources/faq/technical/depthbuffer.htm
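A minimal sketch of those steps, assuming a GLUT-style setup (substitute the equivalent calls for whatever windowing library you actually use):
#include <GL/glut.h>
void initDepth(int argc, char** argv)
{
    glutInit(&argc, argv);
    // 1. Ask for a depth buffer when the window is created.
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH);
    glutCreateWindow("depth buffering");
    // 2. Enable depth testing once the context is current.
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    // 3. zNear/zFar are set wherever you build your projection; keep them as tight as possible.
}
void drawFrame()
{
    // 4. Clear the depth buffer together with the color buffer every frame.
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... draw, then swap buffers ...
}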
I had the same problem, but it was unrelated to the depth buffer, although I did see some improvement when I enabled it. It had to do with the blend function used to combine pixel intensities at the last step of rendering, so I had to turn blending off.
This is an HLSL question, although I'm using XNA if you want to reference that framework in your answer.
In XNA 4.0 we no longer have access to DX9's AlphaTest functionality.
I want to:
Render a texture to the backbuffer, only drawing the opaque pixels of the texture.
Render a texture, whose texels are only drawn in places where no opaque pixels from step 1 were drawn.
How can I accomplish this? If I need to use clip() in HLSL, how do I check the stencil buffer that was drawn to in step 1 from within my HLSL code?
So far I have done the following:
_sparkStencil = new DepthStencilState
{
StencilEnable = true,
StencilFunction = CompareFunction.GreaterEqual,
ReferenceStencil = 254,
DepthBufferEnable = true
};
DepthStencilState old = gd.DepthStencilState;
gd.DepthStencilState = _sparkStencil;
// Only opaque texels should be drawn.
DrawTexture1();
gd.DepthStencilState = old;
// Texels that were rendered from texture1 should
// prevent texels in texture 2 from appearing.
DrawTexture2();
Sounds like you want to draw only the pixels that are within epsilon of full alpha (1.0, 255) the first time, while not affecting pixels that are within epsilon of full alpha the second time.
I'm not a graphics expert and I'm operating on too little sleep, but you should be able to get there from here through an effect script file.
To write to the stencil buffer you must create a DepthStencilState that writes to the buffer, then draw any geometry that is to be drawn to the stencil buffer, then switch to a different DepthStencilState that uses the relevant CompareFunction.
If there is some limit on which alpha values should end up in the stencil buffer, then use a shader in the first pass that calls the clip() intrinsic on alpha - val, where val is a number in (0,1) that sets the minimum alpha that gets drawn (clip() discards the fragment when its argument is negative).
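XNA's DepthStencilState is a thin wrapper over Direct3D's depth/stencil state, so the same two-state pattern can be sketched in raw D3D11 terms (the API used elsewhere on this page); the variable names are illustrative, and CD3D11_DEPTH_STENCIL_DESC is the d3d11.h helper. In XNA the corresponding properties are StencilEnable, StencilFunction, StencilPass, ReferenceStencil and StencilWriteMask.
// Pass 1: write reference value 1 into the stencil wherever opaque texels survive clip().
CD3D11_DEPTH_STENCIL_DESC writeDesc(D3D11_DEFAULT);
writeDesc.StencilEnable           = TRUE;
writeDesc.FrontFace.StencilFunc   = D3D11_COMPARISON_ALWAYS;
writeDesc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_REPLACE;   // write the reference value
ID3D11DepthStencilState* pWriteState = nullptr;
pd3dDevice->CreateDepthStencilState(&writeDesc, &pWriteState);
pd3dImmediateContext->OMSetDepthStencilState(pWriteState, 1);   // reference = 1
// ... draw texture 1, clip()-ing its transparent texels ...
// Pass 2: draw only where pass 1 did NOT mark the stencil, and never write to it.
CD3D11_DEPTH_STENCIL_DESC testDesc(D3D11_DEFAULT);
testDesc.StencilEnable         = TRUE;
testDesc.StencilWriteMask      = 0;                             // test only, no writes
testDesc.FrontFace.StencilFunc = D3D11_COMPARISON_NOT_EQUAL;
ID3D11DepthStencilState* pTestState = nullptr;
pd3dDevice->CreateDepthStencilState(&testDesc, &pTestState);
pd3dImmediateContext->OMSetDepthStencilState(pTestState, 1);    // reference = 1
// ... draw texture 2 ...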
I have written a more detailed answer here:
Stencil testing in XNA 4