Direct3D 11 depth stencil / alpha blending issue - c++

I've been working on a 3D renderer for a game, and until now it rendered all the textureless meshes first and all the textured meshes afterwards, using DrawIndexed. In an effort to improve performance, I've switched to DrawIndexedInstanced and made it so that textured meshes are rendered first, and this has revealed an issue with how my alpha blending and/or depth checking is set up. The following images should illustrate what the problem is:
View through the top of the front-most textures (textured meshes rendered first)
The same view, slightly different angle (textureless meshes rendered first)
In the foreground and the background are rows of textured rectangle meshes; the ones in the foreground have partly transparent textures. In the middle row are untextured meshes with their transparency set to 0.3f. When the textured meshes are rendered first, the untextured ones are obscured by the transparent meshes in the foreground. However, when the untextured meshes are rendered first, they completely obscure the textured meshes behind them, even though their transparency is 0.3f. This does not happen when untextured meshes obscure other untextured meshes; alpha blending works correctly in that scenario.
This is where I set up the rasterizer state, depth stencil state and depth stencil view:
ID3D11Texture2D *pBackBuffer;
D3D11_TEXTURE2D_DESC backBufferDesc;
m_swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (LPVOID*)&pBackBuffer);
pBackBuffer->GetDesc(&backBufferDesc);
RELEASE_RESOURCE(pBackBuffer);
// creating a buffer for the depth stencil
D3D11_TEXTURE2D_DESC depthStencilBufferDesc;
ZeroMemory(&depthStencilBufferDesc, sizeof(D3D11_TEXTURE2D_DESC));
depthStencilBufferDesc.ArraySize = 1;
depthStencilBufferDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
depthStencilBufferDesc.CPUAccessFlags = 0; // No CPU access required.
depthStencilBufferDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthStencilBufferDesc.Width = backBufferDesc.Width;
depthStencilBufferDesc.Height = backBufferDesc.Height;
depthStencilBufferDesc.MipLevels = 1;
depthStencilBufferDesc.SampleDesc.Count = 4;
depthStencilBufferDesc.SampleDesc.Quality = 0;
depthStencilBufferDesc.Usage = D3D11_USAGE_DEFAULT;
m_device->CreateTexture2D(&depthStencilBufferDesc, NULL, &m_depthStencilBuffer);
// creating a depth stencil view
HRESULT hr = m_device->CreateDepthStencilView( m_depthStencilBuffer,
NULL,
&m_depthStencilView);
// setup depth stencil state.
D3D11_DEPTH_STENCIL_DESC depthStencilStateDesc;
ZeroMemory(&depthStencilStateDesc, sizeof(D3D11_DEPTH_STENCIL_DESC));
depthStencilStateDesc.DepthEnable = TRUE;
depthStencilStateDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
depthStencilStateDesc.DepthFunc = D3D11_COMPARISON_LESS;
depthStencilStateDesc.StencilEnable = FALSE;
hr = m_device->CreateDepthStencilState(&depthStencilStateDesc, &m_depthStencilState);
// setup rasterizer state.
D3D11_RASTERIZER_DESC rasterizerDesc;
ZeroMemory(&rasterizerDesc, sizeof(D3D11_RASTERIZER_DESC));
rasterizerDesc.AntialiasedLineEnable = FALSE;
rasterizerDesc.CullMode = D3D11_CULL_BACK;
rasterizerDesc.DepthBias = 0;
rasterizerDesc.DepthBiasClamp = 0.0f;
rasterizerDesc.DepthClipEnable = TRUE;
rasterizerDesc.FillMode = D3D11_FILL_SOLID;
rasterizerDesc.FrontCounterClockwise = FALSE;
rasterizerDesc.MultisampleEnable = FALSE;
rasterizerDesc.ScissorEnable = FALSE;
rasterizerDesc.SlopeScaledDepthBias = 0.0f;
// create the rasterizer state
hr = m_device->CreateRasterizerState(&rasterizerDesc, &m_RasterizerState);
m_deviceContext->OMSetRenderTargets(1, &m_renderTargetView, m_depthStencilView);
m_deviceContext->OMSetDepthStencilState(m_depthStencilState, 1);
m_deviceContext->RSSetState(m_RasterizerState);
And this is where I enable alpha blending:
D3D11_BLEND_DESC blendDescription;
ZeroMemory(&blendDescription, sizeof(D3D11_BLEND_DESC));
blendDescription.RenderTarget[0].BlendEnable = TRUE;
blendDescription.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
blendDescription.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
blendDescription.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
blendDescription.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
blendDescription.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
blendDescription.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
blendDescription.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
m_device->CreateBlendState(&blendDescription, &m_blendState);
m_deviceContext->OMSetBlendState(m_blendState, 0, 0xffffffff);
I know that giving the textureless mesh a plain white fully opaque texture would fix the problem in a way, but I suspect the depth testing is at fault.
When I create the device with the D3D11_CREATE_DEVICE_DEBUG flag, it doesn't give me any errors or warnings.
All the HRESULTs returned by the Create functions are S_OK.
Thanks in advance.

For blending to work, you must render all fully opaque objects first and then all objects with transparency in back-to-front order. This means that your transparent objects are sorted by their distance from the camera, with the farther objects drawn first.
Ideally your opaque objects are sorted in the opposite direction (front-to-back) so that pixels that are obscured are discarded by the depth test.
This is typically done by placing all draw requests into a queue. Once everything in the scene is in the queue, you can sort it based on various factors including transparency, distance, material, etc. Then you can loop through the queue and issue all of your draw calls in the proper order.
For simple cases, though, just make sure your opaque objects are drawn first and your transparent objects are drawn afterwards in a general back-to-front order.
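For illustration, here is a minimal sketch of such a sorted draw queue in C++. The DrawRequest struct and its fields are made up for the example and are not part of the renderer in the question:
#include <algorithm>
#include <vector>
// Illustrative draw-request record, not taken from the question's renderer.
struct DrawRequest
{
    bool  transparent;      // does this mesh need alpha blending?
    float viewSpaceDepth;   // distance from the camera along the view axis
    // ... mesh / material / instance data would go here ...
};
void SortDrawQueue(std::vector<DrawRequest>& queue)
{
    std::sort(queue.begin(), queue.end(),
        [](const DrawRequest& a, const DrawRequest& b)
        {
            // All opaque requests come before all transparent ones.
            if (a.transparent != b.transparent)
                return !a.transparent;
            // Opaque: front-to-back, so the depth test rejects hidden pixels early.
            if (!a.transparent)
                return a.viewSpaceDepth < b.viewSpaceDepth;
            // Transparent: back-to-front, so blending composites correctly.
            return a.viewSpaceDepth > b.viewSpaceDepth;
        });
}
With the queue sorted like this, you simply walk it and issue the draw calls in order.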

Related

Drawing is not showing when GDI compatible DC used from IDXGISurface1

I have created a GDI-compatible texture, and the DC I get from it is used to draw lines from one point to another, but the lines are not showing on the view window. No exception is thrown either. Am I missing anything? Has anyone done the same and successfully drawn 2D shapes or something similar using a GDI-compatible DC? Please help.
// get texture surface1 and overlay DC from GDI compatible texture 2D
D3D11_TEXTURE2D_DESC desc;
ZeroMemory(&desc, sizeof(desc));
desc.Width = width;
desc.Height = height;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
desc.CPUAccessFlags = 0;
desc.MipLevels = 1;
desc.SampleDesc.Count = 1;
desc.MiscFlags = D3D11_RESOURCE_MISC_GDI_COMPATIBLE;
ID3D11Texture2DPtr texture2D;
IF_FAILED_THROW_HR(renderer->Device()->CreateTexture2D(&desc, nullptr, &texture2D));
// Create the shader resource view.
ID3D11ShaderResourceViewPtr shaderResourceView;
IF_FAILED_THROW_HR(device->CreateShaderResourceView(texture2D, nullptr, &shaderResourceView));
ID3D11ResourcePtr resource;
shaderResourceView->GetResource(&resource);
m_texture2D = resource;
m_dxgiSurface1 = m_texture2D;
TRY_CONDITION(m_dxgiSurface1);
HDC overlayDC{};
IF_FAILED_THROW_HR(m_dxgiSurface1->GetDC(FALSE, &overlayDC));
DXGI_SURFACE_DESC descOverlay = {0};
m_dxgiSurface1->GetDesc(&descOverlay);
// Draw on the DC using GDI
// fill the texture with the color key
::SetBkColor(overlayDC, m_keyColor);
const auto overlayRect = CRect{ 0, 0, gsl::narrow_cast<int>(descOverlay.Width), gsl::narrow_cast<int>(descOverlay.Height) };
::ExtTextOut(overlayDC, 0, 0, ETO_OPAQUE, overlayRect, nullptr, 0, nullptr);
m_dxgiSurface1->ReleaseDC(nullptr);
Update:
I have edited the source code above: I create the shader resource view from the GDI-compatible texture, then take the texture back from the resource as an IDXGISurface1. That surface provides the DC used for the GDI drawing. Rendering is now smooth, but no GDI drawing is visible.
The texture produced by this GDI drawing is meant to be mixed with other textures. I expected to see the drawings on top of those textures, but later found my mistake: the GDI-drawn texture was never mixed with the other textures in the shader programs, so it was not rendered as an overlay. That is why the drawing appeared to be missing.

Wrong depth buffer (to texture) output?

For the SSAO effect I have to generate two textures: normals (in view space) and depth.
I decided to use the depth buffer as a texture, following the Microsoft tutorial (the "Reading the Depth-Stencil Buffer as a Texture" chapter).
Unfortunately, after rendering I get no information from the depth buffer (the lower image):
That doesn't seem right. What is strange is that the depth buffer itself seems to work (I get the correct ordering of faces, etc.).
The depth buffer code:
//create depth stencil texture (depth buffer)
D3D11_TEXTURE2D_DESC descDepth;
ZeroMemory(&descDepth, sizeof(descDepth));
descDepth.Width = width;
descDepth.Height = height;
descDepth.MipLevels = 1;
descDepth.ArraySize = 1;
descDepth.Format = DXGI_FORMAT_R24G8_TYPELESS;
descDepth.SampleDesc.Count = antiAliasing.getCount();
descDepth.SampleDesc.Quality = antiAliasing.getQuality();
descDepth.Usage = D3D11_USAGE_DEFAULT;
descDepth.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
descDepth.CPUAccessFlags = 0;
descDepth.MiscFlags = 0;
ID3D11Texture2D* depthStencil = NULL;
result = device->CreateTexture2D(&descDepth, NULL, &depthStencil);
ERROR_HANDLE(SUCCEEDED(result), L"Could not create depth stencil texture.", MOD_GRAPHIC);
D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;
//setup the description of the shader resource view
shaderResourceViewDesc.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
shaderResourceViewDesc.ViewDimension = antiAliasing.isOn() ? D3D11_SRV_DIMENSION_TEXTURE2DMS : D3D11_SRV_DIMENSION_TEXTURE2D;
shaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
shaderResourceViewDesc.Texture2D.MipLevels = 1;
//create the shader resource view.
ERROR_HANDLE(SUCCEEDED(device->CreateShaderResourceView(depthStencil, &shaderResourceViewDesc, &depthStencilShaderResourceView)),
L"Could not create shader resource view for depth buffer.", MOD_GRAPHIC);
createDepthStencilStates();
//set the depth stencil state.
context->OMSetDepthStencilState(depthStencilState3D, 1);
D3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDesc;
// Initialize the depth stencil view.
ZeroMemory(&depthStencilViewDesc, sizeof(depthStencilViewDesc));
// Set up the depth stencil view description.
depthStencilViewDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthStencilViewDesc.ViewDimension = antiAliasing.isOn() ? D3D11_DSV_DIMENSION_TEXTURE2DMS : D3D11_DSV_DIMENSION_TEXTURE2D;
depthStencilViewDesc.Texture2D.MipSlice = 0;
//depthStencilViewDesc.Flags = D3D11_DSV_READ_ONLY_DEPTH;
// Create the depth stencil view.
result = device->CreateDepthStencilView(depthStencil, &depthStencilViewDesc, &depthStencilView);
ERROR_HANDLE(SUCCEEDED(result), L"Could not create depth stencil view.", MOD_GRAPHIC);
After rendering the first pass, I set the depth stencil as a texture resource along with the other render targets (color, normals), appending it to the array:
ID3D11ShaderResourceView ** textures = new ID3D11ShaderResourceView *[targets.size()+1];
for (unsigned i = 0; i < targets.size(); i++) {
textures[i] = targets[i]->getShaderResourceView();
}
textures[targets.size()] = depthStencilShaderResourceView;
context->PSSetShaderResources(0, targets.size()+1, textures);
Before the second pass I call context->OMSetRenderTargets(1, &myRenderTargetView, NULL); to unbind the depth buffer (so I can use it as a texture).
Then I render my textures (the render targets from the first pass plus the depth buffer) with a trivial post-process shader, just for debugging purposes (second pass):
Texture2D ColorTexture[3];
SamplerState ObjSamplerState;
float4 main(VS_OUTPUT input) : SV_TARGET0{
float4 Color;
Color = float4(0, 1, 1, 1);
float2 textureCoordinates = input.textureCoordinates.xy * 2;
if (input.textureCoordinates.x < 0.5f && input.textureCoordinates.y < 0.5f) {
Color = ColorTexture[0].Sample(ObjSamplerState, textureCoordinates);
}
if (input.textureCoordinates.x > 0.5f && input.textureCoordinates.y < 0.5f) {
textureCoordinates.x -= 0.5f;
Color = ColorTexture[1].Sample(ObjSamplerState, textureCoordinates);
}
if (input.textureCoordinates.x < 0.5f && input.textureCoordinates.y > 0.5f) { //depth texture
textureCoordinates.y -= 0.5f;
Color = ColorTexture[2].Sample(ObjSamplerState, textureCoordinates);
}
...
It works fine for the normals texture. Why doesn't it work for the depth buffer (as a shader resource view)?
As per comments:
The texture was rendered and sampled correctly, but the data appeared to be uniformly red because the values all lay between 0.999 and 1.0.
There are a few things you can do to improve the available depth precision, the simplest of which is to ensure your near and far clip distances are not excessively small/large for the scene you're drawing.
Assuming metres are your unit, a near clip of 0.1 (10 cm) and a far clip of 200 (metres) are much more reasonable than 1 cm and 20 km.
Even so, don't expect to see too many black/dark areas; the non-linear nature of a z-buffer still means most of your depth values are shunted up towards 1. If visualisation of the depth buffer is important, simply rescale the data to the normalised 0-1 range before displaying it.
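As a minimal sketch of that advice using DirectXMath (the function name and the values are illustrative, not taken from the question's code):
#include <DirectXMath.h>
using namespace DirectX;
// Illustrative only: with metres as the unit, a 0.1 m near plane and a 200 m far
// plane keep the near/far ratio modest, so the depth buffer retains usable precision.
// Something like 0.01 to 20000 pushes nearly every depth value up against 1.0.
XMMATRIX MakeProjection(float aspectRatio)
{
    const float nearZ = 0.1f;
    const float farZ  = 200.0f;
    return XMMatrixPerspectiveFovLH(XM_PIDIV4, aspectRatio, nearZ, farZ);
}
If you still want to inspect the raw buffer, remapping the sampled values from their actual range (roughly 0.999 to 1.0 here) back to 0-1 in the debug shader makes the variation visible.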

HLSL - Sampling a render target texture always returns black color

Okay, first of all, I'm really new to DirectX11 and this is actually my first project using it. I'm also relatively new to Computer Graphics in general so I might have some concepts wrong although, for this particular case, I do not think so. My code is based on the RasterTek tutorials.
For the shader I'm trying to implement, I need to render the scene to a 2D texture and then perform a Gaussian blur on the resulting image.
That part seems to be working fine; when I inspect it with the Visual Studio graphics debugger, the output is what I expect.
However, after all the post-processing is done, I render a quad to the back buffer using a simple shader that takes the final output of the blur as a resource. This always gives me a black screen. When I debug the pixel shader with the VS graphics debugger, the Sample(texture, uv) call always returns (0,0,0,1) when sampling that texture.
The pixel shader works fine if I use a different texture, like some normal map or whatever, as a resource, just not when using any of the rendertargets from the previous passes.
The behaviour is particularly weird because the actual blur shader works fine when using any of the rendertargets as a resource.
I know I cannot use a rendertarget as both input and output but I think I have that covered since I call OMSetRenderTargets so I can render to the backbuffer.
Here's the step by step of my implementation:
Set Render Targets
Clear them
Clear Depth buffer
Render scene to texture
Turn off Z buffer
Render to quad
Perform horizontal blur
Perform vertical blur
Set back buffer as render target
Clear back buffer
Render final output to quad
Turn z buffer on
Present back buffer
Here is the shader for the quad:
Texture2D shaderTexture : register(t0);
SamplerState SampleType : register(s0);
struct PixelInputType
{
float4 position : SV_POSITION;
float2 tex : TEXCOORD0;
};
float4 main(PixelInputType input) : SV_TARGET
{
return shaderTexture.Sample(SampleType, input.tex);
}
Here's the relevant C++ code.
This is how I set the render targets
void DeferredBuffers::SetRenderTargets(ID3D11DeviceContext* deviceContext, bool activeRTs[BUFFER_COUNT]){
vector<ID3D11RenderTargetView*> rts = vector<ID3D11RenderTargetView*>();
for (int i = 0; i < BUFFER_COUNT; ++i){
if (activeRTs[i]){
rts.push_back(m_renderTargetViewArray[i]);
}
}
deviceContext->OMSetRenderTargets(rts.size(), &rts[0], m_depthStencilView);
// Set the viewport.
deviceContext->RSSetViewports(1, &m_viewport);
}
I use a ping-pong approach with the render targets for the blur.
I render the scene to the mainTarget and the depth information to the depthMap. The first pass performs a horizontal blur onto a third target (horizontalBlurred), and then I use that one as input for the vertical blur, which renders back to the mainTarget and to the finalTarget. It's a loop because on the vertical pass I'm supposed to blend the PS output with what's on the finalTarget. I left that code (and some other stuff) out as it's not relevant.
m_FullScreenWindow is the quad.
bool activeRenderTargets[4] = { true, true, false, false };
// Set the render buffers to be the render target.
m_ShaderManager->getDeferredBuffers()->SetRenderTargets(m_D3D->GetDeviceContext(), activeRenderTargets);
// Clear the render buffers.
m_ShaderManager->getDeferredBuffers()->ClearRenderTargets(m_D3D->GetDeviceContext(), 0.25f, 0.0f, 0.0f, 1.0f);
m_ShaderManager->getDeferredBuffers()->ClearDepthStencil(m_D3D->GetDeviceContext());
// Render the scene to the render buffers.
RenderSceneToTexture();
// Get the matrices.
m_D3D->GetWorldMatrix(worldMatrix);
m_Camera->GetBaseViewMatrix(baseViewMatrix);
m_D3D->GetOrthoMatrix(projectionMatrix);
// Turn off the Z buffer to begin all 2D rendering.
m_D3D->TurnZBufferOff();
// Put the full screen ortho window vertex and index buffers on the graphics pipeline to prepare them for drawing.
m_FullScreenWindow->Render(m_D3D->GetDeviceContext());
ID3D11ShaderResourceView* mainTarget = m_ShaderManager->getDeferredBuffers()->GetShaderResourceView(0);
ID3D11ShaderResourceView* horizontalBlurred = m_ShaderManager->getDeferredBuffers()->GetShaderResourceView(2);
ID3D11ShaderResourceView* depthMap = m_ShaderManager->getDeferredBuffers()->GetShaderResourceView(1);
ID3D11ShaderResourceView* finalTarget = m_ShaderManager->getDeferredBuffers()->GetShaderResourceView(3);
activeRenderTargets[1] = false; //depth map is never a render target again
for (int i = 0; i < numBlurs; ++i){
activeRenderTargets[0] = false; //main target is resource in this pass
activeRenderTargets[2] = true; //horizontal blurred target
activeRenderTargets[3] = false; //unbind final target
m_ShaderManager->getDeferredBuffers()->SetRenderTargets(m_D3D->GetDeviceContext(), activeRenderTargets);
m_ShaderManager->RenderScreenSpaceSSS_HorizontalBlur(m_D3D->GetDeviceContext(), m_FullScreenWindow->GetIndexCount(), worldMatrix, baseViewMatrix, projectionMatrix, mainTarget, depthMap);
activeRenderTargets[0] = true; //rendering to main target
activeRenderTargets[2] = false; //horizontal blurred is resource
activeRenderTargets[3] = true; //rendering to final target
m_ShaderManager->getDeferredBuffers()->SetRenderTargets(m_D3D->GetDeviceContext(), activeRenderTargets);
m_ShaderManager->RenderScreenSpaceSSS_VerticalBlur(m_D3D->GetDeviceContext(), m_FullScreenWindow->GetIndexCount(), worldMatrix, baseViewMatrix, projectionMatrix, horizontalBlurred, depthMap);
}
m_D3D->SetBackBufferRenderTarget();
m_D3D->BeginScene(0.0f, 0.0f, 0.5f, 1.0f);
// Reset the viewport back to the original.
m_D3D->ResetViewport();
m_ShaderManager->RenderTextureShader(m_D3D->GetDeviceContext(), m_FullScreenWindow->GetIndexCount(), worldMatrix, baseViewMatrix, projectionMatrix, depthMap);
m_D3D->TurnZBufferOn();
m_D3D->EndScene();
And, finally, here are 3 screenshots from my graphics log.
They show rendering the scene onto the mainTarget, a vertical pass which takes the horizontalBlurred resource as input and, finally, rendering onto the backBuffer, which is what's failing. You can see the resource bound to the shader and how the output is just a black screen. I purposely set the background to red to find out whether it was sampling with the wrong coordinates, but no.
So, has anyone ever experienced something like this? What could be the cause of this bug?
Thanks in advance for any help!
EDIT: The Render_SOMETHING_SOMETHING_shader methods handle binding all the resources, setting the shaders, issuing the draw calls, etc. If necessary I can post them here, but I don't think they're that relevant.

Transparency on two rectangles in DirectX, one behind another - I see the background of the window instead of the second texture

I have a DirectX 11 C++ application that displays two rectangles with textures and some text.
Both textures are taken from TGA resources (with alpha channel added).
When I run the program, I get the result:
What's wrong? Take a closer look:
The corners of the rectangles are transparent (and they should be). The rest of each texture is set to 30% opacity (and that works well too).
But, when one texture (let's call it texture1) is over another (texture2):
The corners of texture1 are transparent.
But behind them I see the background of the window, instead of texture2.
In other words, the transparency of a texture interacts with the background of the window, not with the textures behind it.
What have I done wrong? What part of my program can be responsible for it? Blending options, render states, shader code...?
In my shader, I set:
technique10 RENDER{
pass P0{
SetVertexShader(CompileShader( vs_4_0, VS()));
SetPixelShader(CompileShader( ps_4_0, PS()));
SetBlendState(SrcAlphaBlendingAdd, float4(0.0f, 0.0f, 0.0f, 0.0f),
0xFFFFFFFF);
}
}
P.s.
Of course, when I change the background of window from blue to another colour, the elements still have the transparency (the corners aren't blue).
edit:
Following #ComicSansMS (+ for the nick, anyway ;p ), I've tried to change the order in which the elements are rendered (I've also moved the smaller texture a bit, to check whether the error remains):
The smaller texture is now behind the bigger one, but the problem with the corners remains (now it appears on the second texture). I am almost sure that I draw the rectangle in the back BEFORE I render the rectangle in front (I can see the order of the lines in the code).
My depth stencil:
//initialize the description of the stencil state
ZeroMemory(depthStencilsDescs, sizeof(*depthStencilsDescs));
//set up the description of the stencil state
depthStencilsDescs->DepthEnable = true;
depthStencilsDescs->DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
depthStencilsDescs->DepthFunc = D3D11_COMPARISON_LESS;
depthStencilsDescs->StencilEnable = true;
depthStencilsDescs->StencilReadMask = 0xFF;
depthStencilsDescs->StencilWriteMask = 0xFF;
//stencil operations if pixel is front-facing
depthStencilsDescs->FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthStencilsDescs->FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_INCR;
depthStencilsDescs->FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
depthStencilsDescs->FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
//stencil operations if pixel is back-facing
depthStencilsDescs->BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthStencilsDescs->BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_DECR;
depthStencilsDescs->BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
depthStencilsDescs->BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
//create the depth stencil state
result = device->CreateDepthStencilState(depthStencilsDescs, depthStencilState2D);
The render function:
...
//clear the back buffer
context->ClearRenderTargetView(myRenderTargetView, backgroundColor); //backgroundColor
//clear the depth buffer to 1.0 (max depth)
context->ClearDepthStencilView(depthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);
context->OMSetDepthStencilState(depthStencilState2D, 1);
context->VSSetShader(getVertexShader(), NULL, 0);
context->PSSetShader(getPixelShader(), NULL, 0);
for(...){
rectangles[i]->render();
}
The blend state:
D3D11_BLEND_DESC blendDesc;
ZeroMemory(&blendDesc, sizeof(D3D11_BLEND_DESC) );
blendDesc.AlphaToCoverageEnable = false;
blendDesc.IndependentBlendEnable = false;
blendDesc.RenderTarget[0].BlendEnable = true;
blendDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
blendDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
blendDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
blendDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ONE;
blendDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL ;
ID3D11BlendState * blendState;
if (FAILED(device->CreateBlendState(&blendDesc, &blendState))){
}
context->OMSetBlendState(blendState,NULL,0xffffffff);
Your draw order is probably wrong.
Blending does not interact with the pixels of the object behind it, it interacts with the pixels that are currently in the frame buffer.
So if you draw the rectangle in front before the rectangle in the back, its blending operation will interact with what is in the frame buffer at that point (that is, the background).
The solution is obviously to sort your objects by their depth in view space and draw from back-to-front, although that is sometimes easier said than done (especially when allowing arbitrary overlaps).
The other problem seems to be that you draw both rectangles at the same depth value. Your depth test is set to D3D11_COMPARISON_LESS, so as soon as a triangle is drawn on a pixel, the other triangle will fail the depth test for that pixel and won't get drawn at all. This explains the results you get when swapping the drawing order.
Note that if you draw objects back-to-front there is no need to perform a depth test at all, so you might just want to switch it off in this case. Alternatively, you can try to arrange the objects along the depth axis by giving them different Z values, or just switch to D3D11_COMPARISON_LESS_EQUAL for the depth test.
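As a rough sketch of that last suggestion (reusing the question's device and context pointers; the state object name is a placeholder), a depth-stencil state for back-to-front 2D rendering could look like the following. Disabling depth writes for the blended quads is a common companion choice, not something the answer strictly requires:
// Depth-stencil state for drawing alpha-blended quads back-to-front.
D3D11_DEPTH_STENCIL_DESC dsDesc;
ZeroMemory(&dsDesc, sizeof(dsDesc));
dsDesc.DepthEnable = TRUE;                            // or FALSE to skip the test entirely
dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;  // transparent quads don't occlude each other
dsDesc.DepthFunc = D3D11_COMPARISON_LESS_EQUAL;       // equal depth values no longer get rejected
dsDesc.StencilEnable = FALSE;
ID3D11DepthStencilState* depthStateTransparent2D = nullptr; // placeholder name
if (SUCCEEDED(device->CreateDepthStencilState(&dsDesc, &depthStateTransparent2D)))
{
    context->OMSetDepthStencilState(depthStateTransparent2D, 1);
}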

DirectX alpha masking

I'm working on a game using DirectX 9. Here's what I'm trying to do:
After the scene is rendered, I want to render a few sprites on top of it: a black cover over the entire scene, plus a few sprites that act as masks showing where the cover should have holes. So far I've tried messing with the blend mode, but with no luck. My code setting it up looks like this:
D3DD->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
D3DD->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
D3DD->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
I guess the best way would be to multiply each sprite's alpha, but according to http://msdn.microsoft.com/en-us/library/windows/desktop/bb172508%28v=vs.85%29.aspx no such mode is supported. Is there another way to do this?
edit
Following Nico Schertler's answer, here's the code I came up with:
LPDIRECT3DTEXTURE9 pRenderTexture;
LPDIRECT3DSURFACE9 pRenderSurface,
pBackBuffer;
// create texture
D3DD->CreateTexture(1024,
1024,
1,
D3DUSAGE_RENDERTARGET,
D3DFMT_R5G6B5,
D3DPOOL_DEFAULT,
&pRenderTexture,
NULL);
pRenderTexture->GetSurfaceLevel(0,&pRenderSurface);
// store old render target - back buffer
D3DD->GetRenderTarget(0,&pBackBuffer);
// set new render target - texture
D3DD->SetRenderTarget(0,pRenderSurface);
//clear texture to opaque black
D3DD->Clear(0,
NULL,
D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
D3DCOLOR_XRGB(0,0,0),
1.0f,
0);
// set blending
D3DD->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_ZERO);
D3DD->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);
D3DD->SetRenderState(D3DRS_SRCBLENDALPHA, D3DBLEND_ZERO);
D3DD->SetRenderState(D3DRS_DESTBLENDALPHA, D3DBLEND_INVSRCALPHA);
D3DD->SetRenderState(D3DRS_SEPARATEALPHABLENDENABLE, TRUE);
//// now I render hole sprites the usual way
// restore back buffe as render target
D3DD->SetRenderTarget(0,pBackBuffer);
// restore blending
D3DD->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
D3DD->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
D3DD->SetRenderState(D3DRS_SRCBLENDALPHA, D3DBLEND_SRCALPHA);
D3DD->SetRenderState(D3DRS_DESTBLENDALPHA, D3DBLEND_INVSRCALPHA);
D3DD->SetRenderState(D3DRS_SEPARATEALPHABLENDENABLE, FALSE);
ulong color = ulong(-1);
Vertex2D v[4];
v[0] = Vertex2D(0, 0, 0);
v[1] = Vertex2D(1023, 0, 0);
v[3] = Vertex2D(1023, 1023, 0);
v[2] = Vertex2D(0, 1023, 0);
D3DD->SetTexture(0, pRenderTexture);
D3DD->SetFVF(D3DFVF_XYZRHW | D3DFVF_DIFFUSE | D3DFVF_TEX1);
D3DD->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, v, sizeof(Vertex2D));
D3DD->SetTexture(0, NULL);
// release used resources
pRenderTexture->Release();
pRenderSurface->Release();
pBackBuffer->Release();
Unfortunately, the app crashes when restoring the old render target. Any advice?
First, create the mask in a separate texture. Then you can add the holes as needed. Finally, draw the mask on the screen:
Initialize the texture
Clear it to opaque black
Using the following blend states:
D3DRS_SRCBLEND -> D3DBLEND_ZERO (hole's color does not matter)
D3DRS_DESTBLEND -> D3DBLEND_ONE (preserve the black color)
D3DRS_SRCBLENDALPHA -> D3DBLEND_ZERO
D3DRS_DESTBLENDALPHA -> D3DBLEND_INVSRCALPHA
D3DRS_SEPARATEALPHABLENDENABLE -> TRUE
Draw each hole sprite
Restore default blending (src_alpha / inv_src_alpha)
Render the texture as a sprite to the back buffer
The above blend state assumes that the holes are opaque where there should be a hole. Then, the color is calculated by:
blended color = 0 * hole sprite color + 1 * background color
which should always be black.
And the alpha channel is calculated by:
blended alpha = 0 * hole sprite alpha + (1 - hole sprite alpha) * background alpha
So where the hole sprite is opaque, the blended alpha becomes 0. Where it is transparent, the blended alpha is the previous value. Values in between are blended.
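As a plain C++ illustration of those two equations (this is not D3D code; the Rgba struct and the function are invented for the example):
// src = a hole sprite pixel, dst = the current mask pixel (opaque black to start with).
struct Rgba { float r, g, b, a; };
Rgba BlendHoleOntoMask(const Rgba& src, const Rgba& dst)
{
    Rgba out;
    // Color: ZERO * src + ONE * dst -> the mask stays black.
    out.r = dst.r;
    out.g = dst.g;
    out.b = dst.b;
    // Alpha: ZERO * src.a + (1 - src.a) * dst.a
    // -> an opaque sprite (src.a == 1) punches the mask alpha down to 0,
    //    while a fully transparent sprite (src.a == 0) leaves it untouched.
    out.a = (1.0f - src.a) * dst.a;
    return out;
}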