Wrong depth buffer (to texture) output? - c++

For the SSAO effect I have to generate two textures: normals (in view space) and depth.
I decided to use the depth buffer as a texture, following the Microsoft tutorial (the "Reading the Depth-Stencil Buffer as a Texture" chapter).
Unfortunately, after rendering I get no information from the depth buffer (the lower image):
I guess that's not right. And what is strange, the depth buffer itself seems to work (I get the right order of faces etc.).
The depth buffer code:
//create depth stencil texture (depth buffer)
D3D11_TEXTURE2D_DESC descDepth;
ZeroMemory(&descDepth, sizeof(descDepth));
descDepth.Width = width;
descDepth.Height = height;
descDepth.MipLevels = 1;
descDepth.ArraySize = 1;
descDepth.Format = DXGI_FORMAT_R24G8_TYPELESS;
descDepth.SampleDesc.Count = antiAliasing.getCount();
descDepth.SampleDesc.Quality = antiAliasing.getQuality();
descDepth.Usage = D3D11_USAGE_DEFAULT;
descDepth.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
descDepth.CPUAccessFlags = 0;
descDepth.MiscFlags = 0;
ID3D11Texture2D* depthStencil = NULL;
result = device->CreateTexture2D(&descDepth, NULL, &depthStencil);
ERROR_HANDLE(SUCCEEDED(result), L"Could not create depth stencil texture.", MOD_GRAPHIC);
D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;
//setup the description of the shader resource view
shaderResourceViewDesc.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
shaderResourceViewDesc.ViewDimension = antiAliasing.isOn() ? D3D11_SRV_DIMENSION_TEXTURE2DMS : D3D11_SRV_DIMENSION_TEXTURE2D;
shaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
shaderResourceViewDesc.Texture2D.MipLevels = 1;
//create the shader resource view.
ERROR_HANDLE(SUCCEEDED(device->CreateShaderResourceView(depthStencil, &shaderResourceViewDesc, &depthStencilShaderResourceView)),
L"Could not create shader resource view for depth buffer.", MOD_GRAPHIC);
createDepthStencilStates();
//set the depth stencil state.
context->OMSetDepthStencilState(depthStencilState3D, 1);
D3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDesc;
// Initialize the depth stencil view.
ZeroMemory(&depthStencilViewDesc, sizeof(depthStencilViewDesc));
// Set up the depth stencil view description.
depthStencilViewDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthStencilViewDesc.ViewDimension = antiAliasing.isOn() ? D3D11_DSV_DIMENSION_TEXTURE2DMS : D3D11_DSV_DIMENSION_TEXTURE2D;
depthStencilViewDesc.Texture2D.MipSlice = 0;
//depthStencilViewDesc.Flags = D3D11_DSV_READ_ONLY_DEPTH;
// Create the depth stencil view.
result = device->CreateDepthStencilView(depthStencil, &depthStencilViewDesc, &depthStencilView);
ERROR_HANDLE(SUCCEEDED(result), L"Could not create depth stencil view.", MOD_GRAPHIC);
After rendering the first pass, I set the depth stencil as a texture resource along with the other render targets (color, normals), appending it to the array:
ID3D11ShaderResourceView** textures = new ID3D11ShaderResourceView*[targets.size() + 1];
for (unsigned i = 0; i < targets.size(); i++) {
    textures[i] = targets[i]->getShaderResourceView();
}
textures[targets.size()] = depthStencilShaderResourceView;
context->PSSetShaderResources(0, targets.size() + 1, textures);
delete[] textures; //the array is only needed for the call itself
Before the second pass I call context->OMSetRenderTargets(1, &myRenderTargetView, NULL); to unbind the depth buffer (so I can use it as a texture).
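(For completeness: before the depth buffer can be bound as a depth-stencil view again for the next frame, the shader resource slot holding it has to be cleared, otherwise the runtime forces one of the two bindings to null. A rough sketch, with the slot index assumed to be the last one bound above:)
ID3D11ShaderResourceView* nullSRV[1] = { nullptr };
context->PSSetShaderResources((UINT)targets.size(), 1, nullSRV); //clear the slot that held the depth SRV
context->OMSetRenderTargets(1, &myRenderTargetView, depthStencilView); //now the DSV can be rebound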
Then I render my textures (the render targets from the first pass + the depth buffer) with a trivial post-process shader, just for debugging purposes (the second pass):
Texture2D ColorTexture[3];
SamplerState ObjSamplerState;
float4 main(VS_OUTPUT input) : SV_TARGET0 {
    float4 Color;
    Color = float4(0, 1, 1, 1);
    float2 textureCoordinates = input.textureCoordinates.xy * 2;
    if (input.textureCoordinates.x < 0.5f && input.textureCoordinates.y < 0.5f) {
        Color = ColorTexture[0].Sample(ObjSamplerState, textureCoordinates);
    }
    if (input.textureCoordinates.x > 0.5f && input.textureCoordinates.y < 0.5f) {
        textureCoordinates.x -= 0.5f;
        Color = ColorTexture[1].Sample(ObjSamplerState, textureCoordinates);
    }
    if (input.textureCoordinates.x < 0.5f && input.textureCoordinates.y > 0.5f) { //depth texture
        textureCoordinates.y -= 0.5f;
        Color = ColorTexture[2].Sample(ObjSamplerState, textureCoordinates);
    }
    ...
It works fine for the normals texture. Why doesn't it work for the depth buffer (as a shader resource view)?

As per comments:
The texture was rendered and sampled correctly, but it appeared to be uniformly red because the depth values all lay between 0.999 and 1.0.
There are a few things you can do to improve the available depth precision; the simplest is to ensure your near and far clip distances are not excessively small/large for the scene you're drawing.
Assuming metres are your unit, a near clip of 0.1 (10cm) and a far clip of 200 (metres) are much more reasonable than 1cm and 20km.
Even so, don't expect to see too many black/dark areas; the non-linear nature of a z-buffer still means most of your depth values are shunted up towards 1. If visualisation of the depth buffer is important, simply rescale the data to the normalised 0-1 range before displaying it.
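For example (not from the original post), with a standard D3D perspective projection the stored depth value can be converted back to a linear 0-1 value using the near/far clip distances that built the projection matrix; a minimal sketch of the math:
// Convert a non-linear [0,1] depth-buffer value back to a linear [0,1] value,
// assuming a standard D3D perspective projection built from zNear/zFar.
float linearizeDepth(float depthSample, float zNear, float zFar)
{
    float viewZ = (zNear * zFar) / (zFar - depthSample * (zFar - zNear)); //view-space distance
    return (viewZ - zNear) / (zFar - zNear); //rescaled to 0-1 for display
}
The same expression can be evaluated per pixel in the debug shader before outputting the color.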

Related

DirectX Texture Not Drawing Correctly

I'm trying to render a texture to the screen using DirectX without DirectXTK.
This is the texture that I am trying to render on screen (512x512px):
The texture loads correctly but when it is put on the screen, it comes up like this:
I noticed that the rendered image seems to be the texture split four times in the x-direction and many times in the y-direction. The tiles seem to increase in height as the texture is rendered farther down the screen.
I have two thoughts as to how the texture was rendered incorrectly.
I could have initialized the texture incorrectly.
I could have improperly setup my texture sampler.
Regarding improper texture initialization, here is the code that I used to initialize the texture.
Texture2D & Shader Resource View Creation Code
Load Texture Data
This loads the texture data from a PNG file into a vector of unsigned chars and sets the width and height of the texture.
std::vector<unsigned char> fileData;
if (!loadFileToBuffer(fileName, fileData))
return nullptr;
std::vector<unsigned char> imageData;
unsigned long width;
unsigned long height;
decodePNG(imageData, width, height, fileData.data(), fileData.size());
Create Texture Description
D3D11_TEXTURE2D_DESC texDesc;
ZeroMemory(&texDesc, sizeof(D3D11_TEXTURE2D_DESC));
texDesc.Width = width;
texDesc.Height = height;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D11_USAGE_DYNAMIC;
texDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
texDesc.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;
Assign Texture Subresource Data
D3D11_SUBRESOURCE_DATA texData;
ZeroMemory(&texData, sizeof(D3D11_SUBRESOURCE_DATA));
texData.pSysMem = (void*)imageData.data();
texData.SysMemPitch = sizeof(unsigned char) * width;
//Create DirectX Texture In The Cache
HR(m_pDevice->CreateTexture2D(&texDesc, &texData, &m_textures[fileName]));
Create Shader Resource View for Texture
D3D11_SHADER_RESOURCE_VIEW_DESC srDesc;
ZeroMemory(&srDesc, sizeof(D3D11_SHADER_RESOURCE_VIEW_DESC));
srDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
srDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srDesc.Texture2D.MipLevels = 1;
HR(m_pDevice->CreateShaderResourceView(m_textures[fileName], &srDesc,
&m_resourceViews[fileName]));
return m_resourceViews[fileName];//This return value is used as "texture" in the next line
Use The Texture Resource
m_pDeviceContext->PSSetShaderResources(0, 1, &texture);
I have messed around with the MipLevels and SampleDesc.Quality variables to see if they were changing something about the texture, but changing them either made the texture black or did nothing.
I also looked into the SysMemPitch variable and made sure that it aligned with MSDN.
Regarding setting up my sampler incorrectly, here is the code that I used to initialize my sampler.
//Setup Sampler
D3D11_SAMPLER_DESC samplerDesc;
ZeroMemory(&samplerDesc, sizeof(D3D11_SAMPLER_DESC));
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
samplerDesc.BorderColor[0] = 1.0f;
samplerDesc.BorderColor[1] = 1.0f;
samplerDesc.BorderColor[2] = 1.0f;
samplerDesc.BorderColor[3] = 1.0f;
samplerDesc.MinLOD = -FLT_MAX;
samplerDesc.MaxLOD = FLT_MAX;
HR(m_pDevice->CreateSamplerState(&samplerDesc, &m_pSamplerState));
//Use the sampler
m_pDeviceContext->PSSetSamplers(0, 1, &m_pSamplerState);
I have tried different AddressU/V/W types to see if the texture was loaded with incorrect width/height and was thus shrunk but changing these did nothing.
My VertexShader passes the texture coordinates through using TEXCOORD0 and my PixelShader uses texture.Sample(samplerState, input.texCoord); to get the color of the pixel.
In summary, I am trying to render a texture but the texture gets tiled, and I am not able to figure out why. What do I need to change/do to render just one copy of my texture?
I think you assign the wrong pitch:
texData.SysMemPitch = sizeof(unsigned char) * width;
should be
texData.SysMemPitch = 4 * sizeof(unsigned char) * width;
because each pixel has the DXGI_FORMAT_R8G8B8A8_UNORM format and occupies 4 bytes.
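For reference, a corrected subresource fill for tightly packed RGBA8 data might look like this (same variable names as in the question):
D3D11_SUBRESOURCE_DATA texData;
ZeroMemory(&texData, sizeof(D3D11_SUBRESOURCE_DATA));
texData.pSysMem = imageData.data(); //tightly packed 8-bit RGBA pixels
texData.SysMemPitch = width * 4; //bytes per row: width pixels * 4 bytes each
texData.SysMemSlicePitch = 0; //only relevant for 3D textures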

Problems sampling D3D11 depth buffer

I'm getting everything ready in a little DirectX 11.0 project of mine for a deferred rendering pipeline. However, I've been having quite a lot of trouble with sampling the depth buffer from within a pixel shader.
First I define the depth texture and its shader resource view:
D3D11_TEXTURE2D_DESC depthTexDesc;
ZeroMemory(&depthTexDesc, sizeof(depthTexDesc));
depthTexDesc.Width = nWidth;
depthTexDesc.Height = nHeight;
depthTexDesc.Format = DXGI_FORMAT_R32_TYPELESS;
depthTexDesc.Usage = D3D11_USAGE_DEFAULT;
depthTexDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
depthTexDesc.MipLevels = 1;
depthTexDesc.ArraySize = 1;
depthTexDesc.SampleDesc.Count = 1;
depthTexDesc.SampleDesc.Quality = 0;
depthTexDesc.CPUAccessFlags = 0;
depthTexDesc.MiscFlags = 0;
hresult = d3dDevice_->CreateTexture2D(&depthTexDesc, nullptr, &depthTexture_);
D3D11_DEPTH_STENCIL_VIEW_DESC DSVDesc;
ZeroMemory(&DSVDesc, sizeof(DSVDesc));
DSVDesc.Format = DXGI_FORMAT_D32_FLOAT;
DSVDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
DSVDesc.Texture2D.MipSlice = 0;
hresult = d3dDevice_->CreateDepthStencilView(depthTexture_, &DSVDesc, &depthView_);
D3D11_SHADER_RESOURCE_VIEW_DESC gbDepthTexDesc;
ZeroMemory(&gbDepthTexDesc, sizeof(gbDepthTexDesc));
gbDepthTexDesc.Format = DXGI_FORMAT_R32_FLOAT;
gbDepthTexDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
gbDepthTexDesc.Texture2D.MipLevels = 1;
gbDepthTexDesc.Texture2D.MostDetailedMip = -1;
d3dDevice_->CreateShaderResourceView(depthTexture_, &gbDepthTexDesc, &gbDepthView_);
Here's the relevant part of my rendering function:
float clearColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
d3dContext_->ClearRenderTargetView(backBufferTarget_, clearColor);
d3dContext_->ClearDepthStencilView(depthView_, D3D11_CLEAR_DEPTH, 1.0f, 0);
// GBuffer packing pass (in the future): /////////////////////////////////////////
d3dContext_->OMSetRenderTargets(1, &backBufferTarget_, depthView_);
unsigned int nStride = sizeof(Vertex);
unsigned int nOffset = 0;
d3dContext_->IASetInputLayout(inputLayout_);
d3dContext_->IASetVertexBuffers(0, 1, &vertexBuffer_, &nStride, &nOffset);
d3dContext_->IASetIndexBuffer(indexBuffer_, DXGI_FORMAT_R32_UINT, 0);
d3dContext_->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
d3dContext_->VSSetShader(firstVS_, 0, 0);
d3dContext_->PSSetShader(firstPS_, 0, 0);
d3dContext_->DrawIndexed(nIndexCount_, 0, 0);
d3dContext_->OMSetRenderTargets(1, &backBufferTarget_, nullptr);
d3dContext_->VSSetShader(secondVS_, 0, 0);
d3dContext_->PSSetShader(secondPS_, 0, 0);
d3dContext_->PSGetShaderResources(0, 1, &gbDepthView_);
d3dContext_->PSSetSamplers(0, 1, &colorMapSampler_);
d3dContext_->DrawIndexed(nIndexCount_, 0, 0);
swapChain_->Present(0, 0);
In this temporary implementation, firstVS_ and secondVS_ are identical, and their only function is to do all the transforms and pass on the data to the PSs.
And finally, here are firstPS_ and secondPS_:
// firstPS_
float4 main(PS_Input frag) : SV_TARGET
{
return float4(1.0f, 1.0f, 1.0f, 1.0f);
}
// secondPS_
Texture2D<float> depthMap_ : register(t0);
SamplerState colorSampler_ : register(s0);
float4 main(PS_Input frag) : SV_TARGET
{
float4 psOut;
psOut.xyz = depthMap_.Sample(colorSampler_, frag.tex0).xxx;
psOut.w = 1.0f;
return psOut;
}
So, my actual questions:
1) All this code compiles without any issues, but when I sample the depth buffer, it just turns out black. I read this could be caused by having your depth & stencil view bound by D3D11DeviceContext::OMSetRenderTargets() at the time you want to sample the depth buffer. I fixed that, but the buffer is still black. I checked the graphics debugger, with no success. So, is my depth buffer not getting written correctly, or am I sampling the wrong way? firstPS_ works fine.
2) Speaking of sampling, the book I'm using just says "we'll be using a point sampler," but I have no idea what is exactly meant. Now I'm just using a standard texture map sampler, but is there something else I should sample with?
3) Also, the book uses the SamplerState.Gather() function in secondPS_, but when I tried that it complained that "the expression could not be mapped to pixel shader instruction set." Is Gather() an error in the book, or is it my GPU (D3D feature level 11.0) that doesn't understand what that is? Is Sample() good enough for what I want to do? The original use of Gather() was in the context of creating a silhouette around objects in the depth buffer.
4) I tried to get secondVS_ to draw nothing but a full-screen quad, but FXC complained about my use of SV_VertexID as "invalid," saying that my type should be integral, even though it already was. I read somewhere that SV_VertexID can only be used by the first VS in a pipeline. Is that the problem here? How do I solve this in this particular case? In my current inefficient solution, is the problem being caused by the UVs?
1) You've called PSGetShaderResources instead of PSSetShaderResources. Also, MostDetailedMip should be 0, not -1.
2) A "point sampler" is just a texture sampler with the Filter field set to something like D3D11_FILTER_MIN_MAG_MIP_POINT (see the sketch after this list).
3) Gather() is a function on the Texture2D, not the SamplerState as you stated.
4) You get this error if you compile using vs_4_0; try vs_5_0.
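A minimal point-sampler setup for 2), reusing the device pointer from the question (the remaining descriptor fields are common defaults, not taken from the book):
// Point (nearest-neighbour) sampling: no filtering between texels,
// which is usually what you want when reading a G-buffer or depth texture.
D3D11_SAMPLER_DESC pointDesc;
ZeroMemory(&pointDesc, sizeof(pointDesc));
pointDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;
pointDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
pointDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
pointDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
pointDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
pointDesc.MaxLOD = D3D11_FLOAT32_MAX;
ID3D11SamplerState* pointSampler = nullptr;
hresult = d3dDevice_->CreateSamplerState(&pointDesc, &pointSampler);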

Rendering to texture - ClearRenderTargetView() works, but no objects are rendered to the texture (rendering to screen works fine)

I am trying to render the scene to a texture, which should then be displayed in the corner of the screen.
I thought that I could do it this way:
1. Render the scene (my Engine::render() method that sets shaders and makes draw calls) - works ok.
2. Change the render target to the texture.
3. Render the scene again - does not work. The context->ClearRenderTargetView(texture->getRenderTargetView(), { 1.0f, 0.0f, 0.0f, 1.0f }) does set my texture to red (for the scene in step 1 I use a different color), but no objects are rendered onto it.
4. Change the render target back to the original.
5. Render the scene for the last time, with a rectangle in the corner that uses the texture I rendered in step 3 - works ok. I see the scene, and the little rectangle in the corner too. The problem is, it's just red (something went wrong with the rendering in step 3, I guess).
The result (there should be "image in image" instead of red rectangle):
The code for steps 2-4:
context->OMSetRenderTargets(1, &textureRenderTargetView, depthStencilView);
float bg[4] = { 1.0f, 0.0f, 0.0f, 1.0f };
context->ClearRenderTargetView(textureRenderTargetView, bg); //backgroundColor - red, green, blue, alpha
render();
context->OMSetRenderTargets(1, &myRenderTargetView, depthStencilView); //bind render target back to previous value (not to texture)
The render() method does not change (it works in step 1, so why doesn't it work when I render to texture?) and ends with swapChain->Present(0, 0).
I know that ClearRenderTargetView affects my texture (without it, it doesn't change color to red). But the rest of the rendering either does not output to it, or there's another problem.
Did I miss something?
I create the texture, shader resource view and render target for it based on this tutorial (maybe there is an error in my D3D11_TEXTURE2D_DESC?):
D3D11_TEXTURE2D_DESC textureDesc;
D3D11_RENDER_TARGET_VIEW_DESC renderTargetViewDesc;
D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;
//1. create render target
ZeroMemory(&textureDesc, sizeof(textureDesc));
//setup the texture description
//we will need to have this texture bound as a render target AND a shader resource
textureDesc.Width = size.getX();
textureDesc.Height = size.getY();
textureDesc.MipLevels = 1;
textureDesc.ArraySize = 1;
textureDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
textureDesc.SampleDesc.Count = 1;
textureDesc.Usage = D3D11_USAGE_DEFAULT;
textureDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
textureDesc.CPUAccessFlags = 0;
textureDesc.MiscFlags = 0;
//create the texture
device->CreateTexture2D(&textureDesc, NULL, &textureRenderTarget);
//2. create render target view
//setup the description of the render target view.
renderTargetViewDesc.Format = textureDesc.Format;
renderTargetViewDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
renderTargetViewDesc.Texture2D.MipSlice = 0;
//create the render target view
device->CreateRenderTargetView(textureRenderTarget, &renderTargetViewDesc, &textureRenderTargetView);
//3. create shader resource view
//setup the description of the shader resource view.
shaderResourceViewDesc.Format = textureDesc.Format;
shaderResourceViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
shaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
shaderResourceViewDesc.Texture2D.MipLevels = 1;
//create the shader resource view.
device->CreateShaderResourceView(textureRenderTarget, &shaderResourceViewDesc, &texture);
The depth buffer:
D3D11_TEXTURE2D_DESC descDepth;
ZeroMemory(&descDepth, sizeof(descDepth));
descDepth.Width = width;
descDepth.Height = height;
descDepth.MipLevels = 1;
descDepth.ArraySize = 1;
descDepth.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
descDepth.SampleDesc.Count = sampleCount;
descDepth.SampleDesc.Quality = maxQualityLevel;
descDepth.Usage = D3D11_USAGE_DEFAULT;
descDepth.BindFlags = D3D11_BIND_DEPTH_STENCIL;
descDepth.CPUAccessFlags = 0;
descDepth.MiscFlags = 0;
And here goes the swap chain:
DXGI_SWAP_CHAIN_DESC sd;
ZeroMemory(&sd, sizeof(sd));
sd.BufferCount = 1;
sd.BufferDesc.Width = width;
sd.BufferDesc.Height = height;
sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
sd.BufferDesc.RefreshRate.Numerator = numerator; //60
sd.BufferDesc.RefreshRate.Denominator = denominator; //1
sd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
sd.OutputWindow = *hwnd;
sd.SampleDesc.Count = sampleCount; //1 (and 0 for quality) to turn off multisampling
sd.SampleDesc.Quality = maxQualityLevel;
sd.Windowed = fullScreen ? FALSE : TRUE;
sd.Flags = DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH; //allow full-screen switching
// Set the scan line ordering and scaling to unspecified.
sd.BufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
sd.BufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;
// Discard the back buffer contents after presenting.
sd.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
I create the default render target view this way:
//create a render target view
ID3D11Texture2D* pBackBuffer = NULL;
result = swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (LPVOID*)&pBackBuffer);
ERROR_HANDLE(SUCCEEDED(result), L"The swapChain->GetBuffer() failed.", MOD_GRAPHIC);
//Create the render target view with the back buffer pointer.
result = device->CreateRenderTargetView(pBackBuffer, NULL, &myRenderTargetView);
After some debugging, as @Gnietschow suggested, I found an error:
D3D11 ERROR: ID3D11DeviceContext::OMSetRenderTargets: The RenderTargetView at slot 0 is not compatable with the DepthStencilView. DepthStencilViews may only be used with RenderTargetViews if the effective dimensions of the Views are equal, as well as the Resource types, multisample count, and multisample quality.
The RenderTargetView at slot 0 has (w:1680,h:1050,as:1), while the Resource is a Texture2D with (mc:1,mq:0).
The DepthStencilView has (w:1680,h:1050,as:1), while the Resource is a Texture2D with (mc:8,mq:16).
So basically, my render target (the texture) was not using anti-aliasing while my back buffer/depth buffer did.
I had to change SampleDesc.Count to 1 and SampleDesc.Quality to 0 in both the DXGI_SWAP_CHAIN_DESC and the D3D11_TEXTURE2D_DESC to match the values of the texture to which I render. In other words, I had to turn off anti-aliasing when rendering to the texture.
I wonder why render-to-texture does not seem to support anti-aliasing. When I set SampleDesc.Count and SampleDesc.Quality to my standard values (8 and 16, which work fine on my GPU when rendering the scene) for my texture render target, device->CreateTexture2D(...) fails with "invalid parameter" (even when I use those same values everywhere).
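(For what it's worth, render-to-texture can be multisampled, but the Quality value has to be valid for that particular format and sample count, and a 128-bit float format may support fewer quality levels than the back buffer's R8G8B8A8 format. A rough sketch of querying the supported range, assuming the same device pointer:)
UINT numQualityLevels = 0;
device->CheckMultisampleQualityLevels(DXGI_FORMAT_R32G32B32A32_FLOAT, 8, &numQualityLevels);
if (numQualityLevels > 0) { //valid Quality values are 0 .. numQualityLevels-1
    textureDesc.SampleDesc.Count = 8;
    textureDesc.SampleDesc.Quality = numQualityLevels - 1;
    //the render target view would then need D3D11_RTV_DIMENSION_TEXTURE2DMS
}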

RasterTek Drawing 2D DirectX 11 - I am having trouble with what I think is the Z buffer

I am having trouble with the Z buffer. When I implement the 2D texture tutorial from RasterTek, I get the image drawn, but when I navigate the camera past a certain Z position, the 2D texture just does not show.
Here is my code for the depth stencil:
D3DXMatrixOrthoLH(&m_orthoMatrix, (float)screenWidth, (float)screenHeight, screenNear, screenDepth);
// Clear the second depth stencil state before setting the parameters.
ZeroMemory(&depthDisabledStencilDesc, sizeof(depthDisabledStencilDesc));
// Now create a second depth stencil state which turns off the Z buffer for 2D rendering. The only difference is
// that DepthEnable is set to false, all other parameters are the same as the other depth stencil state.
depthDisabledStencilDesc.DepthEnable = false;
depthDisabledStencilDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
depthDisabledStencilDesc.DepthFunc = D3D11_COMPARISON_LESS;
depthDisabledStencilDesc.StencilEnable = true;
depthDisabledStencilDesc.StencilReadMask = 0xFF;
depthDisabledStencilDesc.StencilWriteMask = 0xFF;
depthDisabledStencilDesc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthDisabledStencilDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_INCR;
depthDisabledStencilDesc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
depthDisabledStencilDesc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
depthDisabledStencilDesc.BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthDisabledStencilDesc.BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_DECR;
depthDisabledStencilDesc.BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
depthDisabledStencilDesc.BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
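(Presumably the description is then turned into a state object, as the RasterTek tutorial does; a sketch with the member names assumed from the tutorial's D3DClass:)
//create the depth-disabled state used by TurnZBufferOff() below
result = m_device->CreateDepthStencilState(&depthDisabledStencilDesc, &m_depthDisabledStencilState);
if (FAILED(result))
{
    return false;
}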
Here's the code for the Z buffer:
void D3DClass::TurnZBufferOn()
{
m_deviceContext->OMSetDepthStencilState(m_depthStencilState, 1);
return;
}
void D3DClass::TurnZBufferOff()
{
m_deviceContext->OMSetDepthStencilState(m_depthDisabledStencilState, 1);
return;
}
I have been stuck on this for so long, please help!

Missing some colors from PNG texture in DirectX during loading and saving?

I use standard DirectX functions (like CreateTexture2D, D3DX11SaveTextureToFile and D3DX11CreateShaderResourceViewFromFile) to load a PNG image, render it onto a newly created texture, and then save it to a file. All the textures are power-of-two sizes.
But in the process I have noticed that some colors from the PNG are a little corrupted (similar to, but not the same as, the colors from the source texture). The same goes for transparency (it works for 0% and 100% transparent parts, but not for e.g. 34%).
Are there some big color approximations, or am I doing something wrong? If so, how can I solve it?
Here are the two images (left is the source: slightly different colors and some gradient transparency at the bottom; right is the image after loading the first image, rendering it onto the new texture, and then saving it to a file):
I don't know what causes that behaviour; maybe it's the new texture's description:
textureDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
I have tried to change it to DXGI_FORMAT_R32G32B32A32_FLOAT, but the effect was even stranger:
Here is the code for rendering the source texture onto the new texture:
context->OMSetRenderTargets(1, &renderTargetView, depthStencilView); //to render on new texture instead of the screen
float clearColor[4] = {0.0f, 0.0f, 0.0f, 0.0f}; //red, green, blue, alpha
context->ClearRenderTargetView(renderTargetView, clearColor);
//clear the depth buffer to 1.0 (max depth)
context->ClearDepthStencilView(depthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);
//rendering
turnZBufferOff();
shader->set(context);
object->render(shader, camera, textureManager, context, 0);
swapChain->Present(0, 0);
And in object->render():
UINT stride;
stride = sizeof(Vertex);
UINT offset = 0;
context->IASetVertexBuffers( 0, 1, &buffers->vertexBuffer, &stride, &offset ); //set vertex buffer
context->IASetIndexBuffer( buffers->indexBuffer, DXGI_FORMAT_R16_UINT, 0 ); //set index buffer
context->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST ); //set primitive topology
if(textureID){
context->PSSetShaderResources( 0, 1, &textureManager->get(textureID)->texture);
}
ConstantBuffer2DStructure cbPerObj;
cbPerObj.positionAndScale = XMFLOAT4(center.getX(), center.getY(), halfSize.getX(), halfSize.getY());
cbPerObj.textureCoordinates = XMFLOAT4(textureRectToUse[0].getX(), textureRectToUse[0].getY(), textureRectToUse[1].getX(), textureRectToUse[1].getY());
context->UpdateSubresource(constantBuffer, 0, NULL, &cbPerObj, 0, 0);
context->VSSetConstantBuffers(0, 1, &constantBuffer);
context->PSSetConstantBuffers(0, 1, &constantBuffer);
context->DrawIndexed(6, 0, 0);
The shader is very simple:
VS_OUTPUT VS(float4 inPos : POSITION, float2 inTexCoord : TEXCOORD)
{
    VS_OUTPUT output;
    output.Pos.zw = float2(0.0f, 1.0f);
    //inPos(x,y) = {-1,1}
    output.Pos.xy = (inPos.xy * positionAndScale.zw) + positionAndScale.xy;
    output.TexCoord.xy = inTexCoord.xy * (textureCoordinates.zw - textureCoordinates.xy) + textureCoordinates.xy;
    return output;
}
float4 PS(VS_OUTPUT input) : SV_TARGET
{
    return ObjTexture.Sample(ObjSamplerState, input.TexCoord);
}
As an optimisation I pass the sprite's size as a shader parameter (it works fine; the size of the texture, borders etc. are right).
Did you set up a blend state anywhere? Alpha will not work by default, since the default blend is no blend at all.
Here is a standard alpha blend state:
D3D11_BLEND_DESC desc;
desc.AlphaToCoverageEnable = false;
desc.IndependentBlendEnable = false;
for (int i = 0; i < 8; i++)
{
    desc.RenderTarget[i].BlendEnable = true;
    desc.RenderTarget[i].BlendOp = D3D11_BLEND_OP::D3D11_BLEND_OP_ADD;
    desc.RenderTarget[i].BlendOpAlpha = D3D11_BLEND_OP::D3D11_BLEND_OP_ADD;
    desc.RenderTarget[i].DestBlend = D3D11_BLEND::D3D11_BLEND_INV_SRC_ALPHA;
    desc.RenderTarget[i].DestBlendAlpha = D3D11_BLEND::D3D11_BLEND_ONE;
    desc.RenderTarget[i].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE::D3D11_COLOR_WRITE_ENABLE_ALL;
    desc.RenderTarget[i].SrcBlend = D3D11_BLEND::D3D11_BLEND_SRC_ALPHA;
    desc.RenderTarget[i].SrcBlendAlpha = D3D11_BLEND::D3D11_BLEND_ONE;
}
ID3D11BlendState* state;
device->CreateBlendState(&desc, &state);
return state;
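For the state to have any effect it still has to be bound on the output merger before drawing; a minimal usage sketch (the context variable name is assumed):
float blendFactor[4] = { 0.0f, 0.0f, 0.0f, 0.0f }; //only used by the *_BLEND_FACTOR modes
context->OMSetBlendState(state, blendFactor, 0xffffffff); //0xffffffff = default sample mask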
Also, I would use Clear with the alpha component set to 1 instead of 0.
I'm suggesting that your problems stem from importing a layered Fireworks PNG file. Fireworks layered PNGs retain their layers when imported into other software like Flash and Freehand. However, in order to have an exact replication of a layered Fireworks PNG in Photoshop, it's necessary to export that layered PNG as a flattened PNG. Thus, opening it in Photoshop and flattening it is not the solution; the solution lies in opening it and flattening it in Fireworks. (Note: PNGs can be 8, 24 or 32-bit; maybe that needs to be accounted for in your analysis.)