DirectX Texture Not Drawing Correctly - c++

I'm trying to render a texture to the screen using DirectX without DirectXTK.
This is the texture that I am trying to render on screen (512x512px):
The texture loads correctly but when it is put on the screen, it comes up like this:
I noticed that the rendered image seems to be the texture split four times in the x-direction and many times in the y-direction. The tiles seem to increase in height as the texture is rendered farther down the screen.
I have two thoughts as to how the texture was rendered incorrectly.
I could have initialized the texture incorrectly.
I could have set up my texture sampler improperly.
Regarding improper texture initialization, here is the code that I used to initialize the texture.
Texture2D & Shader Resource View Creation Code
Load Texture Data
This loads the texture from a PNG file into a vector of unsigned chars and sets the width and height of the texture.
std::vector<unsigned char> fileData;
if (!loadFileToBuffer(fileName, fileData))
return nullptr;
std::vector<unsigned char> imageData;
unsigned long width;
unsigned long height;
decodePNG(imageData, width, height, fileData.data(), fileData.size());
Create Texture Description
D3D11_TEXTURE2D_DESC texDesc;
ZeroMemory(&texDesc, sizeof(D3D11_TEXTURE2D_DESC));
texDesc.Width = width;
texDesc.Height = height;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D11_USAGE_DYNAMIC;
texDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
texDesc.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;
Assign Texture Subresource Data
D3D11_SUBRESOURCE_DATA texData;
ZeroMemory(&texData, sizeof(D3D11_SUBRESOURCE_DATA));
texData.pSysMem = (void*)imageData.data();
texData.SysMemPitch = sizeof(unsigned char) * width;
//Create DirectX Texture In The Cache
HR(m_pDevice->CreateTexture2D(&texDesc, &texData, &m_textures[fileName]));
Create Shader Resource View for Texture
D3D11_SHADER_RESOURCE_VIEW_DESC srDesc;
ZeroMemory(&srDesc, sizeof(D3D11_SHADER_RESOURCE_VIEW_DESC));
srDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
srDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srDesc.Texture2D.MipLevels = 1;
HR(m_pDevice->CreateShaderResourceView(m_textures[fileName], &srDesc,
&m_resourceViews[fileName]));
return m_resourceViews[fileName];//This return value is used as "texture" in the next line
Use The Texture Resource
m_pDeviceContext->PSSetShaderResources(0, 1, &texture);
I have messed around with the MipLevels and SampleDesc.Quality variables to see if they were changing something about the texture, but changing them either made the texture black or had no effect.
I also looked into the SysMemPitch variable and made sure that it aligned with the MSDN documentation.
Regarding setting up my sampler incorrectly, here is the code that I used to initialize my sampler.
//Setup Sampler
D3D11_SAMPLER_DESC samplerDesc;
ZeroMemory(&samplerDesc, sizeof(D3D11_SAMPLER_DESC));
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
samplerDesc.BorderColor[0] = 1.0f;
samplerDesc.BorderColor[1] = 1.0f;
samplerDesc.BorderColor[2] = 1.0f;
samplerDesc.BorderColor[3] = 1.0f;
samplerDesc.MinLOD = -FLT_MAX;
samplerDesc.MaxLOD = FLT_MAX;
HR(m_pDevice->CreateSamplerState(&samplerDesc, &m_pSamplerState));
//Use the sampler
m_pDeviceContext->PSSetSamplers(0, 1, &m_pSamplerState);
I have tried different AddressU/V/W types to see if the texture was loaded with an incorrect width/height and was thus shrunk, but changing these did nothing.
My VertexShader passes the texture coordinates through using TEXCOORD0 and my PixelShader uses texture.Sample(samplerState, input.texCoord); to get the color of the pixel.
In summary, I am trying to render a texture, but the texture gets tiled and I am not able to figure out why. What do I need to change to render just one copy of my texture?

I think you assign the wrong pitch:
texData.SysMemPitch = sizeof(unsigned char) * width;
should be
texData.SysMemPitch = 4 * sizeof(unsigned char) * width;
because each pixel has the DXGI_FORMAT_R8G8B8A8_UNORM format and occupies 4 bytes.
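For completeness, a minimal sketch of the corrected subresource setup (variable names are taken from the question; the 4 bytes per pixel follow from DXGI_FORMAT_R8G8B8A8_UNORM, and the image data is assumed to be tightly packed):
//Sketch only: the pitch is bytes per row, not pixels per row
D3D11_SUBRESOURCE_DATA texData;
ZeroMemory(&texData, sizeof(D3D11_SUBRESOURCE_DATA));
texData.pSysMem = (void*)imageData.data();
texData.SysMemPitch = 4 * width; //4 bytes (R8G8B8A8) per pixel * pixels per row
texData.SysMemSlicePitch = 0; //only used for 3D textures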

Related

Drawing is not showing when GDI compatible DC used from IDXGISurface1

I have created a GDI-compatible texture, and the DC I got from it is used to draw lines from one point to another, but the lines are not showing on the view window. No exception is thrown either. Am I missing anything? Has anyone done the same and successfully drawn 2D shapes using a GDI-compatible DC? Please help.
// get texture surface1 and overlay DC from GDI compatible texture 2D
D3D11_TEXTURE2D_DESC desc;
ZeroMemory(&desc, sizeof(desc));
desc.Width = width;
desc.Height = height;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
desc.CPUAccessFlags = 0;
desc.MipLevels = 1;
desc.SampleDesc.Count = 1;
desc.MiscFlags = D3D11_RESOURCE_MISC_GDI_COMPATIBLE;
ID3D11Texture2DPtr texture2D;
IF_FAILED_THROW_HR(renderer->Device()->CreateTexture2D(&desc, nullptr, &texture2D));
// Create the shader resource view.
ID3D11ShaderResourceViewPtr shaderResourceView;
IF_FAILED_THROW_HR(device->CreateShaderResourceView(texture2D, nullptr, &shaderResourceView));
ID3D11ResourcePtr resource;
shaderResourceView->GetResource(&resource);
m_texture2D = resource;
m_dxgiSurface1 = m_texture2D;
TRY_CONDITION(m_dxgiSurface1);
HDC hdc{};
IF_FAILED_THROW_HR(m_dxgiSurface1->GetDC(FALSE, &hdc));
DXGI_SURFACE_DESC descOverlay = {0};
m_dxgiSurface1->GetDesc(&descOverlay);
// Draw on the DC using GDI
// fill the texture with the color key
::SetBkColor(hdc, m_keyColor);
const auto overlayRect = CRect{ 0, 0, gsl::narrow_cast<int>(descOverlay.Width), gsl::narrow_cast<int>(descOverlay.Height) };
::ExtTextOut(hdc, 0, 0, ETO_OPAQUE, overlayRect, nullptr, 0, nullptr);
m_dxgiSurface1->ReleaseDC(nullptr);
Update:
I have edited the source code above: I create the shader resource view from the GDI-compatible texture, then take the texture back from the resource to the surface1, and the surface1 provides a DC which is used for GDI drawing. Rendering is now smooth, but no GDI drawing is visible.
The texture produced by this GDI drawing is meant to be mixed with other textures. I expected to see the drawings over those textures, but later found my mistake: the GDI drawing texture was never mixed with the other textures in the shader programs, so it was not rendered as an overlay. That is why the drawing appeared to be missing.
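For reference, a minimal sketch of what that missing mixing step implies on the binding side, so the pixel shader can actually sample both the scene texture and the GDI overlay (sceneSRV, overlaySRV, deviceContext and the slot numbers are assumptions, not names from the code above):
//Sketch only: bind the scene texture and the GDI overlay texture together
//overlaySRV is the view created from the GDI-compatible texture above
ID3D11ShaderResourceView* views[2] = { sceneSRV, overlaySRV };
deviceContext->PSSetShaderResources(0, 2, views);
//...then draw the composited geometry with a pixel shader that samples both
//textures and blends the overlay using the color key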

DirectX: Drawing a bitmap image scaled up in the viewport causes low quality?

I'm using DirectX to draw images from RGB data in a buffer. The following is a summary of the code:
// create the vertex buffer
D3D11_BUFFER_DESC bd;
ZeroMemory(&bd, sizeof(bd));
bd.Usage = D3D11_USAGE_DYNAMIC; // write access by the CPU, read access by the GPU
bd.ByteWidth = sizeOfOurVertices; // size is the VERTEX struct * pW*pH
bd.BindFlags = D3D11_BIND_VERTEX_BUFFER; // use as a vertex buffer
bd.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE; // allow CPU to write in buffer
dev->CreateBuffer(&bd, NULL, &pVBuffer); // create the buffer
//Create Sample for texture
D3D11_SAMPLER_DESC desc;
desc.Filter = D3D11_FILTER_ANISOTROPIC;
desc.MaxAnisotropy = 16;
ID3D11SamplerState *ppSamplerState = NULL;
dev->CreateSamplerState(&desc, &ppSamplerState);
devcon->PSSetSamplers(0, 1, &ppSamplerState);
//Create list vertices from RGB data buffer
pW = bitmapSource->PixelWidth;
pH = bitmapSource->PixelHeight;
OurVertices = new VERTEX[pW*pH];
vIndex = 0;
unsigned char* curP = rgbPixelsBuff;
for (y = 0; y < pH; y++)
{
for (x = 0; x < pW; x++)
{
OurVertices[vIndex].Color.b = *curP++;
OurVertices[vIndex].Color.g = *curP++;
OurVertices[vIndex].Color.r = *curP++;
OurVertices[vIndex].Color.a = *curP++;
OurVertices[vIndex].X = x;
OurVertices[vIndex].Y = y;
OurVertices[vIndex].Z = 0.0f;
vIndex++;
}
}
sizeOfOurVertices = sizeof(VERTEX)* pW*pH;
// copy the vertices into the buffer
D3D11_MAPPED_SUBRESOURCE ms;
devcon->Map(pVBuffer, NULL, D3D11_MAP_WRITE_DISCARD, NULL, &ms); // map the buffer
memcpy(ms.pData, OurVertices, sizeOfOurVertices); // copy the data
devcon->Unmap(pVBuffer, NULL);
// unmap the buffer
// clear the back buffer to a deep blue
devcon->ClearRenderTargetView(backbuffer, D3DXCOLOR(0.0f, 0.2f, 0.4f, 1.0f));
// select which vertex buffer to display
UINT stride = sizeof(VERTEX);
UINT offset = 0;
devcon->IASetVertexBuffers(0, 1, &pVBuffer, &stride, &offset);
// select which primitive type we are using
devcon->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_POINTLIST);
// draw the vertex buffer to the back buffer
devcon->Draw(pW*pH, 0);
// switch the back buffer and the front buffer
swapchain->Present(0, 0);
When the viewport's size is smaller than or equal to the image's size, everything is OK. But when the viewport's size is larger than the image's size, the image quality becomes very bad.
I've searched and tried desc.Filter = D3D11_FILTER_ANISOTROPIC; as in the code above (I've also tried D3D11_FILTER_MIN_POINT_MAG_MIP_LINEAR and D3D11_FILTER_MIN_LINEAR_MAG_MIP_POINT), but the result is no better. The following images show the result:
Can someone tell me how to fix it?
Many thanks!
You are drawing each pixel as a point using DirectX. It is normal that when the screen size gets bigger, your points move apart and the quality drops. You should draw a textured quad instead, using a texture that you fill with your RGB data and sample in a pixel shader.
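A minimal sketch of that approach, reusing the dev/devcon/rgbPixelsBuff/pW/pH names from the question (everything else here is hypothetical, and the quad's vertex buffer and input layout are assumed to already exist):
//Sketch only: upload the pixel buffer into one texture per frame and draw a
//single textured quad, letting the sampler do the filtering.
D3D11_TEXTURE2D_DESC td = {};
td.Width = pW;
td.Height = pH;
td.MipLevels = 1;
td.ArraySize = 1;
td.Format = DXGI_FORMAT_B8G8R8A8_UNORM; //matches the b,g,r,a byte order above
td.SampleDesc.Count = 1;
td.Usage = D3D11_USAGE_DYNAMIC;
td.BindFlags = D3D11_BIND_SHADER_RESOURCE;
td.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
ID3D11Texture2D* tex = NULL;
ID3D11ShaderResourceView* srv = NULL;
dev->CreateTexture2D(&td, NULL, &tex);
dev->CreateShaderResourceView(tex, NULL, &srv);
//Copy row by row; the mapped RowPitch may be larger than pW * 4
D3D11_MAPPED_SUBRESOURCE ms;
devcon->Map(tex, 0, D3D11_MAP_WRITE_DISCARD, 0, &ms);
for (UINT y = 0; y < (UINT)pH; y++)
    memcpy((BYTE*)ms.pData + y * ms.RowPitch, rgbPixelsBuff + y * pW * 4, pW * 4);
devcon->Unmap(tex, 0);
//Bind the texture and draw 4 vertices (position + texcoord) as a quad;
//the pixel shader just returns tex.Sample(samplerState, uv).
devcon->PSSetShaderResources(0, 1, &srv);
devcon->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
devcon->Draw(4, 0);
Scaling then happens in the sampler (linear or anisotropic filtering) rather than by spreading point primitives apart, which is what causes the gaps when the viewport is larger than the image.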

Wrong depth buffer (to texture) output?

For the SSAO effect I have to generate two textures: normals (in view space) and depth.
I decided to use the depth buffer as a texture, following the Microsoft tutorial (the "Reading the Depth-Stencil Buffer as a Texture" chapter).
Unfortunately, after rendering I get no information from the depth buffer (the lower image):
I guess that's not right. What is strange is that the depth buffer itself seems to work (I get the right order of faces, etc.).
The depth buffer code:
//create depth stencil texture (depth buffer)
D3D11_TEXTURE2D_DESC descDepth;
ZeroMemory(&descDepth, sizeof(descDepth));
descDepth.Width = width;
descDepth.Height = height;
descDepth.MipLevels = 1;
descDepth.ArraySize = 1;
descDepth.Format = DXGI_FORMAT_R24G8_TYPELESS;
descDepth.SampleDesc.Count = antiAliasing.getCount();
descDepth.SampleDesc.Quality = antiAliasing.getQuality();
descDepth.Usage = D3D11_USAGE_DEFAULT;
descDepth.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
descDepth.CPUAccessFlags = 0;
descDepth.MiscFlags = 0;
ID3D11Texture2D* depthStencil = NULL;
result = device->CreateTexture2D(&descDepth, NULL, &depthStencil);
ERROR_HANDLE(SUCCEEDED(result), L"Could not create depth stencil texture.", MOD_GRAPHIC);
D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;
//setup the description of the shader resource view
shaderResourceViewDesc.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
shaderResourceViewDesc.ViewDimension = antiAliasing.isOn() ? D3D11_SRV_DIMENSION_TEXTURE2DMS : D3D11_SRV_DIMENSION_TEXTURE2D;
shaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
shaderResourceViewDesc.Texture2D.MipLevels = 1;
//create the shader resource view.
ERROR_HANDLE(SUCCEEDED(device->CreateShaderResourceView(depthStencil, &shaderResourceViewDesc, &depthStencilShaderResourceView)),
L"Could not create shader resource view for depth buffer.", MOD_GRAPHIC);
createDepthStencilStates();
//set the depth stencil state.
context->OMSetDepthStencilState(depthStencilState3D, 1);
D3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDesc;
// Initialize the depth stencil view.
ZeroMemory(&depthStencilViewDesc, sizeof(depthStencilViewDesc));
// Set up the depth stencil view description.
depthStencilViewDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthStencilViewDesc.ViewDimension = antiAliasing.isOn() ? D3D11_DSV_DIMENSION_TEXTURE2DMS : D3D11_DSV_DIMENSION_TEXTURE2D;
depthStencilViewDesc.Texture2D.MipSlice = 0;
//depthStencilViewDesc.Flags = D3D11_DSV_READ_ONLY_DEPTH;
// Create the depth stencil view.
result = device->CreateDepthStencilView(depthStencil, &depthStencilViewDesc, &depthStencilView);
ERROR_HANDLE(SUCCEEDED(result), L"Could not create depth stencil view.", MOD_GRAPHIC);
After rendering the first pass, I set the depth stencil as a texture resource along with the other render targets (color, normals), appending it to an array:
ID3D11ShaderResourceView ** textures = new ID3D11ShaderResourceView *[targets.size()+1];
for (unsigned i = 0; i < targets.size(); i++) {
textures[i] = targets[i]->getShaderResourceView();
}
textures[targets.size()] = depthStencilShaderResourceView;
context->PSSetShaderResources(0, targets.size()+1, textures);
Before the second pass I call context->OMSetRenderTargets(1, &myRenderTargetView, NULL); to unbind the depth buffer (so I can use it as a texture).
Then I render my textures (the render targets from the first pass plus the depth buffer) with a trivial post-process shader, just for debugging (second pass):
Texture2D ColorTexture[3];
SamplerState ObjSamplerState;
float4 main(VS_OUTPUT input) : SV_TARGET0{
float4 Color;
Color = float4(0, 1, 1, 1);
float2 textureCoordinates = input.textureCoordinates.xy * 2;
if (input.textureCoordinates.x < 0.5f && input.textureCoordinates.y < 0.5f) {
Color = ColorTexture[0].Sample(ObjSamplerState, textureCoordinates);
}
if (input.textureCoordinates.x > 0.5f && input.textureCoordinates.y < 0.5f) {
textureCoordinates.x -= 0.5f;
Color = ColorTexture[1].Sample(ObjSamplerState, textureCoordinates);
}
if (input.textureCoordinates.x < 0.5f && input.textureCoordinates.y > 0.5f) { //depth texture
textureCoordinates.y -= 0.5f;
Color = ColorTexture[2].Sample(ObjSamplerState, textureCoordinates);
}
...
It works fine for the normals texture. Why doesn't it work for the depth buffer (as a shader resource view)?
As per comments:
The texture was rendered and sampled correctly but the data appeared to be uniformly red due to the data lying between 0.999 and 1.0f.
There are a few things you can do to improve the available depth precision; the simplest is to ensure your near and far clip distances are not excessively small/large for the scene you're drawing.
Assuming metres are your unit, a near clip of 0.1 (10 cm) and a far clip of 200 (metres) are much more reasonable than 1 cm and 20 km.
Even so, don't expect to see too many black/dark areas; the non-linear nature of a z-buffer still means most of your depth values are shunted up towards 1. If visualisation of the depth buffer is important, simply rescale the data to the normalised 0-1 range before displaying it.
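A minimal sketch of that rescaling, assuming a standard D3D perspective projection where nearZ/farZ match the projection matrix (the same expression can be used directly in the debug pixel shader):
//Sketch only: map a non-linear depth-buffer sample (0..1) back to view-space Z,
//then normalise it to 0..1 between the near and far planes for display.
float LinearizeDepth(float depthSample, float nearZ, float farZ)
{
    float viewZ = (nearZ * farZ) / (farZ - depthSample * (farZ - nearZ));
    return (viewZ - nearZ) / (farZ - nearZ); //0 at the near plane, 1 at the far plane
}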

Rendering to texture - ClearRenderTargetView() works, but no objects are rendered to the texture (rendering to screen works fine)

I'm trying to render the scene to a texture which should then be displayed in a corner of the screen.
I thought that I could do it this way:
1. Render the scene (my Engine::render() method that sets shaders and makes draw calls) - works OK.
2. Change the render target to the texture.
3. Render the scene again - does not work. The context->ClearRenderTargetView(texture->getRenderTargetView(), { 1.0f, 0.0f, 0.0f, 1.0f } ) does set my texture to a red color (for the scene in step 1 I use a different color), but no objects are being rendered onto it.
4. Change the render target back to the original one.
5. Render the scene for the last time, with a rectangle at the corner that uses the texture I rendered in step 3 - works OK. I see the scene, and the little rectangle in the corner too. The problem is, it's just red (something went wrong with the rendering in step 3, I guess).
The result (there should be "image in image" instead of red rectangle):
The code for steps 2-4:
context->OMSetRenderTargets(1, &textureRenderTargetView, depthStencilView);
float bg[4] = { 1.0f, 0.0f, 0.0f, 1.0f };
context->ClearRenderTargetView(textureRenderTargetView, bg); //backgroundColor - red, green, blue, alpha
render();
context->OMSetRenderTargets(1, &myRenderTargetView, depthStencilView); //bind render target back to previous value (not to texture)
The render() method does not change (it works in step 1, so why doesn't it work when I render to the texture?) and ends with swapChain->Present(0, 0).
I know that ClearRenderTargetView affects my texture (without it, the texture doesn't change color to red). But the rest of the rendering either does not output to it, or there's another problem.
Did I miss something?
I create the texture, shader resource view and render target for it based on this tutorial (maybe there is an error in my D3D11_TEXTURE2D_DESC?):
D3D11_TEXTURE2D_DESC textureDesc;
D3D11_RENDER_TARGET_VIEW_DESC renderTargetViewDesc;
D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;
//1. create render target
ZeroMemory(&textureDesc, sizeof(textureDesc));
//setup the texture description
//we will need to have this texture bound as a render target AND a shader resource
textureDesc.Width = size.getX();
textureDesc.Height = size.getY();
textureDesc.MipLevels = 1;
textureDesc.ArraySize = 1;
textureDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
textureDesc.SampleDesc.Count = 1;
textureDesc.Usage = D3D11_USAGE_DEFAULT;
textureDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
textureDesc.CPUAccessFlags = 0;
textureDesc.MiscFlags = 0;
//create the texture
device->CreateTexture2D(&textureDesc, NULL, &textureRenderTarget);
//2. create render target view
//setup the description of the render target view.
renderTargetViewDesc.Format = textureDesc.Format;
renderTargetViewDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
renderTargetViewDesc.Texture2D.MipSlice = 0;
//create the render target view
device->CreateRenderTargetView(textureRenderTarget, &renderTargetViewDesc, &textureRenderTargetView);
//3. create shader resource view
//setup the description of the shader resource view.
shaderResourceViewDesc.Format = textureDesc.Format;
shaderResourceViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
shaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
shaderResourceViewDesc.Texture2D.MipLevels = 1;
//create the shader resource view.
device->CreateShaderResourceView(textureRenderTarget, &shaderResourceViewDesc, &texture);
The depth buffer:
D3D11_TEXTURE2D_DESC descDepth;
ZeroMemory(&descDepth, sizeof(descDepth));
descDepth.Width = width;
descDepth.Height = height;
descDepth.MipLevels = 1;
descDepth.ArraySize = 1;
descDepth.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
descDepth.SampleDesc.Count = sampleCount;
descDepth.SampleDesc.Quality = maxQualityLevel;
descDepth.Usage = D3D11_USAGE_DEFAULT;
descDepth.BindFlags = D3D11_BIND_DEPTH_STENCIL;
descDepth.CPUAccessFlags = 0;
descDepth.MiscFlags = 0;
And here goes the swap chain:
DXGI_SWAP_CHAIN_DESC sd;
ZeroMemory(&sd, sizeof(sd));
sd.BufferCount = 1;
sd.BufferDesc.Width = width;
sd.BufferDesc.Height = height;
sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
sd.BufferDesc.RefreshRate.Numerator = numerator; //60
sd.BufferDesc.RefreshRate.Denominator = denominator; //1
sd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
sd.OutputWindow = *hwnd;
sd.SampleDesc.Count = sampleCount; //1 (and 0 for quality) to turn off multisampling
sd.SampleDesc.Quality = maxQualityLevel;
sd.Windowed = fullScreen ? FALSE : TRUE;
sd.Flags = DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH; //allow full-screen switching
// Set the scan line ordering and scaling to unspecified.
sd.BufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
sd.BufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;
// Discard the back buffer contents after presenting.
sd.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
I create the default render target view this way:
//create a render target view
ID3D11Texture2D* pBackBuffer = NULL;
result = swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (LPVOID*)&pBackBuffer);
ERROR_HANDLE(SUCCEEDED(result), L"The swapChain->GetBuffer() failed.", MOD_GRAPHIC);
//Create the render target view with the back buffer pointer.
result = device->CreateRenderTargetView(pBackBuffer, NULL, &myRenderTargetView);
After some debugging, as @Gnietschow suggested, I found an error:
D3D11 ERROR: ID3D11DeviceContext::OMSetRenderTargets:
The RenderTargetView at slot 0 is not compatable with the
DepthStencilView. DepthStencilViews may only be used with
RenderTargetViews if the effective dimensions of the Views are equal,
as well as the Resource types, multisample count, and multisample
quality.
The RenderTargetView at slot 0 has (w:1680,h:1050,as:1), while the
Resource is a Texture2D with (mc:1,mq:0).
The DepthStencilView has
(w:1680,h:1050,as:1), while the Resource is a Texture2D with
(mc:8,mq:16).
So basically, my render target (the texture) was not using anti-aliasing while my back buffer/depth buffer did.
I had to change SampleDesc.Count to 1 and SampleDesc.Quality to 0 in both the DXGI_SWAP_CHAIN_DESC and the D3D11_TEXTURE2D_DESC to match the values of the texture to which I render. In other words, I had to turn off anti-aliasing when rendering to the texture.
I wonder why render-to-texture does not seem to support anti-aliasing. When I set SampleDesc.Count and SampleDesc.Quality to my standard values (8 and 16, which work fine on my GPU when rendering the scene) for my texture render target, device->CreateTexture2D(...) fails with "invalid parameter" (even when I use those same values everywhere).
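Render-to-texture does support MSAA in general; the "invalid parameter" failure most likely comes from the SampleDesc values: for a given format and sample count, the quality must be strictly less than the number of quality levels the device reports, and some formats support fewer sample counts than the back buffer format does. A minimal sketch of the check (CheckMultisampleQualityLevels is a standard ID3D11Device call; the surrounding names come from the code above):
//Sketch only: ask the device how many quality levels it supports for 8x MSAA
//on this format, then pick a valid quality value (must be < qualityLevels).
UINT qualityLevels = 0;
device->CheckMultisampleQualityLevels(DXGI_FORMAT_R32G32B32A32_FLOAT, 8, &qualityLevels);
if (qualityLevels > 0)
{
    textureDesc.SampleDesc.Count = 8;
    textureDesc.SampleDesc.Quality = qualityLevels - 1;
    //the render target view would then need D3D11_RTV_DIMENSION_TEXTURE2DMS,
    //and the depth buffer must use exactly the same count/quality
}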

Depth stencil not working - DirectX 10 C++

I have a DirectX10 + C++ problem.
Basically we're at the early stages of rendering, and for some reason our depth stencil seems to be failing to understand our model. Basically, here is everything we are doing:
Load shader, model and texture
Initialize DirectX
Draw
The model, shader and texture all load and work correctly, however (as shown in the screenshot below), the depth stencil is clearly not doing its job and the shader is being used in the wrong places. I have also included our initialization method in case you need it to figure this out. We believe we have tried almost everything, but knowing our luck we have probably missed one important line of code ^.^
We also saw that someone else had the same problem; however, their fix didn't work for us (their problem was that they had set the near clipping plane to 0.0, but ours is not 0.0, so that is not the problem).
Thanks in advance!
Problem screenshot
void GraphicsDeviceDirectX::InitGraphicsDevice(HWND hWnd)
{
DXGI_SWAP_CHAIN_DESC scd; // create a struct to hold various swap chain information
ZeroMemory(&scd, sizeof(DXGI_SWAP_CHAIN_DESC)); // clear out the struct for use
scd.BufferCount = 2; // create two buffers, one for the front, one for the back
scd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; // use 32-bit color
scd.BufferDesc.Height = 600;
scd.BufferDesc.Width = 600;
scd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; // tell how the chain is to be used
scd.OutputWindow = hWnd; // set the window to be used by Direct3D
scd.SampleDesc.Count = 1; // set the level of multi-sampling
scd.SampleDesc.Quality = 0; // set the quality of multi-sampling
scd.Windowed = true; // set to windowed or full-screen mode
//set scan line ordering and scaling
scd.BufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
scd.BufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;
//discard back buffer contents
scd.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
//don't set advanced flags
scd.Flags = 0;
// create a device class and swap chain class using the information in the scd struct
if(FAILED(D3D10CreateDeviceAndSwapChain(NULL,
D3D10_DRIVER_TYPE_HARDWARE,
NULL,
D3D10_CREATE_DEVICE_DEBUG,
D3D10_SDK_VERSION,
&scd,
&swapchain,
&device)))
{
throw EngineException("Error creating graphics device");
}
//Push graphics device to Persistent Object Manager
//PerObjMan::Push(device);
//Push swapchain to Persistent Object Manager
PerObjMan::Push(swapchain);
// get the address of the back buffer and use it to create the render target
ID3D10Texture2D* pBackBuffer;
swapchain->GetBuffer(0, __uuidof(ID3D10Texture2D), (LPVOID*)&pBackBuffer);
device->CreateRenderTargetView(pBackBuffer, NULL, &rtv);
/*D3D10_TEXTURE2D_DESC descBack;
pBackBuffer->GetDesc(&descBack);*/
pBackBuffer->Release();
pBackBuffer = NULL;
//Push graphics device to Persistant Object Manager
PerObjMan::Push(rtv);
ID3D10Texture2D* pDepthStencil = NULL;
D3D10_TEXTURE2D_DESC descDepth;
ZeroMemory(&descDepth, sizeof(descDepth));
descDepth.Width = 600;
descDepth.Height = 600;
descDepth.MipLevels = 1;
descDepth.ArraySize = 1;
descDepth.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
descDepth.SampleDesc.Count = 1;
descDepth.SampleDesc.Quality = 0;
descDepth.Usage = D3D10_USAGE_DEFAULT;
descDepth.BindFlags = D3D10_BIND_DEPTH_STENCIL;
descDepth.CPUAccessFlags = 0;
descDepth.MiscFlags = 0;
HRESULT hr;
hr = GetGraphicsDevice()->CreateTexture2D( &descDepth, NULL, &pDepthStencil );
if(FAILED(hr))
throw EngineException("FAIL");
PerObjMan::Push(pDepthStencil);
D3D10_DEPTH_STENCIL_DESC dsDesc;
ZeroMemory(&dsDesc, sizeof(dsDesc));
// Depth test parameters
dsDesc.DepthEnable = true;
dsDesc.DepthWriteMask = D3D10_DEPTH_WRITE_MASK::D3D10_DEPTH_WRITE_MASK_ALL;
dsDesc.DepthFunc = D3D10_COMPARISON_FUNC::D3D10_COMPARISON_LESS;
// Stencil test parameters
dsDesc.StencilEnable = false;
dsDesc.StencilReadMask = 0xFF;
dsDesc.StencilWriteMask = 0xFF;
// Stencil operations if pixel is front-facing.
dsDesc.FrontFace.StencilFailOp = D3D10_STENCIL_OP_KEEP;
dsDesc.FrontFace.StencilDepthFailOp = D3D10_STENCIL_OP_INCR;
dsDesc.FrontFace.StencilPassOp = D3D10_STENCIL_OP_KEEP;
dsDesc.FrontFace.StencilFunc = D3D10_COMPARISON_ALWAYS;
// Stencil operations if pixel is back-facing.
dsDesc.BackFace.StencilFailOp = D3D10_STENCIL_OP_KEEP;
dsDesc.BackFace.StencilDepthFailOp = D3D10_STENCIL_OP_DECR;
dsDesc.BackFace.StencilPassOp = D3D10_STENCIL_OP_KEEP;
dsDesc.BackFace.StencilFunc = D3D10_COMPARISON_ALWAYS;
// Create depth stencil state
hr = device->CreateDepthStencilState(&dsDesc, &dss);
if(FAILED(hr))
throw EngineException("FAIL");
// Bind depth stencil state
device->OMSetDepthStencilState(dss, 1);
PerObjMan::Push(dss);
D3D10_DEPTH_STENCIL_VIEW_DESC descDSV;
ZeroMemory(&descDSV, sizeof(descDSV));
descDSV.Format = descDepth.Format;
descDSV.ViewDimension = D3D10_DSV_DIMENSION::D3D10_DSV_DIMENSION_TEXTURE2D;
descDSV.Texture2D.MipSlice = 0;
// Create the depth stencil view
hr = device->CreateDepthStencilView( pDepthStencil, // Depth stencil texture
&descDSV, // Depth stencil desc
&dsv ); // [out] Depth stencil view
if(FAILED(hr))
throw EngineException("FAIL");
PerObjMan::Push(dsv);
// Bind the depth stencil view
device->OMSetRenderTargets( 1, // One rendertarget view
&rtv, // Render target view, created earlier
dsv); // Depth stencil view for the render target
D3D10_VIEWPORT viewport; // create a struct to hold the viewport data
ZeroMemory(&viewport, sizeof(D3D10_VIEWPORT)); // clear out the struct for use
GameToImplement::GameInfo::Info info = GameToImplement::GameInfo::GetGameInfo();
RECT rect;
int width = 0;
int height = 0;
if(GetClientRect(hWnd, &rect))
{
width = rect.right - rect.left;
height = rect.bottom - rect.top;
}
else
{
throw EngineException("");
}
viewport.TopLeftX = 0; // set the left to 0
viewport.TopLeftY = 0; // set the top to 0
viewport.Width = 600; // set the width to the window's width
viewport.Height = 600; // set the height to the window's height
viewport.MinDepth = 0.0f;
viewport.MaxDepth = 1.0f;
device->RSSetViewports(1, &viewport); // set the viewport
}
I fixed it, thanks to catflier's nod in the right direction. Turns out I was actually releasing the rasterizer state too early for the depth stencil to be used.
I'll leave this answer here for anyone who has the same problem.