Failed to create depth stencil in DirectX11 - c++

I have the following code:
D3D11_TEXTURE2D_DESC descDepth;
memset(&descDepth, 0, sizeof(descDepth));
descDepth.Width = width;
descDepth.Height = height;
descDepth.MipLevels = 1;
descDepth.ArraySize = 1;
descDepth.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
descDepth.SampleDesc.Count = 1;
descDepth.SampleDesc.Quality = 0;
descDepth.Usage = D3D11_USAGE_DEFAULT;
descDepth.BindFlags = D3D11_BIND_DEPTH_STENCIL;
descDepth.CPUAccessFlags = 0;
descDepth.MiscFlags = 0;
hr = g_d3dDevice->CreateTexture2D(&descDepth, nullptr, &g_depthStencil);
and I get an invalid-argument error (E_INVALIDARG). I really don't know what the problem is. I used this code before and everything worked fine.

If you turned the debug layer on at device creation, you should find the exact cause in the debug output. If the complaint is about the format: a depth texture that also needs to be sampled cannot be created directly as DXGI_FORMAT_D24_UNORM_S8_UINT. You create the texture with the typeless DXGI_FORMAT_R24G8_TYPELESS, then use DXGI_FORMAT_D24_UNORM_S8_UINT for the depth stencil view and DXGI_FORMAT_R24_UNORM_X8_TYPELESS for the shader resource view.
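As a minimal sketch of that split, assuming you also want to sample the depth buffer in a shader (width, height, and g_d3dDevice as in the question):
// Typeless storage, so a depth view and a shader view can both alias it.
D3D11_TEXTURE2D_DESC descDepth = {};
descDepth.Width = width;
descDepth.Height = height;
descDepth.MipLevels = 1;
descDepth.ArraySize = 1;
descDepth.Format = DXGI_FORMAT_R24G8_TYPELESS;
descDepth.SampleDesc.Count = 1;
descDepth.Usage = D3D11_USAGE_DEFAULT;
descDepth.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
ID3D11Texture2D* depthTexture = nullptr;
HRESULT hr = g_d3dDevice->CreateTexture2D(&descDepth, nullptr, &depthTexture);
// The depth stencil view applies the typed depth format.
D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
dsvDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
ID3D11DepthStencilView* dsv = nullptr;
hr = g_d3dDevice->CreateDepthStencilView(depthTexture, &dsvDesc, &dsv);
// The shader resource view exposes the 24-bit depth channel for sampling.
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = 1;
ID3D11ShaderResourceView* srv = nullptr;
hr = g_d3dDevice->CreateShaderResourceView(depthTexture, &srvDesc, &srv);
If you only ever bind the texture as a depth target and never sample it, creating it directly as DXGI_FORMAT_D24_UNORM_S8_UINT with D3D11_BIND_DEPTH_STENCIL alone is also legal.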

Related

Texture readback showing all 0s for DirectX 11

I am new to working with textures and DirectX, and I am having issues reading back texture data from the GPU.
I am interested in reading back only a specific subset of my source texture. Also, I am trying to read it back at the least detailed miplevel (1x1 texture). Steps I follow:
1. Copy subregion of source texture into new texture
D3D11_TEXTURE2D_DESC desc = {0};
desc.Width = 1;
desc.Height = 1;
desc.MipLevels = 0;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.SampleDesc.Quality = 0;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
desc.CPUAccessFlags = 0;
desc.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS;
hr = pD3D11Device->CreateTexture2D(&desc, nullptr, &pSrcTexture);
D3D11_BOX srcRegion = {0};
srcRegion.left = 1000;
srcRegion.right = 1250;
srcRegion.top = 500;
srcRegion.bottom = 750;
srcRegion.front = 0;
srcRegion.back = 1;
pD3D11DeviceContext->CopySubresourceRegion(pSrcTexture, 0, 0, 0, 0, srcResource, 0, &srcRegion);
2. Create shader resource view and generate mipmaps for newly created texture
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {0};
srvDesc.Format = desc.Format;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = -1;
srvDesc.Texture2D.MostDetailedMip = 0;
ID3D11ShaderResourceView* pShaderResourceView = nullptr;
hr = pD3D11Device->CreateShaderResourceView(pSrcTexture, &srvDesc, &pShaderResourceView);
pD3D11DeviceContext->GenerateMips(pShaderResourceView);
3. Copy into staging texture to be read back by CPU
D3D11_TEXTURE2D_DESC desc2 = {0};
desc2.Width = 1;
desc2.Height = 1;
desc2.MipLevels = 1;
desc2.ArraySize = 1;
desc2.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc2.SampleDesc.Count = 1;
desc2.SampleDesc.Quality = 0;
desc2.Usage = D3D11_USAGE_STAGING;
desc2.BindFlags = 0;
desc2.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc2.MiscFlags = 0;
ID3D11Texture2D* pStagingTexture = nullptr;
hr = pD3D11Device->CreateTexture2D(&desc2, nullptr, &pStagingTexture);
pD3D11DeviceContext->CopyResource(pStagingTexture, pSrcTexture);
4. Map the subresource to access the underlying data, unmapping when finished
D3D11_MAPPED_SUBRESOURCE mappedResource = {0};
hr = pD3D11DeviceContext->Map(pStagingTexture, 0, D3D11_MAP_READ, 0, &mappedResource);
FLOAT* pTexels = (FLOAT*)mappedResource.pData;
std::cout << pTexels[0] << pTexels[1] << pTexels[2] << pTexels[3] << std::endl; // all zeros here
pD3D11DeviceContext->Unmap(pStagingTexture, 0);
Please note that none of my hr results are failing. Why is my texture data showing as all zeros?
Any guidance on how to resolve this?
CopyResource, CopySubresourceRegion, and GenerateMips do not return an HRESULT, but any of them may still have failed. A good way to determine that is to enable the Direct3D Debug Device and look for debug output. See this blog post and Microsoft Docs.
I suspect the problem is that when you called GenerateMips it didn't do anything, because you provided a 1x1 texture as the starting place, so it doesn't have any mips. I also don't see how you set up srcResource, but you are trying to use CopySubresourceRegion to copy a 250x250 texture region into a 1x1 texture, which is going to fail as well.
You should take a look at DirectXTK, in particular the DDSTextureLoader / WICTextureLoader modules, which implement auto-mip generation, and ScreenGrab, which does read-back.
One minor note: = {0}; was a way to zero-fill structs back in VS 2013 or earlier. With C++11 conformant compilers (VS 2015 or later), just use = {}; as that does the zero-fill.
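Putting those points together, a sketch of one way to restructure the readback, assuming srcResource is an R8G8B8A8_UNORM texture at least 1250x750 so the srcRegion box is valid: size the intermediate texture to the copied region with a full mip chain, generate the mips, copy only the smallest mip into the 1x1 staging texture, and read the texel back as bytes, since R8G8B8A8_UNORM data is not FLOAT:
// 1) Intermediate texture sized to the copied region, with a full mip chain.
D3D11_TEXTURE2D_DESC mipDesc = {};
mipDesc.Width = 250;   // srcRegion is 250x250 (right-left, bottom-top)
mipDesc.Height = 250;
mipDesc.MipLevels = 0; // 0 = allocate every level down to 1x1
mipDesc.ArraySize = 1;
mipDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
mipDesc.SampleDesc.Count = 1;
mipDesc.Usage = D3D11_USAGE_DEFAULT;
mipDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
mipDesc.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS;
ID3D11Texture2D* pMipTexture = nullptr;
hr = pD3D11Device->CreateTexture2D(&mipDesc, nullptr, &pMipTexture);
// 2) Copy the region into mip 0 and build the chain.
pD3D11DeviceContext->CopySubresourceRegion(pMipTexture, 0, 0, 0, 0, srcResource, 0, &srcRegion);
ID3D11ShaderResourceView* pMipSRV = nullptr;
hr = pD3D11Device->CreateShaderResourceView(pMipTexture, nullptr, &pMipSRV);
pD3D11DeviceContext->GenerateMips(pMipSRV);
// 3) Copy only the last (1x1) mip into the 1x1 staging texture.
pMipTexture->GetDesc(&mipDesc); // MipLevels now holds the real count
pD3D11DeviceContext->CopySubresourceRegion(pStagingTexture, 0, 0, 0, 0, pMipTexture, mipDesc.MipLevels - 1, nullptr);
// 4) Map and read the texel as four bytes.
D3D11_MAPPED_SUBRESOURCE mapped = {};
hr = pD3D11DeviceContext->Map(pStagingTexture, 0, D3D11_MAP_READ, 0, &mapped);
const unsigned char* pTexel = static_cast<const unsigned char*>(mapped.pData);
std::cout << (int)pTexel[0] << " " << (int)pTexel[1] << " " << (int)pTexel[2] << " " << (int)pTexel[3] << std::endl;
pD3D11DeviceContext->Unmap(pStagingTexture, 0);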

DX11 CreateDepthStencilView - nullptr

I am currently working on a school project using DirectX11 and have started making the back buffer here. I believe I have followed everything correctly but, when creating my DepthStencilView, I am getting null back; I'm probably doing something stupid but I can't seem to work it out.
Here are all the relevant code snippets.
//Depth and Stencil Buffer
D3D11_TEXTURE2D_DESC depthStencilDesc;
depthStencilDesc.Width = _WindowWidth;
depthStencilDesc.Height = _WindowHeight;
depthStencilDesc.MipLevels = 1;
depthStencilDesc.ArraySize = 1;
depthStencilDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthStencilDesc.SampleDesc.Count = 1;
depthStencilDesc.SampleDesc.Quality = 0;
depthStencilDesc.Usage = D3D11_USAGE_DEFAULT;
depthStencilDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
depthStencilDesc.CPUAccessFlags = 0;
depthStencilDesc.MiscFlags = 0;
_pd3dDevice->CreateTexture2D(&depthStencilDesc, nullptr, &_depthStencilBuffer);
_pd3dDevice->CreateDepthStencilView(_depthStencilBuffer, nullptr, &_depthStencilView);
_pImmediateContext->OMSetRenderTargets(1, &_pRenderTargetView, _depthStencilView);
In my cleanup function:
if (_depthStencilView)_depthStencilView->Release();
if (_depthStencilBuffer) _depthStencilBuffer->Release();
In my draw function:
//Clear depth/stencil
_pImmediateContext->ClearDepthStencilView(_depthStencilView, D3D11_CLEAR_STENCIL, 1.0f, 0);
I worked out how to fix it: it was just the order. I moved things around and it worked perfectly.
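For anyone who hits this later: whatever the ordering, it also helps to check the HRESULTs, since a failed CreateTexture2D silently hands a null buffer to CreateDepthStencilView and you only notice when the view comes back null. A minimal sketch with the same names, assuming the enclosing function returns an HRESULT:
HRESULT hr = _pd3dDevice->CreateTexture2D(&depthStencilDesc, nullptr, &_depthStencilBuffer);
if (FAILED(hr)) return hr; // the error code says why, no need to guess
hr = _pd3dDevice->CreateDepthStencilView(_depthStencilBuffer, nullptr, &_depthStencilView);
if (FAILED(hr)) return hr;
// Bind depth and color together only after both creations succeed.
_pImmediateContext->OMSetRenderTargets(1, &_pRenderTargetView, _depthStencilView);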

Draw Text with FreeType (DirectX 11)

So I would like to draw text with FreeType (on DirectX 11). The problem is that I don't really understand how to implement the my_draw_bitmap function (I am following this tutorial).
I get all the glyphs and convert them to bitmaps, but I don't see how to convert an FT_Bitmap* to an ID3D11Texture2D* (this would, in theory, allow me to render text).
Here is my code:
FontLoader.cpp
void FontLoader::RenderText(ID3D11Device* p_device, Text* p_text, Math::Vec2 p_position)
{
    const char* text = p_text->GetText();
    FT_Face face = p_text->GetFont()->GetFace();
    FT_GlyphSlot slot = face->glyph;
    Math::Vec2 pen = p_position;
    for (unsigned int i = 0; i < strlen(text); ++i)
    {
        if (FT_Load_Char(face, text[i], FT_LOAD_RENDER))
            continue;
        // draw to our target surface
        CreateTextureFromBitmap(p_device, &slot->bitmap, Math::Vec2((float)slot->bitmap_left, (float)slot->bitmap_top));
        // Increment pen position
        pen._x += slot->advance.x >> 6;
    }
}
ID3D11Texture2D* FontLoader::CreateTextureFromBitmap(ID3D11Device* p_device, FT_Bitmap* p_bitmap, Math::Vec2 p_position)
{
    D3D11_TEXTURE2D_DESC textureDesc;
    textureDesc.Width = p_bitmap->width;
    textureDesc.Height = p_bitmap->pitch;
    textureDesc.MipLevels = textureDesc.ArraySize = 1;
    textureDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    textureDesc.SampleDesc.Count = 1;
    textureDesc.Usage = D3D11_USAGE_DYNAMIC;
    textureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    textureDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
    textureDesc.MiscFlags = 0;
    ID3D11Texture2D *texture2D = NULL;
    // don't know how and when to use p_bitmap
    p_device->CreateTexture2D(&textureDesc, NULL, &texture2D);
    return texture2D;
}
Thanks a lot!
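One way to approach the conversion, as a sketch rather than a tested solution: FT_LOAD_RENDER produces an 8-bit grayscale bitmap by default (FT_PIXEL_MODE_GRAY), so the texture can be created as DXGI_FORMAT_R8_UNORM, with rows as the height (pitch is the byte stride of a row, not the height) and the glyph buffer handed to CreateTexture2D through D3D11_SUBRESOURCE_DATA. This assumes a positive pitch, and uses IMMUTABLE since the data is supplied once (your DYNAMIC + Map would also work):
ID3D11Texture2D* FontLoader::CreateTextureFromBitmap(ID3D11Device* p_device, FT_Bitmap* p_bitmap, Math::Vec2 p_position)
{
    D3D11_TEXTURE2D_DESC textureDesc = {};
    textureDesc.Width = p_bitmap->width;
    textureDesc.Height = p_bitmap->rows;       // rows is the height in pixels
    textureDesc.MipLevels = textureDesc.ArraySize = 1;
    textureDesc.Format = DXGI_FORMAT_R8_UNORM; // one gray byte per texel
    textureDesc.SampleDesc.Count = 1;
    textureDesc.Usage = D3D11_USAGE_IMMUTABLE; // data is supplied once, below
    textureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    // This is where p_bitmap comes in: its buffer is the initial texel data.
    D3D11_SUBRESOURCE_DATA initData = {};
    initData.pSysMem = p_bitmap->buffer;
    initData.SysMemPitch = p_bitmap->pitch;    // bytes from one row to the next
    ID3D11Texture2D* texture2D = nullptr;
    p_device->CreateTexture2D(&textureDesc, &initData, &texture2D);
    return texture2D;
}
In the pixel shader you would then sample the single red channel as coverage and multiply it with your text color.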

Direct3D11: Sharing a texture between devices: black texture

I have two D3D11 devices, each with its own context but on the same adapter.
I am trying to share a texture between the two, but the texture I receive on the other side is always black.
HRESULT hr;
// Make a shared texture on device_A / context_A
D3D11_TEXTURE2D_DESC desc;
ZeroMemory(&desc, sizeof(desc));
desc.Width = 1024;
desc.Height = 1024;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.SampleDesc.Quality = 0;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.CPUAccessFlags = 0;
desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
ID3D11Texture2D* copy_tex;
hr = device_A->CreateTexture2D(&desc, NULL, &copy_tex);
// Test the texture by filling it with some color
D3D11_RENDER_TARGET_VIEW_DESC rtvd = {};
rtvd.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
rtvd.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
rtvd.Texture2D.MipSlice = 0;
ID3D11RenderTargetView* copy_tex_view = 0;
hr = device_A->CreateRenderTargetView(copy_tex, &rtvd, &copy_tex_view);
FLOAT clear_color[4] = {1, 0, 0, 1};
context_A->ClearRenderTargetView(copy_tex_view, clear_color);
// Now try to share it to device_B:
IDXGIResource* copy_tex_resource = 0;
hr = copy_tex->QueryInterface( __uuidof(IDXGIResource), (void**)&copy_tex_resource );
HANDLE copy_tex_shared_handle = 0;
hr = copy_tex_resource->GetSharedHandle(&copy_tex_shared_handle);
IDXGIResource* copy_tex_resource_mirror = 0;
hr = device_B->OpenSharedResource(copy_tex_shared_handle, __uuidof(ID3D11Texture2D), (void**)&copy_tex_resource_mirror);
ID3D11Texture2D* copy_tex_mirror = 0;
hr = copy_tex_resource_mirror->QueryInterface(__uuidof(ID3D11Texture2D), (void**)(&copy_tex_mirror));
However: the copy_tex_mirror texture is always black.
I don't get any HRESULT error codes, and can even use copy_tex_mirror on device_B / context_B normally, but I can't get the pixel data that I put into it on device_A.
Am I missing something?
Thanks in advance!
How do you know that the texture is always black? :-)
GPU operations are queued up by Direct3D, so when you open the shared resource on device_B, the ClearRenderTargetView() on device_A might not have been carried out yet. According to the MSDN library documentation on ID3D11Device::OpenSharedResource Method:
If a shared texture is updated on one device ID3D11DeviceContext::Flush must be called on that device.
We had a lot of issues like this when we implemented shared textures between devices at work. If you add D3D9 or OpenGL to the mix, the pitfalls multiply.
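Concretely, the fix the quoted documentation describes is a flush on the producing device before the other device reads, shown here with the names from the question:
// After the clear (or any rendering) into the shared texture on device_A:
context_A->ClearRenderTargetView(copy_tex_view, clear_color);
context_A->Flush(); // submit the queued GPU work so device_B sees the result
// ...then open the shared handle on device_B as in the question.
For ongoing producer/consumer traffic rather than a one-off copy, the more robust option is to create the texture with D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX and synchronize both sides through IDXGIKeyedMutex.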

I am trying to set a chess board texture for a square

I am trying to set a chess board texture with green and blue texels for a square.
Instead my square is black.
I don't get any compile or runtime errors.
Is there any problem with my texture code?
vector<unsigned char> texelsBuffer;
for(int i = 0; i < N; ++i)
    for(int j = 0; j < N; ++j)
    {
        texelsBuffer.push_back(0.0f);
        if((i+j) % 2 == 0) {
            texelsBuffer.push_back(1.0f);
            texelsBuffer.push_back(0.0f);
        }
        else {
            texelsBuffer.push_back(0.0f);
            texelsBuffer.push_back(1.0f);
        }
        texelsBuffer.push_back(1.0f);
    }
ID3D11Texture2D* tex = 0;
D3D11_TEXTURE2D_DESC textureDescr;
ZeroMemory(&textureDescr, sizeof(D3D11_TEXTURE2D_DESC));
textureDescr.ArraySize = 1;
textureDescr.BindFlags = D3D11_BIND_SHADER_RESOURCE;
textureDescr.CPUAccessFlags = 0;
textureDescr.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
textureDescr.Height = N;
textureDescr.Width = N;
textureDescr.MipLevels = 1;
textureDescr.MiscFlags = 0;
textureDescr.SampleDesc.Count = 1;
textureDescr.SampleDesc.Quality = 0;
textureDescr.Usage = D3D11_USAGE_IMMUTABLE;
D3D11_SUBRESOURCE_DATA texInitData;
ZeroMemory(&texInitData, sizeof(D3D11_SUBRESOURCE_DATA));
texInitData.pSysMem = (void*)&texelsBuffer[0];
texInitData.SysMemPitch = sizeof(unsigned char) * N;
texInitData.SysMemSlicePitch = 0;
g_pd3dDevice->CreateTexture2D(&textureDescr, &texInitData, &tex);
g_pd3dDevice->CreateShaderResourceView(tex, NULL, &g_pTextureRV);
Or my sampler code?
D3D11_SAMPLER_DESC sampDesc;
ZeroMemory( &sampDesc, sizeof(sampDesc) );
sampDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
sampDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
sampDesc.MinLOD = 0;
sampDesc.MaxLOD = D3D11_FLOAT32_MAX;
g_pd3dDevice->CreateSamplerState(&sampDesc, &g_pSamplerLinear);
You're populating texelsBuffer, a vector of unsigned char, with floating-point values; 0.0f and 1.0f get converted to the bytes 0 and 1, which are both essentially black. Use 255 or 0xFF to set a channel to 1.0 in UNORM encoding.
You should also consider making your chess squares larger than one texel; unless you are always going to be rendering the texture at a 1:1 pixel:texel ratio, you will end up blending between the two colors for most sample locations.
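A sketch of the corrected fill, writing UNORM bytes and keeping the question's green/blue pattern; note also that SysMemPitch is the byte width of one row, which for DXGI_FORMAT_R8G8B8A8_UNORM is 4 * N rather than N:
// One RGBA8 texel per board cell: green on even cells, blue on odd ones.
vector<unsigned char> texelsBuffer;
for(int i = 0; i < N; ++i)
    for(int j = 0; j < N; ++j)
    {
        const bool even = (i + j) % 2 == 0;
        texelsBuffer.push_back(0x00);               // R
        texelsBuffer.push_back(even ? 0xFF : 0x00); // G
        texelsBuffer.push_back(even ? 0x00 : 0xFF); // B
        texelsBuffer.push_back(0xFF);               // A
    }
D3D11_SUBRESOURCE_DATA texInitData;
ZeroMemory(&texInitData, sizeof(D3D11_SUBRESOURCE_DATA));
texInitData.pSysMem = texelsBuffer.data();
texInitData.SysMemPitch = 4 * N; // four one-byte channels per texel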