Direct3D: Creating a render target view from a texture - C++

I would like to create a render target view from a texture, for use in multiple target rendering. I am currently able to create a render target view for the back buffer - all of that works nicely. Further, I am able to create the texture. However, when I try to build the view from it, I get an error.
First, here's the code:
D3D11_TEXTURE2D_DESC textureDesc;
ZeroMemory(&textureDesc, sizeof(textureDesc));
textureDesc.ArraySize = 1;
textureDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
textureDesc.CPUAccessFlags = 0;
textureDesc.Format = DXGI_FORMAT_R32_FLOAT;
textureDesc.Height = m_renderTargetSize.Height;
textureDesc.Width = m_renderTargetSize.Width;
textureDesc.MipLevels = 1;
textureDesc.MiscFlags = 0;
textureDesc.SampleDesc.Count = 1;
textureDesc.SampleDesc.Quality = 0;
textureDesc.Usage = D3D11_USAGE_DEFAULT;
ComPtr<ID3D11Texture2D> texture;
DX::ThrowIfFailed(
    m_d3dDevice->CreateTexture2D(
        &textureDesc,
        nullptr,
        &texture
    )
);
D3D11_RENDER_TARGET_VIEW_DESC renderTargetViewDescription;
ZeroMemory(&renderTargetViewDescription, sizeof(renderTargetViewDescription));
renderTargetViewDescription.Format = textureDesc.Format;
DX::ThrowIfFailed(
    m_d3dDevice->CreateRenderTargetView(
        texture,
        &renderTargetViewDescription,
        &m_renderTargetView[1]
    )
);
I am getting the following error from the compiler on the line with the call to CreateRenderTargetView:
Error: no suitable conversion function from
"Microsoft::WRL::ComPtr" to "ID3D11Resource *"
exists.
According to MSDN, ID3D11Texture2D inherits from ID3D11Resource. Do I have to somehow upcast first?
I am working in DirectX 11, and compiling with vc110.

It seems that WRL's ComPtr doesn't support implicit conversion to T* (unlike ATL's CComPtr), so you need to use the Get method:
DX::ThrowIfFailed(
    m_d3dDevice->CreateRenderTargetView(
        texture.Get(),
        &renderTargetViewDescription,
        &m_renderTargetView[1]
    )
);
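As a quick reference (my summary, not part of the original answer), the three common ways to get at the pointer held by a WRL ComPtr are:
texture.Get();          // raw T* for input parameters, as above
texture.GetAddressOf(); // T** without releasing the currently held pointer
&texture;               // operator&: releases the held pointer, then returns T**,
                        // which is why it works for output parameters like CreateTexture2D's last argument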

Related

Texture readback showing all 0s for DirectX 11

I am new to working with textures and DirectX, and I am having issues reading back texture data from the GPU.
I am interested in reading back only a specific subset of my source texture. Also, I am trying to read it back at the least detailed miplevel (1x1 texture). Steps I follow:
1. Copy subregion of source texture into new texture
D3D11_TEXTURE2D_DESC desc = {0};
desc.Width = 1;
desc.Height = 1;
desc.MipLevels = 0;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.SampleDesc.Quality = 0;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
desc.CPUAccessFlags = 0;
desc.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS;
hr = pD3D11Device->CreateTexture2D(&desc, nullptr, &pSrcTexture);
D3D11_BOX srcRegion = {0};
srcRegion.left = 1000;
srcRegion.right = 1250;
srcRegion.top = 500;
srcRegion.bottom = 750;
srcRegion.front = 0;
srcRegion.back = 1;
pD3D11DeviceContext->CopySubresourceRegion(pSrcTexture, 0, 0, 0, 0, srcResource, 0, &srcRegion);
2. Create shader resource view and generate mipmaps for newly created texture
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {0};
srvDesc.Format = desc.Format;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = -1;
srvDesc.Texture2D.MostDetailedMip = 0;
ID3D11ShaderResourceView* pShaderResourceView = nullptr;
hr = pD3D11Device->CreateShaderResourceView(pSrcTexture, &srvDesc, &pShaderResourceView);
pD3D11DeviceContext->GenerateMips(pShaderResourceView);
3. Copy into staging texture to be read back by CPU
D3D11_TEXTURE2D_DESC desc2 = {0};
desc2.Width = 1;
desc2.Height = 1;
desc2.MipLevels = 1;
desc2.ArraySize = 1;
desc2.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc2.SampleDesc.Count = 1;
desc2.SampleDesc.Quality = 0;
desc2.Usage = D3D11_USAGE_STAGING;
desc2.BindFlags = 0;
desc2.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc2.MiscFlags = 0;
ID3D11Texture2D* pStagingTexture = nullptr;
hr = pD3D11Device->CreateTexture2D(&desc2, nullptr, &pStagingTexture);
pD3D11DeviceContext->CopyResource(pStagingTexture, pSrcTexture);
4. Map the subresource to access the underlying data, unmapping when finished
D3D11_MAPPED_SUBRESOURCE mappedResource = {0};
hr = pD3D11DeviceContext->Map(pStagingTexture, 0, D3D11_MAP_READ, 0, &mappedResource);
FLOAT* pTexels = (FLOAT*)mappedResource.pData;
std::cout << pTexels[0] << pTexels[1] << pTexels[2] << pTexels[3] << std::endl; // all zeros here
pD3D11DeviceContext->Unmap(pStagingTexture, 0);
Please note that none of my hr results are failing. Why is my texture data showing as all zeros?
Any guidance on how to resolve this?
CopyResource, CopySubresourceRegion, and GenerateMips do not return an HRESULT, but any of them may still have failed. A good way to determine that is to enable the Direct3D Debug Device and look for debug output. See this blog post and Microsoft Docs.
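For reference, a minimal sketch of turning on the debug layer at device creation; everything except the added flag is a placeholder for however you currently create your device:
UINT creationFlags = 0;
#if defined(_DEBUG)
creationFlags |= D3D11_CREATE_DEVICE_DEBUG; // routes validation messages to the debug output
#endif
hr = D3D11CreateDevice(
    nullptr,                    // default adapter
    D3D_DRIVER_TYPE_HARDWARE,
    nullptr,
    creationFlags,
    nullptr, 0,                 // default feature levels
    D3D11_SDK_VERSION,
    &pD3D11Device,
    nullptr,
    &pD3D11DeviceContext);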
I suspect the problem is that when you called GenerateMips, it didn't do anything because you provided a 1x1 texture as the starting point, so it doesn't have any mips. I also don't see how you set up srcResource, but you are trying to use CopySubresourceRegion to copy a 250x250 texture region to a 1x1 texture, which is going to fail as well.
You should take a look at DirectXTK and the DDSTextureLoader / WICTextureLoader modules in particular which implement auto-mip generation, and ScreenGrab which does read-back.
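Putting that together, here is a sketch of how the flow could look, based on the question's variables; the sizes and mip arithmetic are illustrative, not tested. Note also that R8G8B8A8_UNORM texels read back as four bytes per pixel, so the mapped data should be read through a BYTE* rather than a FLOAT*:
// 1. Intermediate texture sized to the copied region, with a full mip chain.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = 250;   // matches srcRegion (1250-1000 wide, 750-500 tall)
desc.Height = 250;
desc.MipLevels = 0; // 0 = let the runtime allocate the whole mip chain
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
desc.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS;
hr = pD3D11Device->CreateTexture2D(&desc, nullptr, &pSrcTexture);
// 2. Copy the region into mip 0, then fill in the chain (SRV created as in step 2).
pD3D11DeviceContext->CopySubresourceRegion(pSrcTexture, 0, 0, 0, 0, srcResource, 0, &srcRegion);
pD3D11DeviceContext->GenerateMips(pShaderResourceView);
// 3. Copy only the 1x1 tail mip into the 1x1 staging texture.
D3D11_TEXTURE2D_DESC realDesc;
pSrcTexture->GetDesc(&realDesc); // MipLevels now holds the actual chain length
pD3D11DeviceContext->CopySubresourceRegion(
    pStagingTexture, 0, 0, 0, 0,
    pSrcTexture, realDesc.MipLevels - 1, nullptr); // the last mip is 1x1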
One minor note: = {0}; was a way to zero-fill structs back in VS 2013 or earlier. With C++11 conformant compilers (VS 2015 or later), just use = {}; as that does the zero-fill.

Direct3D11: Sharing a texture between devices: black texture

I have two D3D11 devices, each with its own context but on the same adapter.
I am trying to share a texture between the two, but the texture I receive on the other side is always black.
HRESULT hr;
// Make a shared texture on device_A / context_A
D3D11_TEXTURE2D_DESC desc;
ZeroMemory(&desc, sizeof(desc));
desc.Width = 1024;
desc.Height = 1024;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.SampleDesc.Quality = 0;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.CPUAccessFlags = 0;
desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
ID3D11Texture2D* copy_tex;
hr = device_A->CreateTexture2D(&desc, NULL, &copy_tex);
// Test the texture by filling it with some color
D3D11_RENDER_TARGET_VIEW_DESC rtvd = {};
rtvd.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
rtvd.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
rtvd.Texture2D.MipSlice = 0;
ID3D11RenderTargetView* copy_tex_view = 0;
hr = device_A->CreateRenderTargetView(copy_tex, &rtvd, &copy_tex_view);
FLOAT clear_color[4] = {1, 0, 0, 1};
context_A->ClearRenderTargetView(copy_tex_view, clear_color);
// Now try to share it to device_B:
IDXGIResource* copy_tex_resource = 0;
hr = copy_tex->QueryInterface( __uuidof(IDXGIResource), (void**)&copy_tex_resource );
HANDLE copy_tex_shared_handle = 0;
hr = copy_tex_resource->GetSharedHandle(&copy_tex_shared_handle);
IDXGIResource* copy_tex_resource_mirror = 0;
hr = device_B->OpenSharedResource(copy_tex_shared_handle, __uuidof(ID3D11Texture2D), (void**)&copy_tex_resource_mirror);
ID3D11Texture2D* copy_tex_mirror = 0;
hr = copy_tex_resource_mirror->QueryInterface(__uuidof(ID3D11Texture2D), (void**)(&copy_tex_mirror));
However, the copy_tex_mirror texture is always black.
I don't get any HRESULT error codes, and can even use copy_tex_mirror on device_B / context_B normally, but I can't get the pixel data that I put into it on device_A.
Am I missing something?
Thanks in advance!
How do you know that the texture is always black? :-)
GPU operations are queued up by Direct3D, so when you open the shared resource on device_B, the ClearRenderTargetView() on device_A might not have been carried out yet. According to the MSDN documentation for ID3D11Device::OpenSharedResource:
If a shared texture is updated on one device ID3D11DeviceContext::Flush must be called on that device.
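In practice that means a one-line addition to the code above (a sketch against the question's variables):
context_A->ClearRenderTargetView(copy_tex_view, clear_color);
context_A->Flush(); // submit the queued clear before device_B opens the shared handle
// ...then GetSharedHandle / OpenSharedResource as before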
We had a lot of issues such as this when we implemented shared textures between devices at work. If you add D3D9 or OpenGL to the mix, the pitfalls multiply...

Depth buffer as texture - "D3D11 ERROR: The Format is invalid when creating a View"

I am trying to use the depth buffer as a texture for the second pass in my shader.
According to the official documentation ("Reading the Depth-Stencil Buffer as a Texture" paragraph), I've set D3D11_TEXTURE2D_DESC.Format to DXGI_FORMAT_R24G8_TYPELESS (it was DXGI_FORMAT_D24_UNORM_S8_UINT earlier, when I was not using the depth buffer as a texture):
D3D11_TEXTURE2D_DESC descDepth;
ZeroMemory(&descDepth, sizeof(descDepth));
descDepth.Width = width;
descDepth.Height = height;
descDepth.MipLevels = 1;
descDepth.ArraySize = 1;
descDepth.Format = DXGI_FORMAT_R24G8_TYPELESS; //normally it was DXGI_FORMAT_D24_UNORM_S8_UINT
descDepth.SampleDesc.Count = antiAliasing.getCount();
descDepth.SampleDesc.Quality = antiAliasing.getQuality();
descDepth.Usage = D3D11_USAGE_DEFAULT;
descDepth.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
descDepth.CPUAccessFlags = 0;
descDepth.MiscFlags = 0;
ID3D11Texture2D* depthStencil = NULL;
result = device->CreateTexture2D(&descDepth, NULL, &depthStencil);
That call succeeds. Then I tried to create a shader resource view for my buffer:
D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;
//setup the description of the shader resource view
shaderResourceViewDesc.Format = DXGI_FORMAT_R32_FLOAT;
shaderResourceViewDesc.ViewDimension = antiAliasing.isOn() ? D3D11_SRV_DIMENSION_TEXTURE2DMS : D3D11_SRV_DIMENSION_TEXTURE2D;
shaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
shaderResourceViewDesc.Texture2D.MipLevels = 1;
//create the shader resource view.
device->CreateShaderResourceView(depthStencil, &shaderResourceViewDesc, &depthStencilShaderResourceView);
Unfortunately, that generates an error:
D3D11 ERROR: ID3D11Device::CreateShaderResourceView: The Format (0x29,
R32_FLOAT) is invalid, when creating a View; it is not a fully
qualified Format castable from the Format of the Resource (0x2c,
R24G8_TYPELESS). [ STATE_CREATION ERROR #127:
CREATESHADERRESOURCEVIEW_INVALIDFORMAT]
D3D11 ERROR: ID3D11Device::CreateShaderResourceView: Returning E_INVALIDARG,
meaning invalid parameters were passed. [ STATE_CREATION ERROR #131:
CREATESHADERRESOURCEVIEW_INVALIDARG_RETURN]
I also tried DXGI_FORMAT_R24G8_TYPELESS, DXGI_FORMAT_D24_UNORM_S8_UINT, and DXGI_FORMAT_R8G8B8A8_UNORM as shaderResourceViewDesc.Format.
Where is the problem? I was following the documentation and I do not see it.
The formats DXGI_FORMAT_R24G8_TYPELESS and DXGI_FORMAT_R32_FLOAT are not 'cast compatible'. That typeless format is only compatible with DXGI_FORMAT_D24_UNORM_S8_UINT, DXGI_FORMAT_R24_UNORM_X8_TYPELESS, and DXGI_FORMAT_X24_TYPELESS_G8_UINT.
DXGI_FORMAT_R32_TYPELESS is compatible with both DXGI_FORMAT_D32_FLOAT and DXGI_FORMAT_R32_FLOAT.
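Applied to the question's code, the view formats would pair up like this (a sketch; descDSV stands in for whatever depth-stencil view description is used elsewhere):
// Resource: DXGI_FORMAT_R24G8_TYPELESS
descDSV.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;                    // depth-stencil view
shaderResourceViewDesc.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS; // SRV: 24 depth bits, stencil ignored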

Unable to read depth buffer from Compute shader

I am unable to read the depth buffer from a compute shader.
I am using this in my HLSL code.
Texture2D<float4> gDepthTextures : register(t3);
// tried this.
//Texture2D<float> gDepthTextures : register(t3);
// and this.
//Texture2D<uint> gDepthTextures : register(t3);
// and this.
//Texture2D<uint4> gDepthTextures : register(t3);
And doing this to read the buffer.
outputTexture[dispatchThreadId.xy]=gDepthTextures.Load(int3(dispatchThreadId.xy,0));
And I am detaching the depth buffer from the render target.
ID3D11RenderTargetView *nullView[3]={NULL,NULL,NULL};
g_pImmediateContext->OMSetRenderTargets( 3, nullView, NULL );
Still, I am getting this error in the output:
D3D11 ERROR: ID3D11DeviceContext::Dispatch: The Shader Resource View dimension declared in the shader code (TEXTURE2D) does not match the view type bound to slot 3 of the Compute Shader unit (BUFFER). This mismatch is invalid if the shader actually uses the view (e.g. it is not skipped due to shader code branching). [ EXECUTION ERROR #354: DEVICE_DRAW_VIEW_DIMENSION_MISMATCH]
This is how I am creating the shader resource view.
// Create depth stencil texture
D3D11_TEXTURE2D_DESC descDepth;
ZeroMemory( &descDepth, sizeof(descDepth) );
descDepth.Width = width;
descDepth.Height = height;
descDepth.MipLevels = 1;
descDepth.ArraySize = 1;
descDepth.Format = DXGI_FORMAT_R32_TYPELESS;
descDepth.SampleDesc.Count = 1;
descDepth.SampleDesc.Quality = 0;
descDepth.Usage = D3D11_USAGE_DEFAULT;
descDepth.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
descDepth.CPUAccessFlags = 0;
descDepth.MiscFlags = 0;
hr = g_pd3dDevice->CreateTexture2D( &descDepth, NULL, &g_pDepthStencil );
if( FAILED( hr ) )
return hr;
// Create the depth stencil view
D3D11_DEPTH_STENCIL_VIEW_DESC descDSV;
ZeroMemory( &descDSV, sizeof(descDSV) );
descDSV.Format = DXGI_FORMAT_D32_FLOAT;
descDSV.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
descDSV.Texture2D.MipSlice = 0;
hr = g_pd3dDevice->CreateDepthStencilView( g_pDepthStencil, &descDSV, &g_pDepthStencilView );
if( FAILED( hr ) )
return hr;
// Create depth shader resource view.
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
ZeroMemory(&srvDesc,sizeof(D3D11_SHADER_RESOURCE_VIEW_DESC));
srvDesc.Format=DXGI_FORMAT_R32_UINT;
srvDesc.ViewDimension=D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MostDetailedMip=0;
srvDesc.Texture2D.MipLevels=1;
hr=g_pd3dDevice->CreateShaderResourceView(g_pDepthStencil,&srvDesc,&g_pDepthSRV);
if(FAILED(hr))
return hr;
I have tried all the formats mentioned here in combination with the hlsl texture formats float, float4, uint, uint4 with no success. Any idea?
Replace DXGI_FORMAT_R32_UINT with DXGI_FORMAT_R32_FLOAT for your shader resource view; since you use R32_TYPELESS and write depth through a D32_FLOAT view, you have a floating-point buffer.
Texture2D<float> gDepthTextures will then be the declaration you need to load or sample the depth later.
Also, it looks like your texture is not bound properly to your compute shader (the runtime tells you a buffer is bound in that slot).
Make sure you have:
immediateContext->CSSetShaderResources(3, 1, &g_pDepthSRV);
called before your dispatch (note the API takes a pointer to your SRV pointer).
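A sketch of the full ordering around the dispatch (computeShader and the group counts are placeholders for your own setup):
g_pImmediateContext->OMSetRenderTargets(3, nullView, NULL);    // release the depth buffer first
g_pImmediateContext->CSSetShaderResources(3, 1, &g_pDepthSRV); // then bind it as an SRV
g_pImmediateContext->CSSetShader(computeShader, NULL, 0);
g_pImmediateContext->Dispatch(groupsX, groupsY, 1);
ID3D11ShaderResourceView* nullSRV[1] = {NULL};
g_pImmediateContext->CSSetShaderResources(3, 1, nullSRV);      // unbind so the DSV can be re-bound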
As a side note, to debug these types of issues you can also call CSGetShaderResources (and its equivalents) to check what's bound to your pipeline before your call.
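For example (the getter AddRefs whatever it returns, so release it afterwards):
ID3D11ShaderResourceView* bound = NULL;
g_pImmediateContext->CSGetShaderResources(3, 1, &bound);
// inspect 'bound' in the debugger: NULL or a buffer view here confirms the mismatch
if (bound) bound->Release();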

How to use UpdateSubresource to update the texture in Direct3d

My CreateDeviceResources method contains the following code (the method is called once, at initialization).
What do I need to do to create a method that changes the texture?
void SetTexture(...textureinput...);
Do I need to run the code below every time the texture needs to change, or can I somehow just change some data in memory?
I have found ID3D11DeviceContext::UpdateSubresource, which looks like what I want, but I haven't been able to find a sample of how to use it.
auto textureData = reader->ReadData("SIn.Win8\\texturedata.bin");
D3D11_SUBRESOURCE_DATA textureSubresourceData = {0};
textureSubresourceData.pSysMem = textureData->Data;
// Specify the size of a row in bytes, known a priori about the texture data.
textureSubresourceData.SysMemPitch = 1024;
// As this is not a texture array or 3D texture, this parameter is ignored.
textureSubresourceData.SysMemSlicePitch = 0;
// Create a texture description from information known a priori about the data.
// Generalized texture loading code can be found in the Resource Loading sample.
D3D11_TEXTURE2D_DESC textureDesc = {0};
textureDesc.Width = 256;
textureDesc.Height = 256;
textureDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
textureDesc.Usage = D3D11_USAGE_DEFAULT;
textureDesc.CPUAccessFlags = 0;
textureDesc.MiscFlags = 0;
// Most textures contain more than one MIP level. For simplicity, this sample uses only one.
textureDesc.MipLevels = 1;
// As this will not be a texture array, this parameter is ignored.
textureDesc.ArraySize = 1;
// Don't use multi-sampling.
textureDesc.SampleDesc.Count = 1;
textureDesc.SampleDesc.Quality = 0;
// Allow the texture to be bound as a shader resource.
textureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
ComPtr<ID3D11Texture2D> texture;
DX::ThrowIfFailed(
    m_d3dDevice->CreateTexture2D(
        &textureDesc,
        &textureSubresourceData,
        &texture
    )
);
// Once the texture is created, we must create a shader resource view of it
// so that shaders may use it. In general, the view description will match
// the texture description.
D3D11_SHADER_RESOURCE_VIEW_DESC textureViewDesc;
ZeroMemory(&textureViewDesc, sizeof(textureViewDesc));
textureViewDesc.Format = textureDesc.Format;
textureViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
textureViewDesc.Texture2D.MipLevels = textureDesc.MipLevels;
textureViewDesc.Texture2D.MostDetailedMip = 0;
ComPtr<ID3D11ShaderResourceView> textureView;
DX::ThrowIfFailed(
    m_d3dDevice->CreateShaderResourceView(
        texture.Get(),
        &textureViewDesc,
        &textureView
    )
);
// Once the texture view is created, create a sampler. This defines how the color
// for a particular texture coordinate is determined using the relevant texture data.
D3D11_SAMPLER_DESC samplerDesc;
ZeroMemory(&samplerDesc, sizeof(samplerDesc));
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
// The sampler does not use anisotropic filtering, so this parameter is ignored.
samplerDesc.MaxAnisotropy = 0;
// Specify how texture coordinates outside of the range 0..1 are resolved.
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
// Use no special MIP clamping or bias.
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;
// Don't use a comparison function.
samplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
// Border address mode is not used, so this parameter is ignored.
samplerDesc.BorderColor[0] = 0.0f;
samplerDesc.BorderColor[1] = 0.0f;
samplerDesc.BorderColor[2] = 0.0f;
samplerDesc.BorderColor[3] = 0.0f;
ComPtr<ID3D11SamplerState> sampler;
DX::ThrowIfFailed(
    m_d3dDevice->CreateSamplerState(
        &samplerDesc,
        &sampler
    )
);
If you want to update the same texture at runtime, you need to use Map.
The map type needs to be D3D11_MAP_WRITE_DISCARD.
Also, your texture needs to be created with D3D11_USAGE_DYNAMIC instead of D3D11_USAGE_DEFAULT, and its CPU access flags need to be set to D3D11_CPU_ACCESS_WRITE.
Map gives you access to a D3D11_MAPPED_SUBRESOURCE, and you can write the new data through its pData pointer.
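A minimal sketch of that path, assuming the texture was recreated as described, that m_d3dContext is your device context, and that newData holds 256 rows of rowBytes bytes each (all three names are assumptions, not from the original sample):
D3D11_MAPPED_SUBRESOURCE mapped;
DX::ThrowIfFailed(
    m_d3dContext->Map(texture.Get(), 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)
);
BYTE* dst = static_cast<BYTE*>(mapped.pData);
for (UINT row = 0; row < 256; ++row) // 256 = texture height in the sample above
{
    // RowPitch may be larger than the tight row size, so copy row by row
    memcpy(dst + row * mapped.RowPitch, newData + row * rowBytes, rowBytes);
}
m_d3dContext->Unmap(texture.Get(), 0);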
Depending on the case it can be nicer to just recreate the texture; it's a case-by-case decision (dynamic is nice if the texture changes often).
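For completeness, since the question asks specifically about UpdateSubresource: with the texture left at D3D11_USAGE_DEFAULT, a sketch would look like this (newTexelData is a placeholder for the new pixels, m_d3dContext for your device context):
m_d3dContext->UpdateSubresource(
    texture.Get(),  // destination resource
    0,              // subresource index (mip 0)
    nullptr,        // NULL = update the whole texture; pass a D3D11_BOX for a subregion
    newTexelData,   // source pixels
    1024,           // row pitch in bytes, matching SysMemPitch above
    0);             // depth pitch, ignored for 2D textures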