Unable to read depth buffer from Compute shader - hlsl

I am unable to read the depth buffer from a compute shader.
I am using this in my HLSL code:
Texture2D<float4> gDepthTextures : register(t3);
// tried this.
//Texture2D<float> gDepthTextures : register(t3);
// and this.
//Texture2D<uint> gDepthTextures : register(t3);
// and this.
//Texture2D<uint4> gDepthTextures : register(t3);
And I am doing this to read the buffer:
outputTexture[dispatchThreadId.xy]=gDepthTextures.Load(int3(dispatchThreadId.xy,0));
And I am detaching the depth buffer from the render targets:
ID3D11RenderTargetView *nullView[3]={NULL,NULL,NULL};
g_pImmediateContext->OMSetRenderTargets( 3, nullView, NULL );
Still, I am getting this error in the debug output:
*D3D11 ERROR: ID3D11DeviceContext::Dispatch: The Shader Resource View dimension declared in the shader code (TEXTURE2D) does not match the view type bound to slot 3 of the Compute Shader unit (BUFFER). This mismatch is invalid if the shader actually uses the view (e.g. it is not skipped due to shader code branching). [ EXECUTION ERROR #354: DEVICE_DRAW_VIEW_DIMENSION_MISMATCH]*
This is how I am creating the shader resource view:
// Create depth stencil texture
D3D11_TEXTURE2D_DESC descDepth;
ZeroMemory( &descDepth, sizeof(descDepth) );
descDepth.Width = width;
descDepth.Height = height;
descDepth.MipLevels = 1;
descDepth.ArraySize = 1;
descDepth.Format = DXGI_FORMAT_R32_TYPELESS;
descDepth.SampleDesc.Count = 1;
descDepth.SampleDesc.Quality = 0;
descDepth.Usage = D3D11_USAGE_DEFAULT;
descDepth.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
descDepth.CPUAccessFlags = 0;
descDepth.MiscFlags = 0;
hr = g_pd3dDevice->CreateTexture2D( &descDepth, NULL, &g_pDepthStencil );
if( FAILED( hr ) )
return hr;
// Create the depth stencil view
D3D11_DEPTH_STENCIL_VIEW_DESC descDSV;
ZeroMemory( &descDSV, sizeof(descDSV) );
descDSV.Format = DXGI_FORMAT_D32_FLOAT;
descDSV.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
descDSV.Texture2D.MipSlice = 0;
hr = g_pd3dDevice->CreateDepthStencilView( g_pDepthStencil, &descDSV, &g_pDepthStencilView );
if( FAILED( hr ) )
return hr;
// Create depth shader resource view.
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
ZeroMemory(&srvDesc,sizeof(D3D11_SHADER_RESOURCE_VIEW_DESC));
srvDesc.Format=DXGI_FORMAT_R32_UINT;
srvDesc.ViewDimension=D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MostDetailedMip=0;
srvDesc.Texture2D.MipLevels=1;
hr=g_pd3dDevice->CreateShaderResourceView(g_pDepthStencil,&srvDesc,&g_pDepthSRV);
if(FAILED(hr))
return hr;
I have tried all the formats mentioned here in combination with the HLSL texture formats float, float4, uint, and uint4, with no success. Any idea?

Replace DXGI_FORMAT_R32_UINT with DXGI_FORMAT_R32_FLOAT for your shader resource view: since the resource is R32_TYPELESS and the depth view is D32_FLOAT, the underlying data is floating point.
Texture2D<float> gDepthTextures (one of the variants you already tried) is then the declaration to use when you load or sample the depth later.
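For instance, a minimal sketch of the corrected view format, reusing srvDesc from the question above:
// View the R32_TYPELESS depth resource as 32-bit floats in the shader.
srvDesc.Format = DXGI_FORMAT_R32_FLOAT; // was DXGI_FORMAT_R32_UINT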
Also, it looks like your texture is not bound properly to your compute shader (the runtime tells you a BUFFER is bound in that slot).
Make sure you have:
immediateContext->CSSetShaderResources(3, 1, &g_pDepthSRV);
called before your Dispatch.
As a side note, to debug these kinds of issues you can also call CSGetShaderResources (and the other equivalents) to check what is actually bound to your pipeline before the call.
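For instance, a sketch of that sanity check right before the dispatch; g_pImmediateContext and g_pDepthSRV are the question's names, while groupsX/groupsY are placeholders for your thread-group counts:
// Bind the depth SRV to slot t3, then ask the runtime what it actually sees there.
g_pImmediateContext->CSSetShaderResources(3, 1, &g_pDepthSRV);
ID3D11ShaderResourceView* bound = nullptr;
g_pImmediateContext->CSGetShaderResources(3, 1, &bound);
// If 'bound' is NULL or a buffer SRV at this point, that explains EXECUTION ERROR #354.
if (bound != g_pDepthSRV)
    OutputDebugStringA("Unexpected SRV bound to CS slot 3\n");
if (bound) bound->Release(); // CSGetShaderResources AddRefs the views it returns
g_pImmediateContext->Dispatch(groupsX, groupsY, 1);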

Related

DirectX - Nothing renders after enabling depth buffer

I was trying to implement a depth buffer in my renderer in DirectX 11.0, but I encountered a specific problem. I'm new to DirectX so it might be a stupid question, but I can't fix it by myself. I checked many tutorials about this topic and each shows how to do it more or less the same way.
I have two triangles in the scene. When I enable depth, everything disappears and I get only the blue screen (background color).
To enable the depth buffer I first created the "Depth Stencil Texture Description" and created the "Depth Stencil Buffer" with a "Depth Stencil View". Then I passed the DepthStencilView as the last parameter of OMSetRenderTargets. After that I created the "Depth Stencil State".
D3D11_TEXTURE2D_DESC depthStencilTextureDesc;
depthStencilTextureDesc.Width = width;
depthStencilTextureDesc.Height = height;
depthStencilTextureDesc.MipLevels = 1;
depthStencilTextureDesc.ArraySize = 1;
depthStencilTextureDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthStencilTextureDesc.SampleDesc.Count = 1;
depthStencilTextureDesc.SampleDesc.Quality = 0;
depthStencilTextureDesc.Usage = D3D11_USAGE_DEFAULT;
depthStencilTextureDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
depthStencilTextureDesc.CPUAccessFlags = 0;
depthStencilTextureDesc.MiscFlags = 0;
hr = Device->CreateTexture2D(&depthStencilTextureDesc, nullptr, DepthStencilBuffer.GetAddressOf());
if (FAILED(hr))
{
Logger::Error("Error creating depth stencil buffer!");
return false;
}
hr = Device->CreateDepthStencilView(DepthStencilBuffer.Get(), nullptr, DepthStencilView.GetAddressOf());
if (FAILED(hr))
{
Logger::Error("Error creating depth stencil view!");
return false;
}
Logger::Debug("Successfully created depth stencil buffer and view.");
DeviceContext->OMSetRenderTargets(1, RenderTargetView.GetAddressOf(), DepthStencilView.Get());
Logger::Debug("Binding render target output merge successfully.");
D3D11_DEPTH_STENCIL_DESC depthStencilDesc;
ZeroMemory(&depthStencilDesc, sizeof(D3D11_DEPTH_STENCIL_DESC));
depthStencilDesc.DepthEnable = true;
depthStencilDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
depthStencilDesc.DepthFunc = D3D11_COMPARISON_LESS_EQUAL;
hr = Device->CreateDepthStencilState(&depthStencilDesc, DepthStencilState.GetAddressOf());
if (FAILED(hr))
{
Logger::Error("Error creating depth stencil state!");
return false;
}
Then I set viewport depth with this code:
viewport.MinDepth = 0.0f;
viewport.MaxDepth = 1.0f;
Then I moved to my Render function and added clearing the depth stencil and setting the state like this:
...
DeviceContext->ClearDepthStencilView(DepthStencilView.Get(), D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);
...
DeviceContext->OMSetDepthStencilState(DepthStencilState.Get(), 0);
And... it doesn't work. If I change the last parameter of OMSetRenderTargets from DepthStencilView.Get() to nullptr, it works. So it seems like I did something wrong with the depth stencil, but I'm not sure what. I created a gist for this Renderer.cpp HERE. Please help me solve this, because I'm stuck on this and I don't know what to do.
When creating a Depth/Stencil View, make sure that the MSAA settings (SampleDesc Count and Quality) are the same for both the Render Target View and the Depth Stencil View.
The DSV may need additional information when being created for an MSAA target. Here is an example of how my DSV is created (note that I am not using the Stencil Buffer and instead chose to get more precision on my depth buffer):
//Describe our Depth/Stencil Buffer
D3D11_TEXTURE2D_DESC depthStencilDesc;
depthStencilDesc.Width = activeDisplayMode.Width;
depthStencilDesc.Height = activeDisplayMode.Height;
depthStencilDesc.MipLevels = 1;
depthStencilDesc.ArraySize = 1;
depthStencilDesc.Format = DXGI_FORMAT_R32_TYPELESS;
depthStencilDesc.SampleDesc.Count = sampleLevel;
depthStencilDesc.SampleDesc.Quality = qualityLevel;
depthStencilDesc.Usage = D3D11_USAGE_DEFAULT;
depthStencilDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
depthStencilDesc.CPUAccessFlags = 0;
depthStencilDesc.MiscFlags = 0;
if (MSAAEnabled == true)
{
//Need a DSVDesc to let it know to use MSAA
D3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDesc;
ZeroMemory(&depthStencilViewDesc, sizeof(D3D11_DEPTH_STENCIL_VIEW_DESC));
depthStencilViewDesc.Format = DXGI_FORMAT_D32_FLOAT;
depthStencilViewDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2DMS;
depthStencilViewDesc.Texture2D.MipSlice = 0;
dev->CreateTexture2D(&depthStencilDesc, NULL, &depthStencilBuffer);
dev->CreateDepthStencilView(depthStencilBuffer, &depthStencilViewDesc, &depthStencilView);
}
else
{
//Don't need a DSVDesc
dev->CreateTexture2D(&depthStencilDesc, NULL, &depthStencilBuffer);
dev->CreateDepthStencilView(depthStencilBuffer, NULL, &depthStencilView);
}
To summarize what I found with GaleRazorwind's help: I fixed my problem by setting the depthStencilTextureDesc multisampling values to the same values used for the swap chain buffer.
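A minimal sketch of that fix, assuming a DXGI_SWAP_CHAIN_DESC named swapChainDesc was used to create the swap chain (the name is illustrative):
// The depth buffer must use the same SampleDesc as the back buffer,
// otherwise the RTV and DSV cannot be bound together in OMSetRenderTargets.
depthStencilTextureDesc.SampleDesc.Count = swapChainDesc.SampleDesc.Count;
depthStencilTextureDesc.SampleDesc.Quality = swapChainDesc.SampleDesc.Quality;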

Convert IMFSample* to ID3D11ShaderResourceView*

I am new to DirectX and I am trying to do a simple application that reads a video and displays it on a quad.
I read the video using Windows Media Foundation (IMFSourceReader), that sends me a callback when a sample is decoded (IMFSample).
I want to convert this IMFSample* to a ID3D11ShaderResourceView* in order to use it as a texture to draw my quad, however the conversion fails.
Here is what I do (I removed irrelevant error checks):
HRESULT SourceReaderCB::OnReadSample(HRESULT hrStatus, DWORD dwStreamIndex, DWORD dwStreamFlags, LONGLONG llTimestamp, IMFSample *pSample)
{
...
DWORD NumBuffers = 0;
hr = pSample->GetBufferCount(&NumBuffers);
if (FAILED(hr) || NumBuffers < 1)
{
...
}
IMFMediaBuffer* SourceMediaPtr = nullptr;
hr = pSample->GetBufferByIndex(0, &SourceMediaPtr);
if (FAILED(hr))
{
...
}
ComPtr<IMFMediaBuffer> _pInputBuffer = SourceMediaPtr;
ComPtr<IMF2DBuffer2> _pInputBuffer2D2;
bool isVideoFrame = (_pInputBuffer.As(&_pInputBuffer2D2) == S_OK);
if (isVideoFrame)
{
IMFDXGIBuffer* pDXGIBuffer = NULL;
ID3D11Texture2D* pSurface = NULL;
hr = _pInputBuffer->QueryInterface(__uuidof(IMFDXGIBuffer), (LPVOID*)&pDXGIBuffer);
if (FAILED(hr))
{
SafeRelease(&SourceMediaPtr);
goto done;
}
hr = pDXGIBuffer->GetResource(__uuidof(ID3D11Texture2D), (LPVOID*)&pSurface);
if (FAILED(hr))
{
...
}
ID3D11ShaderResourceView* resourceView;
if (pSurface)
{
D3D11_TEXTURE2D_DESC textureDesc;
pSurface->GetDesc(&textureDesc);
D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;
shaderResourceViewDesc.Format = DXGI_FORMAT_R8_UNORM;
shaderResourceViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
shaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
shaderResourceViewDesc.Texture2D.MipLevels = 1;
hr = d3d11device->CreateShaderResourceView(pSurface, &shaderResourceViewDesc, &resourceView);
if (FAILED(hr))
{
... // CODE FAILS HERE
}
...
}
}
}
My first issue is that I set shaderResourceViewDesc.Format to DXGI_FORMAT_R8_UNORM, which will probably just give me a red-only image (I will have to investigate this later).
The second and blocking issue I am facing is that the conversion of the ID3D11Texture2D to an ID3D11ShaderResourceView fails with the following error message:
ID3D11Device::CreateShaderResourceView: A ShaderResourceView cannot be created of a Resource that did not specify the D3D11_BIND_SHADER_RESOURCE BindFlag. [ STATE_CREATION ERROR #129: CREATESHADERRESOURCEVIEW_INVALIDRESOURCE]
I understand that there is a flag missing at the creation of the texture that prevents me from doing what I want, but as the data buffer is created by WMF, I am not sure what I am supposed to do to fix this issue.
Thanks for your help
I see your code, and I can say that your approach is wrong - no offense. Firstly, the video decoder creates a plain texture - in your situation a Direct3D 11 texture - and it is a regular texture, not a shader resource, so it cannot be used in shader code. In my view, there are two ways to resolve your task:
1. Research "Walkthrough: Using MF to render video in a Direct3D app" - this link presents the approach from "Walkthrough: Using Microsoft Media Foundation for Windows Phone 8". From your code I see that you are writing a solution for Windows Store / UWP, and the Windows Phone code is workable there; it needs MediaEnginePlayer - a helper class that wraps the MF APIs.
2. Find Windows-classic-samples on GitHub and, within it, DX11VideoRenderer - this is the full code of a Media Foundation renderer for DirectX 11. It includes a very good example of using the DirectX 11 Video Processor, which blits the regular video texture from the decoder onto the rendering texture of the swap chain:
2.1. Get the rendering texture from the swap chain:
// Get Backbuffer
hr = m_pSwapChain1->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)&pDXGIBackBuffer);
if (FAILED(hr))
{
break;
}
2.2. Create the video processor's output view from the rendering texture:
//
// Create Output View of Output Surfaces.
//
D3D11_VIDEO_PROCESSOR_OUTPUT_VIEW_DESC OutputViewDesc;
ZeroMemory( &OutputViewDesc, sizeof( OutputViewDesc ) );
if (m_b3DVideo && m_bStereoEnabled)
{
OutputViewDesc.ViewDimension = D3D11_VPOV_DIMENSION_TEXTURE2DARRAY;
}
else
{
OutputViewDesc.ViewDimension = D3D11_VPOV_DIMENSION_TEXTURE2D;
}
OutputViewDesc.Texture2D.MipSlice = 0;
OutputViewDesc.Texture2DArray.MipSlice = 0;
OutputViewDesc.Texture2DArray.FirstArraySlice = 0;
if (m_b3DVideo && 0 != m_vp3DOutput)
{
OutputViewDesc.Texture2DArray.ArraySize = 2; // STEREO
}
QueryPerformanceCounter(&lpcStart);
hr = m_pDX11VideoDevice->CreateVideoProcessorOutputView(pDXGIBackBuffer, m_pVideoProcessorEnum, &OutputViewDesc, &pOutputView);
2.3. Create the video processor's input view from the regular decoder video texture:
D3D11_VIDEO_PROCESSOR_INPUT_VIEW_DESC InputLeftViewDesc;
ZeroMemory( &InputLeftViewDesc, sizeof( InputLeftViewDesc ) );
InputLeftViewDesc.FourCC = 0;
InputLeftViewDesc.ViewDimension = D3D11_VPIV_DIMENSION_TEXTURE2D;
InputLeftViewDesc.Texture2D.MipSlice = 0;
InputLeftViewDesc.Texture2D.ArraySlice = dwLeftViewIndex;
hr = m_pDX11VideoDevice->CreateVideoProcessorInputView(pLeftTexture2D, m_pVideoProcessorEnum, &InputLeftViewDesc, &pLeftInputView);
if (FAILED(hr))
{
break;
}
2.4. Blit the regular decoder video texture onto the rendering texture from the swap chain:
D3D11_VIDEO_PROCESSOR_STREAM StreamData;
ZeroMemory( &StreamData, sizeof( StreamData ) );
StreamData.Enable = TRUE;
StreamData.OutputIndex = 0;
StreamData.InputFrameOrField = 0;
StreamData.PastFrames = 0;
StreamData.FutureFrames = 0;
StreamData.ppPastSurfaces = NULL;
StreamData.ppFutureSurfaces = NULL;
StreamData.pInputSurface = pLeftInputView;
StreamData.ppPastSurfacesRight = NULL;
StreamData.ppFutureSurfacesRight = NULL;
if (m_b3DVideo && MFVideo3DSampleFormat_MultiView == m_vp3DOutput && pRightTexture2D)
{
StreamData.pInputSurfaceRight = pRightInputView;
}
hr = pVideoContext->VideoProcessorBlt(m_pVideoProcessor, pOutputView, 0, 1, &StreamData );
if (FAILED(hr))
{
break;
}
Yes, these are sections of complex code, and you will need to study the whole DX11VideoRenderer project to understand it - it will take a significant amount of time.
Regards,
Evgeny Pereguda
The debug output suggests that the texture is not compatible: it was created without the D3D11_BIND_SHADER_RESOURCE flag (specified in the BindFlags field of the D3D11_TEXTURE2D_DESC structure).
You are reading a texture that was already created by a Media Foundation primitive. In some cases you can alter its creation flags, but in the general case you need to create a compatible texture of your own, copy the data between the textures, and then call CreateShaderResourceView with your texture as the argument rather than the original one.
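A minimal sketch of that copy approach, reusing d3d11device, pSurface, and pDXGIBuffer from the question; the immediate context d3d11context and the other names are illustrative, and the R8_UNORM view only exposes the luma plane of a typical NV12 decoder texture:
// Describe a texture identical to the decoded frame, but shader-visible.
D3D11_TEXTURE2D_DESC desc;
pSurface->GetDesc(&desc);
desc.ArraySize = 1;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
desc.MiscFlags = 0;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.CPUAccessFlags = 0;
ID3D11Texture2D* pShaderTexture = nullptr;
HRESULT hr = d3d11device->CreateTexture2D(&desc, nullptr, &pShaderTexture);
if (SUCCEEDED(hr))
{
    // Decoder output is often a texture array; copy the slice this sample refers to.
    UINT subresource = 0;
    pDXGIBuffer->GetSubresourceIndex(&subresource);
    d3d11context->CopySubresourceRegion(pShaderTexture, 0, 0, 0, 0, pSurface, subresource, nullptr);
    // View the copy; for an NV12 texture this R8_UNORM view is the luma plane only.
    D3D11_SHADER_RESOURCE_VIEW_DESC srv = {};
    srv.Format = DXGI_FORMAT_R8_UNORM;
    srv.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
    srv.Texture2D.MipLevels = 1;
    ID3D11ShaderResourceView* pSRV = nullptr;
    hr = d3d11device->CreateShaderResourceView(pShaderTexture, &srv, &pSRV);
}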

directX 11 scale texture2D

I would like to scale a texture I created from AcquireNextFrame.
Here is my code:
if (gRealTexture == nullptr) {
D3D11_TEXTURE2D_DESC description;
texture2D->GetDesc(&description);
description.BindFlags = 0;
description.CPUAccessFlags = D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE;
description.Usage = D3D11_USAGE_STAGING;
description.MiscFlags = 0;
hr = gDevice->CreateTexture2D(&description, NULL, &gRealTexture);
if (FAILED(hr)) {
if (gRealTexture) {
gRealTexture->Release();
gRealTexture = nullptr;
}
return NULL;
}
}
gImmediateContext->CopyResource(gRealTexture, texture2D);
D3D11_MAPPED_SUBRESOURCE mapped;
hr = gImmediateContext->Map(gRealTexture, 0, D3D11_MAP_READ_WRITE, 0, &mapped);
if (FAILED(hr)) {
gRealTexture->Release();
gRealTexture = NULL;
return NULL;
}
unsigned char *source = static_cast<unsigned char *>(mapped.pData); //Here I get the pixel buffer data
The problem is that it is at Full HD resolution (1920x1080). I would like to decrease the resolution (to 1280x720, for example) because I need to send the data over the network, and I don't really need a Full HD image.
Is it possible to do it with DirectX easily before I get the pixel buffer ?
Thanks
Create a smaller-resolution render target texture and render texture2D onto it with a fullscreen quad (shrinking it to the new size); see the sketch after the resource list below. Make sure bilinear filtering is on.
Then copy and map the smaller texture; CopyResource needs the same dimensions.
Some resources:
DX11 Helper Library
Render To Texture
DX11 Post Processing blur
The blur setup is the same as what I'm talking about, only instead of using a blur shader you use a simple texture shader.
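A minimal sketch of that pass, reusing gDevice, gImmediateContext, and texture2D from the question; the fullscreen-quad shaders and the bilinear sampler are assumed to be your existing textured-quad pipeline:
// 1280x720 render target we will shrink the frame into.
D3D11_TEXTURE2D_DESC srcDesc;
texture2D->GetDesc(&srcDesc);
D3D11_TEXTURE2D_DESC smallDesc = srcDesc;
smallDesc.Width = 1280;
smallDesc.Height = 720;
smallDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
smallDesc.Usage = D3D11_USAGE_DEFAULT;
smallDesc.CPUAccessFlags = 0;
smallDesc.MiscFlags = 0;
ID3D11Texture2D* smallTex = nullptr;
ID3D11RenderTargetView* smallRTV = nullptr;
ID3D11ShaderResourceView* srcSRV = nullptr;
gDevice->CreateTexture2D(&smallDesc, nullptr, &smallTex);
gDevice->CreateRenderTargetView(smallTex, nullptr, &smallRTV);
// If the AcquireNextFrame texture lacks D3D11_BIND_SHADER_RESOURCE, first
// CopyResource it into an intermediate DEFAULT texture that has that flag.
gDevice->CreateShaderResourceView(texture2D, nullptr, &srcSRV);
// Bind the small target and a matching viewport, then draw a fullscreen quad
// sampling srcSRV with a bilinear sampler; the GPU performs the downscale.
D3D11_VIEWPORT vp = { 0.0f, 0.0f, 1280.0f, 720.0f, 0.0f, 1.0f };
gImmediateContext->OMSetRenderTargets(1, &smallRTV, nullptr);
gImmediateContext->RSSetViewports(1, &vp);
// ... set shaders, sampler, and the quad's vertex buffer, then Draw(...) ...
// Finally, CopyResource smallTex into a 1280x720 STAGING texture and Map it,
// exactly as in the original code but with the smaller dimensions.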

Depth buffer as texture - "D3D11 ERROR: The Format is invalid when creating a View"

I am trying to use the depth buffer as a texture for the second pass in my shader.
According to the official documentation ("Reading the Depth-Stencil Buffer as a Texture" paragraph), I've set D3D11_TEXTURE2D_DESC.Format to DXGI_FORMAT_R24G8_TYPELESS (it was DXGI_FORMAT_D24_UNORM_S8_UINT earlier, when I was not using the depth buffer as a texture):
D3D11_TEXTURE2D_DESC descDepth;
ZeroMemory(&descDepth, sizeof(descDepth));
descDepth.Width = width;
descDepth.Height = height;
descDepth.MipLevels = 1;
descDepth.ArraySize = 1;
descDepth.Format = DXGI_FORMAT_R24G8_TYPELESS; //normally it was DXGI_FORMAT_D24_UNORM_S8_UINT
descDepth.SampleDesc.Count = antiAliasing.getCount();
descDepth.SampleDesc.Quality = antiAliasing.getQuality();
descDepth.Usage = D3D11_USAGE_DEFAULT;
descDepth.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
descDepth.CPUAccessFlags = 0;
descDepth.MiscFlags = 0;
ID3D11Texture2D* depthStencil = NULL;
result = device->CreateTexture2D(&descDepth, NULL, &depthStencil);
The call succeeded. Then I tried to create a shader resource view for my buffer:
D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;
//setup the description of the shader resource view
shaderResourceViewDesc.Format = DXGI_FORMAT_R32_FLOAT;
shaderResourceViewDesc.ViewDimension = antiAliasing.isOn() ? D3D11_SRV_DIMENSION_TEXTURE2DMS : D3D11_SRV_DIMENSION_TEXTURE2D;
shaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
shaderResourceViewDesc.Texture2D.MipLevels = 1;
//create the shader resource view.
device->CreateShaderResourceView(depthStencil, &shaderResourceViewDesc, &depthStencilShaderResourceView);
Unfortunately, that generates an error:
D3D11 ERROR: ID3D11Device::CreateShaderResourceView: The Format (0x29, R32_FLOAT) is invalid, when creating a View; it is not a fully qualified Format castable from the Format of the Resource (0x2c, R24G8_TYPELESS). [ STATE_CREATION ERROR #127: CREATESHADERRESOURCEVIEW_INVALIDFORMAT]
D3D11 ERROR: ID3D11Device::CreateShaderResourceView: Returning E_INVALIDARG, meaning invalid parameters were passed. [ STATE_CREATION ERROR #131: CREATESHADERRESOURCEVIEW_INVALIDARG_RETURN]
I also tried DXGI_FORMAT_R24G8_TYPELESS, DXGI_FORMAT_D24_UNORM_S8_UINT, and DXGI_FORMAT_R8G8B8A8_UNORM as shaderResourceViewDesc.Format.
Where is the problem? I was following the documentation and I do not see it.
The formats DXGI_FORMAT_R24G8_TYPELESS and DXGI_FORMAT_R32_FLOAT are not 'cast compatible'. That typeless format is only compatible with DXGI_FORMAT_D24_UNORM_S8_UINT, DXGI_FORMAT_R24_UNORM_X8_TYPELESS, and DXGI_FORMAT_X24_TYPELESS_G8_UINT.
DXGI_FORMAT_R32_TYPELESS is compatible with both DXGI_FORMAT_D32_FLOAT and DXGI_FORMAT_R32_FLOAT.
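For example, a sketch of a view pair that works with the R24G8_TYPELESS resource above, reusing descDepth and shaderResourceViewDesc from the question; the DSV description is illustrative since the question does not show it:
// Resource: descDepth.Format = DXGI_FORMAT_R24G8_TYPELESS (as above).
// Depth-stencil view writes depth/stencil into the typeless resource:
D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
dsvDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
dsvDesc.ViewDimension = antiAliasing.isOn() ? D3D11_DSV_DIMENSION_TEXTURE2DMS : D3D11_DSV_DIMENSION_TEXTURE2D;
// Shader resource view reads the 24-bit depth (the X8 stencil bits are not accessible here):
shaderResourceViewDesc.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
// Alternatively, create the resource as DXGI_FORMAT_R32_TYPELESS and pair
// DXGI_FORMAT_D32_FLOAT (DSV) with DXGI_FORMAT_R32_FLOAT (SRV).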

How to use UpdateSubresource to update the texture in Direct3d

I have the following code in my CreateDeviceResources method (the method is called once, at initialization).
What do I need to do to create a method that changes the texture?
void SetTexture(...textureinput...);
Do I need to run the code below every time the texture needs to be changed, or can I somehow just change some data in memory?
I have found that I would like to use ID3D11DeviceContext::UpdateSubresource, but I haven't been able to find a sample of how to use it.
auto textureData = reader->ReadData("SIn.Win8\\texturedata.bin");
D3D11_SUBRESOURCE_DATA textureSubresourceData = {0};
textureSubresourceData.pSysMem = textureData->Data;
// Specify the size of a row in bytes, known a priori about the texture data.
textureSubresourceData.SysMemPitch = 1024;
// As this is not a texture array or 3D texture, this parameter is ignored.
textureSubresourceData.SysMemSlicePitch = 0;
// Create a texture description from information known a priori about the data.
// Generalized texture loading code can be found in the Resource Loading sample.
D3D11_TEXTURE2D_DESC textureDesc = {0};
textureDesc.Width = 256;
textureDesc.Height = 256;
textureDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
textureDesc.Usage = D3D11_USAGE_DEFAULT;
textureDesc.CPUAccessFlags = 0;
textureDesc.MiscFlags = 0;
// Most textures contain more than one MIP level. For simplicity, this sample uses only one.
textureDesc.MipLevels = 1;
// As this will not be a texture array, this parameter is ignored.
textureDesc.ArraySize = 1;
// Don't use multi-sampling.
textureDesc.SampleDesc.Count = 1;
textureDesc.SampleDesc.Quality = 0;
// Allow the texture to be bound as a shader resource.
textureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
ComPtr<ID3D11Texture2D> texture;
DX::ThrowIfFailed(
m_d3dDevice->CreateTexture2D(
&textureDesc,
&textureSubresourceData,
&texture
)
);
// Once the texture is created, we must create a shader resource view of it
// so that shaders may use it. In general, the view description will match
// the texture description.
D3D11_SHADER_RESOURCE_VIEW_DESC textureViewDesc;
ZeroMemory(&textureViewDesc, sizeof(textureViewDesc));
textureViewDesc.Format = textureDesc.Format;
textureViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
textureViewDesc.Texture2D.MipLevels = textureDesc.MipLevels;
textureViewDesc.Texture2D.MostDetailedMip = 0;
ComPtr<ID3D11ShaderResourceView> textureView;
DX::ThrowIfFailed(
m_d3dDevice->CreateShaderResourceView(
texture.Get(),
&textureViewDesc,
&textureView
)
);
// Once the texture view is created, create a sampler. This defines how the color
// for a particular texture coordinate is determined using the relevant texture data.
D3D11_SAMPLER_DESC samplerDesc;
ZeroMemory(&samplerDesc, sizeof(samplerDesc));
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
// The sampler does not use anisotropic filtering, so this parameter is ignored.
samplerDesc.MaxAnisotropy = 0;
// Specify how texture coordinates outside of the range 0..1 are resolved.
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
// Use no special MIP clamping or bias.
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;
// Don't use a comparison function.
samplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
// Border address mode is not used, so this parameter is ignored.
samplerDesc.BorderColor[0] = 0.0f;
samplerDesc.BorderColor[1] = 0.0f;
samplerDesc.BorderColor[2] = 0.0f;
samplerDesc.BorderColor[3] = 0.0f;
ComPtr<ID3D11SamplerState> sampler;
DX::ThrowIfFailed(
m_d3dDevice->CreateSamplerState(
&samplerDesc,
&sampler
)
);
If you want to update the same texture at runtime, you need to use Map.
The MapType needs to be D3D11_MAP_WRITE_DISCARD.
Also, your texture needs to be created with D3D11_USAGE_DYNAMIC instead of D3D11_USAGE_DEFAULT, and its CPU access flags need to be set to D3D11_CPU_ACCESS_WRITE.
Map gives you access to a D3D11_MAPPED_SUBRESOURCE, and you can write the new data through pData.
Depending on the situation it can be nicer to just recreate the texture; it's a case-by-case decision (dynamic is nice if the texture changes often).
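For instance, a minimal sketch of the Map path for the 256x256 R8G8B8A8 texture above, assuming it was created with the dynamic usage and write access just described; m_d3dContext (the immediate context) and newPixels are illustrative names:
// The texture must have Usage = D3D11_USAGE_DYNAMIC and
// CPUAccessFlags = D3D11_CPU_ACCESS_WRITE for Map(WRITE_DISCARD) to succeed.
D3D11_MAPPED_SUBRESOURCE mapped = {0};
DX::ThrowIfFailed(
    m_d3dContext->Map(texture.Get(), 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)
);
// The driver may pad rows, so copy row by row using mapped.RowPitch.
const unsigned char* src = newPixels;   // 256 * 256 * 4 bytes of new RGBA data
unsigned char* dst = static_cast<unsigned char*>(mapped.pData);
const UINT rowBytes = 256 * 4;
for (UINT y = 0; y < 256; ++y)
{
    memcpy(dst + y * mapped.RowPitch, src + y * rowBytes, rowBytes);
}
m_d3dContext->Unmap(texture.Get(), 0);
// With the original D3D11_USAGE_DEFAULT texture, UpdateSubresource is the alternative:
// m_d3dContext->UpdateSubresource(texture.Get(), 0, nullptr, newPixels, rowBytes, 0);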