How to use UpdateSubresource to update a texture in Direct3D - C++

My CreateDeviceResources method contains the following code (the method is called once, at initialization).
What do I need to do to create a method that changes the texture?
void SetTexture(...textureinput...);
Do I need to run the code below every time the texture needs to be changed, or can I somehow just change some data in memory?
I have found that I would like to use ID3D11DeviceContext::UpdateSubresource, but I haven't been able to find a sample showing how to use it.
auto textureData = reader->ReadData("SIn.Win8\\texturedata.bin");
D3D11_SUBRESOURCE_DATA textureSubresourceData = {0};
textureSubresourceData.pSysMem = textureData->Data;
// Specify the size of a row in bytes, known a priori about the texture data.
textureSubresourceData.SysMemPitch = 1024;
// As this is not a texture array or 3D texture, this parameter is ignored.
textureSubresourceData.SysMemSlicePitch = 0;
// Create a texture description from information known a priori about the data.
// Generalized texture loading code can be found in the Resource Loading sample.
D3D11_TEXTURE2D_DESC textureDesc = {0};
textureDesc.Width = 256;
textureDesc.Height = 256;
textureDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
textureDesc.Usage = D3D11_USAGE_DEFAULT;
textureDesc.CPUAccessFlags = 0;
textureDesc.MiscFlags = 0;
// Most textures contain more than one MIP level. For simplicity, this sample uses only one.
textureDesc.MipLevels = 1;
// As this will not be a texture array, this parameter is ignored.
textureDesc.ArraySize = 1;
// Don't use multi-sampling.
textureDesc.SampleDesc.Count = 1;
textureDesc.SampleDesc.Quality = 0;
// Allow the texture to be bound as a shader resource.
textureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
ComPtr<ID3D11Texture2D> texture;
DX::ThrowIfFailed(
m_d3dDevice->CreateTexture2D(
&textureDesc,
&textureSubresourceData,
&texture
)
);
// Once the texture is created, we must create a shader resource view of it
// so that shaders may use it. In general, the view description will match
// the texture description.
D3D11_SHADER_RESOURCE_VIEW_DESC textureViewDesc;
ZeroMemory(&textureViewDesc, sizeof(textureViewDesc));
textureViewDesc.Format = textureDesc.Format;
textureViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
textureViewDesc.Texture2D.MipLevels = textureDesc.MipLevels;
textureViewDesc.Texture2D.MostDetailedMip = 0;
ComPtr<ID3D11ShaderResourceView> textureView;
DX::ThrowIfFailed(
m_d3dDevice->CreateShaderResourceView(
texture.Get(),
&textureViewDesc,
&textureView
)
);
// Once the texture view is created, create a sampler. This defines how the color
// for a particular texture coordinate is determined using the relevant texture data.
D3D11_SAMPLER_DESC samplerDesc;
ZeroMemory(&samplerDesc, sizeof(samplerDesc));
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
// The sampler does not use anisotropic filtering, so this parameter is ignored.
samplerDesc.MaxAnisotropy = 0;
// Specify how texture coordinates outside of the range 0..1 are resolved.
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
// Use no special MIP clamping or bias.
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;
// Don't use a comparison function.
samplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
// Border address mode is not used, so this parameter is ignored.
samplerDesc.BorderColor[0] = 0.0f;
samplerDesc.BorderColor[1] = 0.0f;
samplerDesc.BorderColor[2] = 0.0f;
samplerDesc.BorderColor[3] = 0.0f;
ComPtr<ID3D11SamplerState> sampler;
DX::ThrowIfFailed(
m_d3dDevice->CreateSamplerState(
&samplerDesc,
&sampler
)
);

If you want to update the same texture at runtime, you need to use Map.
The map type needs to be D3D11_MAP_WRITE_DISCARD.
Your texture also needs to be created with D3D11_USAGE_DYNAMIC instead of D3D11_USAGE_DEFAULT, and the CPU access flags need to include D3D11_CPU_ACCESS_WRITE.
Map gives you access to a D3D11_MAPPED_SUBRESOURCE, and you write the new data through its pData pointer.
Depending on the case it can be simpler to just recreate the texture; it's a case-by-case decision (dynamic usage is nice if the texture changes often).
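For reference, a minimal sketch of both update paths, assuming the 256x256 DXGI_FORMAT_R8G8B8A8_UNORM texture created above, an immediate context (called m_d3dContext here) and the new pixel data in newPixels (a hypothetical const void* holding 256*256*4 bytes):
// Option 1: ID3D11DeviceContext::UpdateSubresource works on the
// D3D11_USAGE_DEFAULT texture created above; no Map is needed.
m_d3dContext->UpdateSubresource(
    texture.Get(),   // destination resource
    0,               // subresource index (mip 0, array slice 0)
    nullptr,         // nullptr = update the whole subresource
    newPixels,       // source data in system memory
    256 * 4,         // source row pitch in bytes (width * bytes per pixel)
    0);              // source depth pitch, ignored for a 2D texture
// Option 2: Map with D3D11_MAP_WRITE_DISCARD requires the texture to be
// created with D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE instead.
D3D11_MAPPED_SUBRESOURCE mapped = {};
DX::ThrowIfFailed(
    m_d3dContext->Map(texture.Get(), 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped));
BYTE* dst = static_cast<BYTE*>(mapped.pData);
const BYTE* src = static_cast<const BYTE*>(newPixels);
for (UINT row = 0; row < 256; ++row)
{
    // Copy row by row: the driver's RowPitch may be larger than width * 4.
    memcpy(dst + row * mapped.RowPitch, src + row * 256 * 4, 256 * 4);
}
m_d3dContext->Unmap(texture.Get(), 0);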

Related

Why does DirectXToolkit ruin my depth testing

I'm sure I'm just missing some simple step that I've been too blind to notice so far, but I cannot seem to get depth testing to work at all. This is with DirectX 11.
The code that should set it all up:
DXGI_SWAP_CHAIN_DESC swapDesc = { };
swapDesc.BufferDesc.Width = 0;
swapDesc.BufferDesc.Height = 0;
swapDesc.BufferDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
swapDesc.BufferDesc.RefreshRate.Numerator = 0;
swapDesc.BufferDesc.RefreshRate.Denominator = 1;
swapDesc.BufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;
swapDesc.BufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
swapDesc.SampleDesc.Count = 1;
swapDesc.SampleDesc.Quality = 0;
swapDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapDesc.BufferCount = 1;
swapDesc.OutputWindow = hwnd;
swapDesc.Windowed = TRUE;
swapDesc.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
swapDesc.Flags = 0;
UINT flg = 0;
#if MAGE_DEBUG
flg |= D3D11_CREATE_DEVICE_DEBUG;
#endif
GFX_THROW_INFO(D3D11CreateDeviceAndSwapChain(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
flg,
nullptr, 0,
D3D11_SDK_VERSION, &swapDesc, &mSwap, &mDevice, nullptr,
&mContext));
COMptr<ID3D11Resource> backBuffer;
GFX_THROW_INFO(mSwap->GetBuffer(0, __uuidof(ID3D11Resource), &backBuffer));
GFX_THROW_INFO(mDevice->CreateRenderTargetView(backBuffer.Get(), nullptr, &mTarget));
LOG_INFO("Setting depth stencil dimensions ({}, {})", width, height);
COMptr<ID3D11Texture2D> depthStencil;
D3D11_TEXTURE2D_DESC texDesc = { };
texDesc.Width = width;
texDesc.Height = height;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_D32_FLOAT;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D11_USAGE_DEFAULT;
texDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
GFX_THROW_INFO(mDevice->CreateTexture2D(&texDesc, nullptr, &depthStencil));
D3D11_DEPTH_STENCIL_DESC depth = { };
depth.DepthEnable = TRUE;
depth.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
depth.DepthFunc = D3D11_COMPARISON_LESS;
COMptr<ID3D11DepthStencilState> depthState;
GFX_THROW_INFO(mDevice->CreateDepthStencilState(&depth, &depthState));
D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = { };
dsvDesc.Format = DXGI_FORMAT_D32_FLOAT;
dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
dsvDesc.Texture2D.MipSlice = 0;
GFX_THROW_INFO(mDevice->CreateDepthStencilView(depthStencil.Get(), &dsvDesc, &mDepthStencilView));
mContext->OMSetDepthStencilState(depthState.Get(), 1);
mContext->OMSetRenderTargets(1, mTarget.GetAddressOf(), mDepthStencilView.Get());
LOG_INFO("Setting viewport dimensions ({}, {})", width, height);
D3D11_VIEWPORT vp;
vp.Width = (float) width;
vp.Height = (float) height;
vp.MinDepth = 0.0f;
vp.MaxDepth = 1.0f;
vp.TopLeftX = 0.0f;
vp.TopLeftY = 0.0f;
mContext->RSSetViewports(1, &vp);
And of course, before every frame I call the following:
mContext->ClearRenderTargetView(mTarget.Get(), color);
mContext->ClearDepthStencilView(mDepthStencilView.Get(), D3D11_CLEAR_DEPTH, 1.0f, 0);
But unfortunately, the result ends up being this (note that the Crysis nanosuit model is behind the goblin head). I believe this could also be why the goblin model renders incorrectly even when alone, but I haven't figured that one out yet.
Example 1
And with just the goblin, looking from an angle
Example 2
If anyone can help me figure out why it's not working, I'd greatly appreciate it!
EDIT
After some more frustrating testing I discovered the depth testing was broken because of some test text rendering I was doing with DirectX Tool Kit's SpriteBatch and SpriteFont classes. Has anyone come across this issue before? I don't really want/need the toolkit for anything other than text rendering and perhaps loading DDS textures, so I'm hoping that if I want to use those classes I don't need to drastically change my existing code.
DirectX Tool Kit does not 'capture/restore' state like the legacy D3DX9/D3DX10 sprite did. This was inefficient and relied on some hacky back-door functionality to capture the 'state block' for Direct3D 10+. In most cases, you are already going to set the bulk of the commonly used state to set up for the next draw call anyhow.
Instead, I have fully documented all state impacted by each class. You are expected to change all required state after the DirectX Tool Kit object renders. For example, SpriteBatch docs state:
SpriteBatch makes use of the following states:
BlendState
Constant buffer (Vertex Shader stage, slot 0)
DepthStencilState
Index buffer
Input layout
Pixel shader
Primitive topology
RasterizerState
SamplerState (Pixel Shader stage, slot 0)
Shader resources (Pixel Shader stage, slot 0)
Vertex buffer (slot 0)
Vertex shader
So in short, you just need to set the DepthStencilState to what you want to use after you call SpriteBatch::End.
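For illustration, a minimal sketch of that ordering, assuming text is drawn with DirectX Tool Kit's SpriteBatch and SpriteFont (spriteBatch and spriteFont are hypothetical objects; depthState and mContext come from the setup code in the question):
// Draw the HUD text; SpriteBatch::End submits the sprites and leaves its own
// depth-stencil state (depth disabled) bound on the context.
spriteBatch->Begin();
spriteFont->DrawString(spriteBatch.get(), L"debug text", DirectX::XMFLOAT2(10.f, 10.f));
spriteBatch->End();
// Restore the state you rely on before drawing 3D geometry again.
mContext->OMSetDepthStencilState(depthState.Get(), 1);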
As a general habit for state management, you should set all state you rely on every frame. While in Direct3D 11 the 'last state' at the time you call Present is still there at the start of the next frame, this isn't true of DirectX 12. As such, you should make a habit of at the start of a new frame setting everything like current render target, viewport, render states you expect to be present for your whole scene, etc.
For example, most "HUD" rendering is done last, so the state changes by SpriteBatch would normally be reset on the next frame's start--again, assuming you set up the required state at the start of the frame rather than assuming it remains unchanged over many frames.
TL;DR: Move this code to just after you clear the render target each frame:
mContext->OMSetDepthStencilState(depthState.Get(), 1);
mContext->OMSetRenderTargets(1, mTarget.GetAddressOf(), mDepthStencilView.Get());
D3D11_VIEWPORT vp = { 0.f, 0.f, float(width), float(height), D3D11_MIN_DEPTH, D3D11_MAX_DEPTH };
mContext->RSSetViewports(1, &vp);

DirectX - Nothing renders after enabling depth buffer

I was trying to implement a depth buffer in my DirectX 11.0 renderer, but I ran into a specific problem. I'm new to DirectX so it might be a stupid question, but I can't fix it by myself. I checked many tutorials on this topic and they all show how to do it in more or less the same way.
I have two triangles in the scene. When I enable depth, everything disappears and I only get a blue screen (the background color).
To enable the depth buffer I first created the depth stencil texture description and created the depth stencil buffer with a depth stencil view. Then I passed the DepthStencilView as the last parameter of OMSetRenderTargets. After that I created the depth stencil state.
D3D11_TEXTURE2D_DESC depthStencilTextureDesc;
depthStencilTextureDesc.Width = width;
depthStencilTextureDesc.Height = height;
depthStencilTextureDesc.MipLevels = 1;
depthStencilTextureDesc.ArraySize = 1;
depthStencilTextureDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthStencilTextureDesc.SampleDesc.Count = 1;
depthStencilTextureDesc.SampleDesc.Quality = 0;
depthStencilTextureDesc.Usage = D3D11_USAGE_DEFAULT;
depthStencilTextureDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
depthStencilTextureDesc.CPUAccessFlags = 0;
depthStencilTextureDesc.MiscFlags = 0;
hr = Device->CreateTexture2D(&depthStencilTextureDesc, nullptr, DepthStencilBuffer.GetAddressOf());
if (FAILED(hr))
{
Logger::Error("Error creating depth stencil buffer!");
return false;
}
hr = Device->CreateDepthStencilView(DepthStencilBuffer.Get(), nullptr, DepthStencilView.GetAddressOf());
if (FAILED(hr))
{
Logger::Error("Error creating depth stencil view!");
return false;
}
Logger::Debug("Successfully created depth stencil buffer and view.");
DeviceContext->OMSetRenderTargets(1, RenderTargetView.GetAddressOf(), DepthStencilView.Get());
Logger::Debug("Binding render target output merge successfully.");
D3D11_DEPTH_STENCIL_DESC depthStencilDesc;
ZeroMemory(&depthStencilDesc, sizeof(D3D11_DEPTH_STENCIL_DESC));
depthStencilDesc.DepthEnable = true;
depthStencilDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
depthStencilDesc.DepthFunc = D3D11_COMPARISON_LESS_EQUAL;
hr = Device->CreateDepthStencilState(&depthStencilDesc, DepthStencilState.GetAddressOf());
if (FAILED(hr))
{
Logger::Error("Error creating depth stencil state!");
return false;
}
Then I set viewport depth with this code:
viewport.MinDepth = 0.0f;
viewport.MaxDepth = 1.0f;
Then I moved to my Render function and added clearing the depth stencil view and setting the depth stencil state like this:
...
DeviceContext->ClearDepthStencilView(DepthStencilView.Get(), D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);
...
DeviceContext->OMSetDepthStencilState(DepthStencilState.Get(), 0);
And... it doesn't work. If I change the last parameter of OMSetRenderTargets from DepthStencilView.Get() to nullptr, it works. So it seems like I did something wrong with the depth stencil, but I'm not sure what. I created a gist for this Renderer.cpp HERE. Please help me solve this, because I'm stuck and don't know what to do.
When creating a Depth/Stencil View, make sure that the MSAA SampleDesc Count and Quality are the same for both the Render Target View and the Depth Stencil View.
The DSV may need additional information when being created for an MSAA target. Here is an example of how my DSV is created (note that I am not using the Stencil Buffer and instead chose to get more precision on my depth buffer):
//Describe our Depth/Stencil Buffer
D3D11_TEXTURE2D_DESC depthStencilDesc;
depthStencilDesc.Width = activeDisplayMode.Width;
depthStencilDesc.Height = activeDisplayMode.Height;
depthStencilDesc.MipLevels = 1;
depthStencilDesc.ArraySize = 1;
depthStencilDesc.Format = DXGI_FORMAT_R32_TYPELESS;
depthStencilDesc.SampleDesc.Count = sampleLevel;
depthStencilDesc.SampleDesc.Quality = qualityLevel;
depthStencilDesc.Usage = D3D11_USAGE_DEFAULT;
depthStencilDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
depthStencilDesc.CPUAccessFlags = 0;
depthStencilDesc.MiscFlags = 0;
if (MSAAEnabled == true)
{
//Need a DSVDesc to let it know to use MSAA
D3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDesc;
ZeroMemory(&depthStencilViewDesc, sizeof(D3D11_DEPTH_STENCIL_VIEW_DESC));
depthStencilViewDesc.Format = DXGI_FORMAT_D32_FLOAT;
depthStencilViewDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2DMS;
depthStencilViewDesc.Texture2D.MipSlice = 0;
dev->CreateTexture2D(&depthStencilDesc, NULL, &depthStencilBuffer);
dev->CreateDepthStencilView(depthStencilBuffer, &depthStencilViewDesc, &depthStencilView);
}
else
{
//Don't need a DSVDesc
dev->CreateTexture2D(&depthStencilDesc, NULL, &depthStencilBuffer);
dev->CreateDepthStencilView(depthStencilBuffer, NULL, &depthStencilView);
}
I will summarize what I found with GaleRazorwind's help: I fixed my problem by setting the depthStencilTextureDesc multisampling values to the same values used for the swap chain's back buffer.
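A minimal sketch of that fix, assuming the DXGI_SWAP_CHAIN_DESC used to create the swap chain is still available (called swapChainDesc here):
// The depth texture must use the exact same sample count/quality as the
// back buffer, otherwise the RTV and DSV cannot be bound together.
depthStencilTextureDesc.SampleDesc.Count = swapChainDesc.SampleDesc.Count;
depthStencilTextureDesc.SampleDesc.Quality = swapChainDesc.SampleDesc.Quality;
// With Count > 1, the depth stencil view also needs
// D3D11_DSV_DIMENSION_TEXTURE2DMS, as in the MSAA branch of the answer above.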

Direct3D11: Sharing a texture between devices: black texture

I have two D3D11 devices, each with its own context but on the same adapter.
I am trying to share a texture between the two, but the texture I receive on the other side is always black.
HRESULT hr;
// Make a shared texture on device_A / context_A
D3D11_TEXTURE2D_DESC desc;
ZeroMemory(&desc, sizeof(desc));
desc.Width = 1024;
desc.Height = 1024;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.SampleDesc.Quality = 0;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.CPUAccessFlags = 0;
desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
ID3D11Texture2D* copy_tex;
hr = device_A->CreateTexture2D(&desc, NULL, &copy_tex);
// Test the texture by filling it with some color
D3D11_RENDER_TARGET_VIEW_DESC rtvd = {};
rtvd.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
rtvd.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
rtvd.Texture2D.MipSlice = 0;
ID3D11RenderTargetView* copy_tex_view = 0;
hr = device_A->CreateRenderTargetView(copy_tex, &rtvd, &copy_tex_view);
FLOAT clear_color[4] = {1, 0, 0, 1};
context_A->ClearRenderTargetView(copy_tex_view, clear_color);
// Now try to share it to device_B:
IDXGIResource* copy_tex_resource = 0;
hr = copy_tex->QueryInterface( __uuidof(IDXGIResource), (void**)&copy_tex_resource );
HANDLE copy_tex_shared_handle = 0;
hr = copy_tex_resource->GetSharedHandle(&copy_tex_shared_handle);
IDXGIResource* copy_tex_resource_mirror = 0;
hr = device_B->OpenSharedResource(copy_tex_shared_handle, __uuidof(ID3D11Texture2D), (void**)&copy_tex_resource_mirror);
ID3D11Texture2D* copy_tex_mirror = 0;
hr = copy_tex_resource_mirror->QueryInterface(__uuidof(ID3D11Texture2D), (void**)(&copy_tex_mirror));
However: the copy_tex_mirror texture is always black.
I don't get any HRESULT error codes, and can even use copy_tex_mirror on device_B / context_B normally, but I can't get the pixel data that I put into it on device_A.
Am I missing something?
Thanks in advance!
How do you know that the texture is always black? :-)
GPU operations are queued up by Direct3D, so when you open the shared resource on device_B, the ClearRenderTargetView() on device_A might not have been carried out yet. According to the MSDN library documentation on ID3D11Device::OpenSharedResource Method:
If a shared texture is updated on one device ID3D11DeviceContext::Flush must be called on that device.
We had a lot of issues such as this when we implemented shared textures between devices at work. If you add D3D9 or OpenGL to the mix, the pitfalls multiply.
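A minimal sketch of the fix, based on the documentation quoted above (same variable names as in the question):
// Fill the shared texture on device_A ...
FLOAT clear_color[4] = { 1, 0, 0, 1 };
context_A->ClearRenderTargetView(copy_tex_view, clear_color);
// ... and flush so the queued GPU work is actually submitted before
// device_B opens and reads the shared resource.
context_A->Flush();
For more robust cross-device synchronization you can also create the texture with D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX and acquire/release it through IDXGIKeyedMutex, but a Flush is the minimum the documentation asks for.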

Unable to read depth buffer from Compute shader

I am unable to read the depth buffer from a compute shader.
I am using this in my HLSL code:
Texture2D<float4> gDepthTextures : register(t3);
// tried this.
//Texture2D<float> gDepthTextures : register(t3);
// and this.
//Texture2D<uint> gDepthTextures : register(t3);
// and this.
//Texture2D<uint4> gDepthTextures : register(t3);
And I am doing this to read the buffer:
outputTexture[dispatchThreadId.xy]=gDepthTextures.Load(int3(dispatchThreadId.xy,0));
And I am detaching the depth buffer from the render target:
ID3D11RenderTargetView *nullView[3]={NULL,NULL,NULL};
g_pImmediateContext->OMSetRenderTargets( 3, nullView, NULL );
Still, I am getting this error in the output:
*D3D11 ERROR: ID3D11DeviceContext::Dispatch: The Shader Resource View dimension declared in the shader code (TEXTURE2D) does not match the view type bound to slot 3 of the Compute Shader unit (BUFFER). This mismatch is invalid if the shader actually uses the view (e.g. it is not skipped due to shader code branching). [ EXECUTION ERROR #354: DEVICE_DRAW_VIEW_DIMENSION_MISMATCH]*
This is how I am creating the shader resource view:
// Create depth stencil texture
D3D11_TEXTURE2D_DESC descDepth;
ZeroMemory( &descDepth, sizeof(descDepth) );
descDepth.Width = width;
descDepth.Height = height;
descDepth.MipLevels = 1;
descDepth.ArraySize = 1;
descDepth.Format = DXGI_FORMAT_R32_TYPELESS;
descDepth.SampleDesc.Count = 1;
descDepth.SampleDesc.Quality = 0;
descDepth.Usage = D3D11_USAGE_DEFAULT;
descDepth.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
descDepth.CPUAccessFlags = 0;
descDepth.MiscFlags = 0;
hr = g_pd3dDevice->CreateTexture2D( &descDepth, NULL, &g_pDepthStencil );
if( FAILED( hr ) )
return hr;
// Create the depth stencil view
D3D11_DEPTH_STENCIL_VIEW_DESC descDSV;
ZeroMemory( &descDSV, sizeof(descDSV) );
descDSV.Format = DXGI_FORMAT_D32_FLOAT;
descDSV.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
descDSV.Texture2D.MipSlice = 0;
hr = g_pd3dDevice->CreateDepthStencilView( g_pDepthStencil, &descDSV, &g_pDepthStencilView );
if( FAILED( hr ) )
return hr;
// Create depth shader resource view.
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
ZeroMemory(&srvDesc,sizeof(D3D11_SHADER_RESOURCE_VIEW_DESC));
srvDesc.Format=DXGI_FORMAT_R32_UINT;
srvDesc.ViewDimension=D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MostDetailedMip=0;
srvDesc.Texture2D.MipLevels=1;
hr=g_pd3dDevice->CreateShaderResourceView(g_pDepthStencil,&srvDesc,&g_pDepthSRV);
if(FAILED(hr))
return hr;
I have tried all the formats mentioned here in combination with the HLSL texture formats float, float4, uint, and uint4, with no success. Any ideas?
Replace DXGI_FORMAT_R32_UINT with DXGI_FORMAT_R32_FLOAT for your shader resource view; since you use R32_TYPELESS for the resource and D32_FLOAT for the depth view, the underlying data is floating point.
Texture2D<float> gDepthTextures is then the declaration you need to load or sample the depth later.
Also, it looks like your texture is not bound properly to your compute shader (since the runtime tells you a buffer is bound in that slot).
Make sure you have:
immediateContext->CSSetShaderResources(3, 1, &g_pDepthSRV);
Called before your dispatch.
As a side note, to debug these types of issues you can also call CSGetShaderResources (and the other equivalent getters) to check what is bound in your pipeline before the call.
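Putting it together, a minimal sketch of the corrected view creation and binding, using the names from the question:
// View the R32_TYPELESS depth texture as float data.
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format = DXGI_FORMAT_R32_FLOAT;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MostDetailedMip = 0;
srvDesc.Texture2D.MipLevels = 1;
hr = g_pd3dDevice->CreateShaderResourceView(g_pDepthStencil, &srvDesc, &g_pDepthSRV);
// Bind it to t3 before Dispatch; the HLSL side declares Texture2D<float>.
g_pImmediateContext->CSSetShaderResources(3, 1, &g_pDepthSRV);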

Direct3D: Creating a render target view from a texture

I would like to create a render target view from a texture, for use in multiple render target rendering. I am currently able to create a render target view for the back buffer - all of that works nicely. Further, I am able to create the texture. However, when I try to build the view from it, I get an error.
First, here's the code:
D3D11_TEXTURE2D_DESC textureDesc;
ZeroMemory(&textureDesc, sizeof(textureDesc));
textureDesc.ArraySize = 1;
textureDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
textureDesc.CPUAccessFlags = 0;
textureDesc.Format = DXGI_FORMAT_R32_FLOAT;
textureDesc.Height = m_renderTargetSize.Height;
textureDesc.Width = m_renderTargetSize.Width;
textureDesc.MipLevels = 1;
textureDesc.MiscFlags = 0;
textureDesc.SampleDesc.Count = 1;
textureDesc.SampleDesc.Quality = 0;
textureDesc.Usage = D3D11_USAGE_DEFAULT;
ComPtr<ID3D11Texture2D> texture;
DX::ThrowIfFailed(
m_d3dDevice->CreateTexture2D(
&textureDesc,
nullptr,
&texture
)
);
D3D11_RENDER_TARGET_VIEW_DESC renderTargetViewDescription;
ZeroMemory(&renderTargetViewDescription, sizeof(renderTargetViewDescription));
renderTargetViewDescription.Format = textureDesc.Format;
DX::ThrowIfFailed(
m_d3dDevice->CreateRenderTargetView(
texture,
&renderTargetViewDescription,
&m_renderTargetView[1]
)
);
I am getting the following error from the compiler on the line with the call to CreateRenderTargetView:
Error: no suitable conversion function from
"Microsoft::WRL::ComPtr<ID3D11Texture2D>" to "ID3D11Resource *"
exists.
According to MSDN, ID3D11Texture2D inherits from ID3D11Resource. Do I have to somehow upcast first?
I am working in DirectX 11, and compiling with vc110.
It seems that WRL's ComPtr doesn't support implicit conversion to T* (unlike ATL's CComPtr), so you need to use the Get method:
DX::ThrowIfFailed(
m_d3dDevice->CreateRenderTargetView(
texture.Get(),
&renderTargetViewDescription,
&m_renderTargetView[1]
)
);