mapped resources and assigning data in direct3d11 - c++

Hi, I'm working with shaders and I've got a quick question about whether or not I can do something here. When mapping data to a buffer I normally see it done like this: define a class or struct to represent what's in the buffer.
class MatrixBuffer
{
public:
D3DXMATRIX worldMatrix;
D3DXMATRIX viewMatrix;
D3DXMATRIX projectionMatrix;
};
then set up an input element description when you create the input layout; I got all that. Now, when I see people update the buffers before rendering, it's mostly done like this:
D3D11_MAPPED_SUBRESOURCE mappedResource;
MatrixBuffer* dataPtr;
if(FAILED(deviceContext->Map(m_matrixBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource)))
return false;
dataPtr = (MatrixBuffer*)mappedResource.pData;
dataPtr->worldMatrix = worldMatrix;
dataPtr->viewMatrix = viewMatrix;
dataPtr->projectionMatrix = projectionMatrix;
deviceContext->Unmap(m_matrixBuffer, 0);
where the values assigned are valid D3DXMATRIXes. Seeing as that's okay, would anything go wrong if I just did it like this?
D3D11_MAPPED_SUBRESOURCE mappedResource;
MatrixBuffer* dataPtr;
if(FAILED(deviceContext->Map(m_matrixBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource)))
return false;
dataPtr = (MatrixBuffer*)mappedResource.pData;
*dataPtr = matrices;
deviceContext->Unmap(m_matrixBuffer, 0);
where matrices is a valid, filled-out MatrixBuffer object? Would there be dire consequences later because of some special way DirectX handles this, or is it fine?

I don't see anything wrong with your version, from a technical perspective.
It might be slightly harder to grasp what actually gets copied there, but from a theoretical viewpoint the resulting code should be pretty much the same. I would expect any decent optimizer to notice that the "long" version just copies several successive members, and to lump them all into one larger copy.

Try to avoid mapping, as it can stall the pipeline. For data that gets updated once a frame it's better to create the buffer with D3D11_USAGE_DEFAULT and then use the UpdateSubresource method to push new data in.
More on that here: http://blogs.msdn.com/b/shawnhar/archive/2008/04/14/stalling-the-pipeline.aspx
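Following that advice, the mapped version from the question could be replaced with something like this sketch, assuming m_matrixBuffer was created with D3D11_USAGE_DEFAULT and no CPU-access flags (MatrixBuffer, the matrices, and deviceContext are the names from the question above):

```cpp
// Fill a CPU-side copy first.
MatrixBuffer data;
data.worldMatrix      = worldMatrix;
data.viewMatrix       = viewMatrix;
data.projectionMatrix = projectionMatrix;

// One call copies the whole struct into the buffer; no Map/Unmap needed.
// (nullptr destination box = overwrite the entire resource.)
deviceContext->UpdateSubresource(m_matrixBuffer, 0, nullptr, &data, 0, 0);
```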


DirectX11 IASetVertexBuffers: "ID3D11Buffer*" is incompatible with parameter of type "ID3D11Buffer *const *"

A problem with returning a pointer from a class
I am making a project for a 3D course that involves utilizing DirectX 11. In assignments that were much smaller in scope we were encouraged to just put all the code in and around the main-file. For this project I wanted more structure, so I decided to subdivide the code into three classes: WindowHandler, D3DHandler and PipelineHandler.
D3DHandler sits in WindowHandler, and PipelineHandler sits in D3DHandler like so:
class WindowHandler
{
private:
    HWND window;
    MSG message;
    class D3DHandler
    {
    private:
        ID3D11Device* device;
        ID3D11DeviceContext* immediateContext;
        IDXGISwapChain* swapChain;
        ID3D11RenderTargetView* rtv;
        ID3D11Texture2D* dsTexture;
        ID3D11DepthStencilView* dsView;
        D3D11_VIEWPORT viewport;
        class PipelineHandler
        {
        private:
            ID3D11VertexShader* vShader;
            ID3D11PixelShader* pShader;
            ID3D11InputLayout* inputLayout;
            ID3D11Buffer* vertexBuffer;
        };
        PipelineHandler ph;
    };
};
(The classes are divided into their own .h files; I just wanted to compress the code a bit.)
Everything ran smoothly until I tried to bind my VertexBuffer in my Render() function.
void D3DHandler::Render()
{
float clearColor[] = { 0.0f, 0.0f, 0.0f, 1.0f };
this->immediateContext->ClearRenderTargetView(this->rtv, clearColor); //Clear backbuffer
this->immediateContext->ClearDepthStencilView(this->dsView, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);
UINT32 vertexSize = sizeof(Vertex); //Will be the size of 8 floats, x y z, r g b, u v
UINT32 offset = 0;
this->immediateContext->VSSetShader(this->ph.GetVertexShader(), nullptr, 0);
this->immediateContext->IASetVertexBuffers(0, 1, this->ph.GetVertexBuffer(), &vertexSize, &offset);
this->immediateContext->IASetInputLayout(this->ph.GetInputLayout());
this->immediateContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
this->immediateContext->PSSetShader(this->ph.GetPixelShader(), nullptr, 0);
//
// Draw geometry
this->immediateContext->Draw(3, 0);
}
The problem occurs at IASetVertexBuffers, where this->ph.GetVertexBuffer() gives the error
"ID3D11Buffer*" is incomaptible with parameter of type "ID3D11Buffer
*const *"
The Get-function in PipelineHandler returns a pointer to the COM object, and for VertexBuffer it looks like this:
ID3D11Buffer * PipelineHandler::GetVertexBuffer() const
{
return this->vertexBuffer;
}
From what documentation I've found, IASetVertexBuffers expects an array of buffers, i.e. a double pointer. However, in code I've worked with before, where the ID3D11Buffer* was global, the same call generated no errors (and yes, most of the code I'm working with so far has been wholesale copied from the last assignment in this course, just placed into classes).
I have tried:
removing the this-> pointer in front of ph
creating a new function, ID3D11Buffer & GetVertexBufferRef() const, which returns *this->vertexBuffer, and using that instead.
I have yet to try:
Moving vertexBuffer from PipelineHandler up to D3DHandler. I have a feeling that would work, but it would also disconnect the vertex buffer from the other objects that I consider part of the pipeline rather than the context, which is something I would like to avoid.
I am grateful for any feedback I can get. I'm fairly new to programming, so any comments that aren't a solution to my problem are just as welcome.
The problem you are having is that this function signature does not take an ID3D11Buffer* interface pointer. It takes an array of ID3D11Buffer* interface pointers.
this->immediateContext->IASetVertexBuffers(0, 1, this->ph.GetVertexBuffer(), &vertexSize, &offset);
Can be any of the following:
this->immediateContext->IASetVertexBuffers(0, 1, &this->ph.GetVertexBuffer(), &vertexSize, &offset);
-or-
ID3D11Buffer* vb = this->ph.GetVertexBuffer();
this->immediateContext->IASetVertexBuffers(0, 1, &vb, &vertexSize, &offset);
-or-
ID3D11Buffer* vb[] = { this->ph.GetVertexBuffer() };
this->immediateContext->IASetVertexBuffers(0, 1, vb, &vertexSize, &offset);
This is because the Direct3D 11 API supports 'multi-stream' rendering where you have more than one Vertex Buffer referenced in the Input Layout which is merged as it is drawn. The function also takes an array of vertex sizes and offsets, which is why you had to take the address of the values vertexSize and offset instead of just passing by value. See Microsoft Docs for full details on the function signature.
This feature was first introduced back in Direct3D 9, so the best introduction to it is found here. With Direct3D 11, depending on Direct3D hardware feature level you can use up to 16 or 32 slots at once. That said, most basic rendering uses a single stream/VertexBuffer.
You should take a look at this page as to why the 'first' option above can get you into trouble with smart-pointers.
For the 'second' option, I'd recommend using C++11's auto keyword, so it would be: auto vb = this->ph.GetVertexBuffer();
The 'third' option obviously makes most sense if you are using more than one VertexBuffer for multi-stream rendering.
Note that you can never have more than one ID3D11Buffer active as the Index Buffer, so that function signature does in fact take an ID3D11Buffer* rather than an array. See Microsoft Docs.
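Side by side, the two signatures look like this sketch. GetIndexBuffer() is a hypothetical accessor, assumed analogous to the GetVertexBuffer() from the question:

```cpp
// Vertex buffers: the API always takes arrays, even for a single stream.
ID3D11Buffer* vbs[] = { this->ph.GetVertexBuffer() };
UINT strides[] = { sizeof(Vertex) };
UINT offsets[] = { 0 };
this->immediateContext->IASetVertexBuffers(0, 1, vbs, strides, offsets);

// Index buffer: at most one can be bound, so the pointer is passed directly.
this->immediateContext->IASetIndexBuffer(this->ph.GetIndexBuffer(),
                                         DXGI_FORMAT_R32_UINT, 0);
```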

Intermittent error when setting matrix uniform for shader using member function

The code below wasn't working for 2 days, and after I fiddled around with it (created new functions, rewrote them, deleted them), now, strangely, it works! I'm afraid there's a fundamental C++ error here that my noob brain can't see.
void Shader::SetMat4(const char* key, const glm::mat4 &value) {
uint loc = glGetUniformLocation(id, key); // id is the program id stored as member var
glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(value));
}
If I set the uniform directly in the calling function instead of going through shader->SetMat4, it worked, but this just refused to work.
I suspect it has something to do with the way I'm passing a mat4 reference and the way value_ptr returns a const mat4::value_type*, but I don't see it. And now I'm afraid there's something in my code that will bite me a year from now :|.
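Nothing in the snippet alone diagnoses the intermittency, but one detail is worth hardening: glGetUniformLocation returns a GLint, and -1 means "uniform not found or optimized away", a sentinel that an unsigned uint local would hide. A defensive sketch of the same member function (not a claim about the original bug):

```cpp
void Shader::SetMat4(const char* key, const glm::mat4& value) {
    GLint loc = glGetUniformLocation(id, key); // id is the program id member
    if (loc == -1) {
        // Uniform missing or optimized out; fail loudly instead of silently.
        std::cerr << "uniform not found: " << key << '\n';
        return;
    }
    glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(value));
}
```

Also note that glUniformMatrix4fv affects the currently bound program, so glUseProgram(id) must have been called first; a shader object whose program isn't bound at call time is a classic cause of "works when set directly in the caller, not through the wrapper" uniform bugs.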

DirectX 12 Resource binding

How can I bind resources to different shader stages in D3D12?
I wrote two shaders, a vertex shader and a pixel shader:
Here is the Vertex shader:
//VertexShader.vs
float4 main(float3 posL : POSITION, uniform float4x4 gWVP) : SV_POSITION
{
return mul(float4(posL, 1.0f), gWVP);
}
Here is the Pixelshader:
//PixelShader.ps
float4 main(float4 PosH : SV_POSITION, uniform float4 Color) : SV_Target
{
return Color;
}
If I compile those two shaders with the D3DCompile function, reflect them with D3DReflect, and examine the BoundResources member of the shader description, they both have a constant buffer called $Params which contains the respective uniform variable. The problem is that both of those buffers are bound to slot 0. When binding resources I have to use the ID3D12RootSignature interface, which binds resources to the resource slots. How can I bind the $Params buffer of the vertex shader only to the vertex shader, and the $Params buffer of the pixel shader only to the pixel shader?
Thanks in advance,
Philinator
With DX12, the performant solution here is to create a root signature that meets your application needs, and ideally declare it in your HLSL as well. You don't want to change root signatures often, so you definitely want to make one that works for a large swath of your shaders.
Remember, DX12 is Direct3D without training wheels, magic, or shader patching, so you have to do it all explicitly yourself. Ease-of-use is a non-goal for Direct3D 12. There are plenty of good reasons to stick with Direct3D 11 unless your application and developer resources merit the extra control Direct3D 12 affords you. With great power comes great responsibility.
I believe you can achieve what you want with something like:
CD3DX12_DESCRIPTOR_RANGE descRange[1];
descRange[0].Init(D3D12_DESCRIPTOR_RANGE_TYPE_CBV, 1, 0);
CD3DX12_ROOT_PARAMETER rootParameters[2];
rootParameters[0].InitAsDescriptorTable(
1, &descRange[0], D3D12_SHADER_VISIBILITY_VERTEX); // b0
rootParameters[1].InitAsDescriptorTable(
1, &descRange[0], D3D12_SHADER_VISIBILITY_PIXEL); // b0
// Create the root signature.
CD3DX12_ROOT_SIGNATURE_DESC rootSignatureDesc(_countof(rootParameters),
rootParameters, 0, nullptr,
D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT);
ComPtr<ID3DBlob> signature;
ComPtr<ID3DBlob> error;
DX::ThrowIfFailed(D3D12SerializeRootSignature(&rootSignatureDesc,
D3D_ROOT_SIGNATURE_VERSION_1, &signature, &error));
DX::ThrowIfFailed(
device->CreateRootSignature(0, signature->GetBufferPointer(),
signature->GetBufferSize(),
IID_PPV_ARGS(m_rootSignature.ReleaseAndGetAddressOf())));
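For the "declare it in your HLSL as well" part, the same layout could be expressed with the root signature string syntax and attached to an entry point, roughly like this sketch (MainRS is just an illustrative macro name):

```hlsl
#define MainRS \
    "RootFlags(ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT), " \
    "DescriptorTable(CBV(b0), visibility = SHADER_VISIBILITY_VERTEX), " \
    "DescriptorTable(CBV(b0), visibility = SHADER_VISIBILITY_PIXEL)"

[RootSignature(MainRS)]
float4 main(float3 posL : POSITION, uniform float4x4 gWVP) : SV_POSITION
{
    return mul(float4(posL, 1.0f), gWVP);
}
```

Keeping the declaration next to the shaders makes it harder for the C++-side root signature and the HLSL expectations to drift apart.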

How to unbind a RenderTarget texture so it can be used as input to the next pass

I have a texture, along with its shaderresourceview, to which I render my scene's original image by using it as a RenderTarget.
Like millions before me, I then use it as an input to my next shader so I can blur, do HDR, all of that.
The simple but "new to me" problem is that when I call PSSetShaderResources to bind the texture as the shader's input, I get this error:
"Resource being set to PS shader resource slot 0 is still bound on output"
...which makes sense, as it may still well be. My question is how do I unbind the texture from being the render target of the initial pass?
I'm already doing this, which I presume sets it back to the original RenderTargetView that DXUT was using at the beginning, but it doesn't seem to free up the texture:
ID3D11RenderTargetView* renderTargetViewArray[1] = { DXUTGetD3D11RenderTargetView() };
pd3dImmediateContext->OMSetRenderTargets(1, renderTargetViewArray, nullptr);
Assuming your render target is in slot 0:
ID3D11RenderTargetView* nullRTV = nullptr;
d3dContext->OMSetRenderTargets(1, &nullRTV, nullptr);
You need to pass an array of ID3D11RenderTargetView* of length 1, the first of which needs to be nullptr.
When unbinding the texture from the pixel shader, this does NOT work:
ID3D11ShaderResourceView *const pSRV[1] = { NULL };
pd3dImmediateContext->PSSetShaderResources(0, 0, pSRV);
However, this DOES work:
ID3D11ShaderResourceView *const pSRV[1] = { NULL };
pd3dImmediateContext->PSSetShaderResources(0, 1, pSRV);
Put more simply: when setting the resource back to null, you still have to tell the API you're passing in one pointer, albeit a null one. Passing a null pointer along with a zero count appears to do nothing.

Dynamic Constantbuffer or Dynamic Vertex Buffer in c++ and DX11

I have a question related to memory usage when using a
dynamic constant buffer vs. a constant buffer updated frequently (using the default usage type) vs. a dynamic vertex buffer.
I have always defined the constant buffer usage as default and updated the changes in realtime, like so:
Eg 1
D3D11_BUFFER_DESC desc;
desc.Usage = D3D11_USAGE_DEFAULT;
// irrelevant code omitted
void Render()
{
WORLD = XMMatrixTranslation(x,y,z); // x, y, z are changed dynamically
ConstantBuffer cb;
cb.world = WORLD;
devcon->UpdateSubresource(constantBuffer,0,0,&cb,0,0);
// Set the VSSetBuffer and PSSetBuffer
}
But recently I came across a tutorial from Rastertek that used
devcon->Map() and devcon->Unmap() to update them, and he had defined the usage as dynamic:
Eg 2
void CreateBuffer(){
D3D11_BUFFER_DESC desc;
desc.Usage = D3D11_USAGE_DYNAMIC; // irrelevant code omitted
}
void Render()
{
WORLD = XMMatrixTranslation(x,y,z); // x, y, z are changed dynamically
D3D11_MAPPED_SUBRESOURCE mappedRes;
ConstantBuffer *cbPtr;
devcon->Map(constantBuffer,0,D3D11_MAP_WRITE_DISCARD,0,&mappedRes);
cbPtr = (ConstantBuffer*)mappedRes.pData;
cbPtr->World = WORLD;
devcon->Unmap(constantBuffer,0);
}
So the question is:
Is there any performance gain or hit from using a dynamic constant buffer (Eg 2) over a default-usage constant buffer updated at runtime (Eg 1)?
Please do help me clear up this doubt.
Thanks
The answer here like most performance advice is "It depends". Both are valid and it really depends on your content and rendering pattern.
The classic performance reference here is Windows to Reality: Getting the Most out of Direct3D 10 Graphics in Your Games from Gamefest 2007.
If you are dealing with lots of constants, Map of a DYNAMIC constant buffer is better if your data is scattered about and is collected as part of the update cycle. If all your constants are already laid out correctly in system memory, then UpdateSubresource is probably better. If you are reusing the same CB many times a frame and Map/Locking it, then you might run into 'rename' limits with Map/Lock that are less problematic with UpdateSubresource, so "it depends" really is the answer here.
And of course, all of this goes out the window with DirectX 12 which has an entirely different mechanism for handling the equivalent of dynamic updates.