Direct3D multiple vertex buffers, non-interleaved elements - C++

I'm trying to create two vertex buffers: one that stores only positions and another that stores only colors. This is an exercise from Frank Luna's book, meant to build familiarity with vertex descriptions, input layouts, and buffer options. The problem is that, although I feel I have made all the relevant changes and the geometry does show, the color buffer is not working. To complete the exercise, instead of using the book's Vertex vertices[]={{...},{...},...} from the example code, where each Vertex stores both the position and the color, I am using XMFLOAT3 pos[]={x,y,z...} for positions and XMFLOAT4 colors[]={a,b,c,...} for colors. I have modified the vertex description so that each element references the correct input slot:
D3D11_INPUT_ELEMENT_DESC vertexDesc[] =
{
{"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0},
{"COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 0, D3D11_INPUT_PER_VERTEX_DATA, 0}
};
I've also changed the vertex buffer binding on the device context to supply two vertex buffers instead of just mBoxVB, which was previously a resource based on Vertex vertices[]:
ID3D11Buffer* mBoxVB;
ID3D11Buffer* mBoxVBCol;
ID3D11Buffer* combined[2];
...
combined[0]=mBoxVB;
combined[1]=mBoxVBCol;
...
UINT stride[] = {sizeof(XMFLOAT3), sizeof(XMFLOAT4)};
UINT offset = 0;
md3dImmediateContext->IASetVertexBuffers(0, 2, combined, stride, &offset);
In case it helps, this is the buffer creation code:
D3D11_BUFFER_DESC vbd;
vbd.Usage = D3D11_USAGE_IMMUTABLE;
vbd.ByteWidth = sizeof(XMFLOAT3)*8;
vbd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
vbd.CPUAccessFlags = 0;
vbd.MiscFlags = 0;
vbd.StructureByteStride = 0;
D3D11_SUBRESOURCE_DATA vinitData;
vinitData.pSysMem = Shapes::boxEx1Pos;
HR(md3dDevice->CreateBuffer(&vbd, &vinitData, &mBoxVB));
D3D11_BUFFER_DESC cbd;
cbd.Usage = D3D11_USAGE_IMMUTABLE;
cbd.ByteWidth = sizeof(XMFLOAT4)*8;
cbd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
cbd.CPUAccessFlags = 0;
cbd.MiscFlags = 0;
cbd.StructureByteStride = 0;
D3D11_SUBRESOURCE_DATA cinitData;
cinitData.pSysMem = Shapes::boxEx1Col;
HR(md3dDevice->CreateBuffer(&cbd, &cinitData, &mBoxVBCol));
If I use combined[0]=mBoxVBCol instead of combined[0]=mBoxVB, I get a different shape, so I know there is some data there. But I'm not sure why the COLOR element is not getting through to the vertex shader; it seems to me that the second slot is being ignored in the operation.

I found the answer to the problem. Although I was specifying the stride array for both buffers, I was only giving one offset. Once I changed UINT offset = 0; to UINT offset[] = {0, 0}; (and passed offset rather than &offset), it worked: IASetVertexBuffers expects one stride and one offset per bound buffer.
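For reference, a minimal sketch of the corrected binding, using the same names as above:

ID3D11Buffer* combined[2] = { mBoxVB, mBoxVBCol };
UINT stride[2] = { sizeof(XMFLOAT3), sizeof(XMFLOAT4) }; // one stride per slot
UINT offset[2] = { 0, 0 };                               // one offset per slot
// NumBuffers is 2, so all three arrays need two entries each.
md3dImmediateContext->IASetVertexBuffers(0, 2, combined, stride, offset);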

Related

DirectX: Drawing a bitmap image scaled up in the viewport causes low quality?

I'm using DirectX to draw images from RGB data in a buffer. The following is a summary of the code:
// create the vertex buffer
D3D11_BUFFER_DESC bd;
ZeroMemory(&bd, sizeof(bd));
bd.Usage = D3D11_USAGE_DYNAMIC; // write access by the CPU, read access by the GPU
bd.ByteWidth = sizeOfOurVertices; // size is the VERTEX struct * pW*pH
bd.BindFlags = D3D11_BIND_VERTEX_BUFFER; // use as a vertex buffer
bd.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE; // allow CPU to write in buffer
dev->CreateBuffer(&bd, NULL, &pVBuffer); // create the buffer
// Create a sampler for the texture
D3D11_SAMPLER_DESC desc;
ZeroMemory(&desc, sizeof(desc)); // zero-init: the unset fields would otherwise be garbage
desc.Filter = D3D11_FILTER_ANISOTROPIC;
desc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
desc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
desc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
desc.MaxAnisotropy = 16;
desc.MaxLOD = D3D11_FLOAT32_MAX;
ID3D11SamplerState *ppSamplerState = NULL;
dev->CreateSamplerState(&desc, &ppSamplerState);
devcon->PSSetSamplers(0, 1, &ppSamplerState);
//Create list vertices from RGB data buffer
pW = bitmapSource->PixelWidth;
pH = bitmapSource->PixelHeight;
OurVertices = new VERTEX[pW*pH];
vIndex = 0;
unsigned char* curP = rgbPixelsBuff;
for (y = 0; y < pH; y++)
{
for (x = 0; x < pW; x++)
{
OurVertices[vIndex].Color.b = *curP++;
OurVertices[vIndex].Color.g = *curP++;
OurVertices[vIndex].Color.r = *curP++;
OurVertices[vIndex].Color.a = *curP++;
OurVertices[vIndex].X = x;
OurVertices[vIndex].Y = y;
OurVertices[vIndex].Z = 0.0f;
vIndex++;
}
}
sizeOfOurVertices = sizeof(VERTEX)* pW*pH;
// copy the vertices into the buffer
D3D11_MAPPED_SUBRESOURCE ms;
devcon->Map(pVBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &ms); // map the buffer
memcpy(ms.pData, OurVertices, sizeOfOurVertices); // copy the data
devcon->Unmap(pVBuffer, 0); // unmap the buffer
// clear the back buffer to a deep blue
devcon->ClearRenderTargetView(backbuffer, D3DXCOLOR(0.0f, 0.2f, 0.4f, 1.0f));
// select which vertex buffer to display
UINT stride = sizeof(VERTEX);
UINT offset = 0;
devcon->IASetVertexBuffers(0, 1, &pVBuffer, &stride, &offset);
// select which primtive type we are using
devcon->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_POINTLIST);
// draw the vertex buffer to the back buffer
devcon->Draw(pW*pH, 0);
// switch the back buffer and the front buffer
swapchain->Present(0, 0);
When the viewport is smaller than or equal to the image, everything is OK. But when the viewport is larger than the image, the image quality is very bad.
I've searched and tried to use desc.Filter = D3D11_FILTER_ANISOTROPIC; as in the code above (I've also tried D3D11_FILTER_MIN_POINT_MAG_MIP_LINEAR and D3D11_FILTER_MIN_LINEAR_MAG_MIP_POINT), but the result is no better. The following images show the result of displaying:
Can someone tell me how to fix it?
Many thanks!
You are drawing each pixel as a point using DirectX. It is normal that when the screen size gets bigger, your points move apart and the quality gets bad. You should draw a textured quad instead, using a texture that you fill with your RGB data and a pixel shader that samples it.
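A minimal sketch of that approach, assuming the dev/devcon/rgbPixelsBuff names from the question and an existing shader pair that samples a texture (shader and input-layout setup omitted):

// Create a pW x pH texture filled with the BGRA pixel data.
D3D11_TEXTURE2D_DESC td = {};
td.Width = pW;
td.Height = pH;
td.MipLevels = 1;
td.ArraySize = 1;
td.Format = DXGI_FORMAT_B8G8R8A8_UNORM; // matches the b,g,r,a byte order above
td.SampleDesc.Count = 1;
td.Usage = D3D11_USAGE_IMMUTABLE;
td.BindFlags = D3D11_BIND_SHADER_RESOURCE;
D3D11_SUBRESOURCE_DATA init = {};
init.pSysMem = rgbPixelsBuff;
init.SysMemPitch = pW * 4; // bytes per row
ID3D11Texture2D* tex = NULL;
dev->CreateTexture2D(&td, &init, &tex);
ID3D11ShaderResourceView* srv = NULL;
dev->CreateShaderResourceView(tex, NULL, &srv);
devcon->PSSetShaderResources(0, 1, &srv);
// Four vertices covering the viewport (clip-space position plus UV),
// drawn as a triangle strip; the sampler then does the filtering.
struct QuadVertex { float x, y, z, u, v; };
QuadVertex quad[4] =
{
    { -1.0f,  1.0f, 0.0f, 0.0f, 0.0f }, // top-left
    {  1.0f,  1.0f, 0.0f, 1.0f, 0.0f }, // top-right
    { -1.0f, -1.0f, 0.0f, 0.0f, 1.0f }, // bottom-left
    {  1.0f, -1.0f, 0.0f, 1.0f, 1.0f }, // bottom-right
};
// ...copy quad into the vertex buffer as before, then:
devcon->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
devcon->Draw(4, 0);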

DirectX 11: Model not rendering from Model class

Ok! Here goes. I've updated my code. However, after hours of debugging seemingly perfect code, I can't spot the problem. I've set up multiple breakpoints around the Vertex and Index buffer creation, and the class draw call.
I've created a temporary vtest struct for testing purposes. It carries this definition:
struct vtest{
XMFLOAT3 Vertex;
XMFLOAT4 Color;
};
Linked to a proper IA abstraction:
D3D11_INPUT_ELEMENT_DESC InputElementDesc[] = {
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
(All of which returns (HRESULT) S_OK.)
D3D11_BUFFER_DESC BufferDescription;
ZeroMemory(&BufferDescription, sizeof(BufferDescription));
BufferDescription.Usage = D3D11_USAGE_DYNAMIC;
BufferDescription.ByteWidth = sizeof(vtest) * sz_vBuffer;
BufferDescription.BindFlags = D3D11_BIND_VERTEX_BUFFER;
BufferDescription.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
BufferDescription.MiscFlags = 0;
D3D11_SUBRESOURCE_DATA SRData;
ZeroMemory(&SRData, sizeof(SRData));
SRData.pSysMem = test;
SRData.SysMemPitch = 0;
SRData.SysMemSlicePitch = 0;
hr = Device->CreateBuffer(&BufferDescription, &SRData, &g_vBuffer);
D3D11_MAPPED_SUBRESOURCE MappedResource;
ZeroMemory(&MappedResource, sizeof(MappedResource));
The vtest struct fills properly, and:
DeviceContext->Map(g_vBuffer, NULL, D3D11_MAP_WRITE_DISCARD, NULL, &MappedResource);
Succeeds, also with (HRESULT) S_OK.
The indices are initialized as follows (a one-dimensional DWORD array of indices):
D3D11_BUFFER_DESC iBufferDescription;
ZeroMemory(&iBufferDescription, sizeof(iBufferDescription));
iBufferDescription.Usage = D3D11_USAGE_DEFAULT;
iBufferDescription.ByteWidth = sizeof(DWORD)*sz_iBuffer;
iBufferDescription.BindFlags = D3D11_BIND_INDEX_BUFFER;
iBufferDescription.CPUAccessFlags = NULL;
iBufferDescription.MiscFlags = 0;
D3D11_SUBRESOURCE_DATA iSRData;
iSRData.pSysMem = Indices;
hr = direct3D.Device->CreateBuffer(&iBufferDescription, &iSRData, &g_iBuffer);
The IA Set... calls are in the draw() call:
DeviceContext->IASetVertexBuffers(0, 1, &g_vBuffer, &stride, &Offset);
DeviceContext->IASetIndexBuffer(g_iBuffer, DXGI_FORMAT_R32_UINT, 0);
Other settings: (Edit: Corrected values to show configuration.)
D3D11_RASTERIZER_DESC DrawStyleState;
DrawStyleState.AntialiasedLineEnable = false;
DrawStyleState.CullMode = D3D11_CULL_NONE;
DrawStyleState.DepthBias = 0;
DrawStyleState.FillMode = D3D11_FILL_SOLID;
DrawStyleState.DepthClipEnable = false;
DrawStyleState.MultisampleEnable = true;
DrawStyleState.FrontCounterClockwise = false;
DrawStyleState.ScissorEnable = false;
My depth stencil code:
D3D11_TEXTURE2D_DESC DepthStenDescription;
ZeroMemory(&DepthStenDescription, sizeof(D3D11_TEXTURE2D_DESC));
DepthStenDescription.Width = cWidth;
DepthStenDescription.Height = cHeight;
DepthStenDescription.MipLevels = 0;
DepthStenDescription.ArraySize = 1;
DepthStenDescription.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
DepthStenDescription.SampleDesc.Count = 1;
DepthStenDescription.SampleDesc.Quality = 0;
DepthStenDescription.Usage = D3D11_USAGE_DEFAULT;
DepthStenDescription.BindFlags = D3D11_BIND_DEPTH_STENCIL;
DepthStenDescription.CPUAccessFlags = 0;
DepthStenDescription.MiscFlags = 0;
D3D11_DEPTH_STENCIL_VIEW_DESC DSVDesc;
ZeroMemory(&DSVDesc, sizeof(D3D11_DEPTH_STENCIL_VIEW_DESC));
DSVDesc.Format = DepthStenDescription.Format;
DSVDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
DSVDesc.Texture2D.MipSlice = 0;
And finally, my entity class draw() method:
void Entity::Draw(){
UINT stride = sizeof(vtest);
UINT Offset = 0;
ObjectSpace = XMMatrixIdentity();
m_Scale = Scale();
m_Rotation = Rotate();
m_Translate = Translate();
ObjectSpace = m_Scale*m_Rotation*m_Translate;
mWVP = ObjectSpace*direct3D.mView*direct3D.mProjection;
LocalWorld.mWorldVP = XMMatrixTranspose(mWVP);
DeviceContext->UpdateSubresource(direct3D.MatrixBuffer, 0, NULL, &LocalWorld, 0, 0);
DeviceContext->VSSetConstantBuffers(0, 1, &direct3D.MatrixBuffer);
DeviceContext->IASetVertexBuffers(0, 1, &g_vBuffer, &stride, &Offset);
DeviceContext->IASetIndexBuffer(g_iBuffer, DXGI_FORMAT_R32_UINT, 0);
DeviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
DeviceContext->DrawIndexed(e_Asset.sz_Index, 0, 0);
}
The code compiles, and the backbuffer presents correctly, but no model.
Initialization of the DirectX objects seems to be fine too...
Update: Following Banex's suggestion, the Visual Studio DirectX debugging tools suggest that I may have gone wrong in my .hlsl file.
I also think I may have gone wrong at shader initialization, since my shader is very simple and essentially works as a vertex/pixel pass-through:
After examining the .hlsl file and doing further debugging: with output.position = position; instead of multiplying by the world matrix, the model was drawn on screen, implying a bad matrix calculation (an extreme warp) or null values stored in the constant buffer.
cbuffer ConstantBuffer:register(b0)
{
float4x4 WVP;
}
struct VOut
{
float4 position : SV_POSITION;
float4 color : COLOR;
};
VOut VShader(float4 position : POSITION, float4 color : COLOR)
{
VOut output;
output.position = position;// mul(position, WVP);
output.color = color;
return output;
}
float4 PShader(float4 position : SV_POSITION, float4 color : COLOR) : SV_TARGET
{
return color;
}
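One way to narrow this down (a sketch, assuming the DeviceContext, MatrixBuffer, and LocalWorld names from the Draw() method above): restore the multiply in the shader, but feed it an identity WVP. If the model then renders, the constant-buffer plumbing is fine and the fault is in the CPU-side matrix math.

// In the shader, restore:  output.position = mul(position, WVP);
// On the CPU side, upload an identity WVP (identity is its own transpose):
LocalWorld.mWorldVP = XMMatrixIdentity();
DeviceContext->UpdateSubresource(direct3D.MatrixBuffer, 0, NULL, &LocalWorld, 0, 0);
// Model visible -> the buffer path works; check the mWVP construction
// (scale/rotation/translation order) and the view/projection matrices.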

DirectX11 Mapping/Unmapping has no effect on rendered geometry

I have a simple rendering program I just made, but it refuses to draw anything but what I set the initial vertices to. Even if I call Map() and Unmap(), the geometry doesn't seem to change. I have a feeling it has to do with Map() and Unmap(), but I'm not sure. Right now, my initial vertex data consists of one triangle. Then I map the vertex buffer with a new set of vertices which consists of two triangles, but they aren't rendered. Only one triangle is rendered even though I pass in 6 for the vertex count in the draw function. Here is my setup code:
VertexData vertexData[] =
{
{0.0f,0.0f,0.0f,0.0f,0.0f,1.0f,0.0f,0.0f},
{0.0f,1.0f,0.0f,0.0f,0.0f,1.0f,0.0f,0.0f},
{1.0f,1.0f,0.0f,0.0f,0.0f,1.0f,0.0f,0.0f}
};
D3D11_BUFFER_DESC bd;
ZeroMemory(&bd,sizeof(bd));
bd.Usage = D3D11_USAGE_DYNAMIC;
bd.ByteWidth = sizeof(VertexData)*256;
bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
bd.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
D3D11_SUBRESOURCE_DATA vertexBufferData;
vertexBufferData.pSysMem = vertexData;
vertexBufferData.SysMemPitch = 0;
vertexBufferData.SysMemSlicePitch = 0;
_device->CreateBuffer(&bd, &vertexBufferData, &_renderBuffer);
Mapping function, where data is a vector containing six VertexData structs, for six vertices:
D3D11_MAPPED_SUBRESOURCE ms;
ZeroMemory(&ms,sizeof(D3D11_MAPPED_SUBRESOURCE));
_deviceContext->Map(_renderBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &ms);
memcpy(ms.pData,&data[0],sizeof(VertexData)*data.size());
_deviceContext->Unmap(_renderBuffer, 0);
And here is my rendering code:
_deviceContext->ClearRenderTargetView(_backBuffer,D3DXCOLOR(0.0f,0.0f,0.0f,1.0f));
_deviceContext->Draw(6,0);
_swapChain->Present(0,0);
EDIT: Disabled backface culling, but the triangle is still not appearing.
Mapping Function
void Render::CopyDataToBuffers(std::vector<VertexData> data)
{
D3D11_MAPPED_SUBRESOURCE ms;
ZeroMemory(&ms,sizeof(D3D11_MAPPED_SUBRESOURCE));
_deviceContext->Map(_renderBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &ms);
memcpy(ms.pData,&data[0],sizeof(VertexData)*data.size());
_deviceContext->Unmap(_renderBuffer, 0);
}
Calling of Mapping function
std::vector<VertexData> vertexDataVec; // vertexData is assumed to hold six vertices here
vertexDataVec.push_back(vertexData[0]);
vertexDataVec.push_back(vertexData[1]);
vertexDataVec.push_back(vertexData[2]);
vertexDataVec.push_back(vertexData[3]);
vertexDataVec.push_back(vertexData[4]);
vertexDataVec.push_back(vertexData[5]);
Render::GetRender().CopyDataToBuffers(vertexDataVec);
To fix the problem, I just had to create an ID3D11RasterizerState and disable culling. (By default D3D11 culls back faces: CullMode is D3D11_CULL_BACK with clockwise front faces, so the second triangle was presumably wound the other way and got culled.) Here is the setup:
ID3D11RasterizerState* rasterizerState = NULL;
D3D11_RASTERIZER_DESC rd =
{
    D3D11_FILL_SOLID,  // FillMode
    D3D11_CULL_NONE,   // CullMode: draw triangles of either winding
    TRUE,              // FrontCounterClockwise
    1,                 // DepthBias
    10.0F,             // DepthBiasClamp
    1.0F,              // SlopeScaledDepthBias
    TRUE               // DepthClipEnable (remaining fields default to 0/FALSE)
};
_device->CreateRasterizerState(&rd,&rasterizerState);
_deviceContext->RSSetState(rasterizerState);
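A leaner sketch of the same fix, assuming the CD3D11 helper structs from d3d11.h are available: start from the documented defaults and change only the cull mode.

CD3D11_RASTERIZER_DESC rd2(CD3D11_DEFAULT); // FillMode SOLID, CullMode BACK, CW front faces
rd2.CullMode = D3D11_CULL_NONE;             // accept triangles of either winding
ID3D11RasterizerState* noCullState = NULL;
_device->CreateRasterizerState(&rd2, &noCullState);
_deviceContext->RSSetState(noCullState);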

How does the VertexBuffer know the type of struct inside it?

// Lock the vertex buffer.
hr = aVertexBuffer->Map(D3D10_MAP_WRITE_DISCARD, 0, (void**)&verticesPtr);
// Copy the data into the vertex buffer.
memcpy(verticesPtr, (void*)vertices, (sizeof(LolAnyDataStructure)* aVertexCount));
In this case I am using the struct LolAnyDataStructure. How would DirectX know, when you call IASetVertexBuffers, where in the struct the texture and position are?
EDIT:
vertexBufferDesc.Usage = D3D10_USAGE_DYNAMIC;
vertexBufferDesc.ByteWidth = sizeof(VertexType)* aVertexCount;
vertexBufferDesc.BindFlags = D3D10_BIND_VERTEX_BUFFER;
vertexBufferDesc.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;
vertexBufferDesc.MiscFlags = 0;
The vertex buffer is stored as raw bytes; you need to create an input layout that describes those bytes. You should have something like this in your code:
// Define the input layout
D3D10_INPUT_ELEMENT_DESC layout[] =
{
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D10_INPUT_PER_VERTEX_DATA, 0 },
};
UINT numElements = sizeof( layout ) / sizeof( layout[0] );
// Create the input layout
D3D10_PASS_DESC PassDesc;
g_pTechnique->GetPassByIndex( 0 )->GetDesc( &PassDesc );
hr = g_pd3dDevice->CreateInputLayout( layout, numElements, PassDesc.pIAInputSignature,
PassDesc.IAInputSignatureSize, &g_pVertexLayout );
In this example, it is the layout that describes the struct.
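To connect this to the question: if LolAnyDataStructure held, say, a position followed by a texture coordinate (a hypothetical layout for illustration), the input layout would spell out each element's format and byte offset:

struct LolAnyDataStructure // hypothetical
{
    float position[3]; // bytes 0..11
    float texcoord[2]; // bytes 12..19
};
D3D10_INPUT_ELEMENT_DESC layout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0, D3D10_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0 },
};
// IASetVertexBuffers itself only supplies the stride, sizeof(LolAnyDataStructure);
// the input layout tells the IA stage how to slice each vertex into elements.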

Can't create vertex buffer

I have a Windows Phone 8 C#/XAML project with a DirectX component. I'm trying to render some particles. I create a vertex buffer, and I saw the creation code run, but by the time the vertex buffer is updated, the buffer is NULL. I have not released the buffers yet. Do you know why this happens? Do any of the output messages help? Thanks.
Printouts and errors on my Output Window:
'TaskHost.exe' (Win32): Loaded '\Device\HarddiskVolume4\Windows\System32\d3d11_1SDKLayers.dll'. Cannot find or open the PDB file.
D3D11 WARNING: ID3D11Texture2D::SetPrivateData: Existing private data of same name with different size found! [ STATE_SETTING WARNING #55: SETPRIVATEDATA_CHANGINGPARAMS]
Create vertex buffer
D3D11 WARNING: ID3D11DeviceContext::DrawIndexed: The Pixel Shader unit expects a Sampler to be set at Slot 0, but none is bound. This is perfectly valid, as a NULL Sampler maps to default Sampler state. However, the developer may not want to rely on the defaults. [ EXECUTION WARNING #352: DEVICE_DRAW_SAMPLER_NOT_SET]
The thread 0xb64 has exited with code 0 (0x0).
m_vertexBuffer is null
D3D11 WARNING: ID3D11DeviceContext::DrawIndexed: The Pixel Shader unit expects a Sampler to be set at Slot 0, but none is bound. This is perfectly valid, as a NULL Sampler maps to default Sampler state. However, the developer may not want to rely on the defaults. [ EXECUTION WARNING #352: DEVICE_DRAW_SAMPLER_NOT_SET]
m_vertexBuffer is null
In CreateDeviceResources, I call CreateVertexShader, CreateInputLayout, CreatePixelShader, CreateBuffer. Then I get to creating the sampler and vertex buffer, code below:
auto createCubeTask = (createPSTask && createVSTask).then([this] () {
// Create a texture sampler state description.
D3D11_SAMPLER_DESC samplerDesc;
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
samplerDesc.BorderColor[0] = 0;
samplerDesc.BorderColor[1] = 0;
samplerDesc.BorderColor[2] = 0;
samplerDesc.BorderColor[3] = 0;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;
// Create the texture sampler state.
HRESULT result = m_d3dDevice->CreateSamplerState(&samplerDesc, &m_sampleState);
if(FAILED(result))
{
OutputDebugString(L"Can't CreateSamplerState");
}
//InitParticleSystem();
// Set the maximum number of vertices in the vertex array.
m_vertexCount = m_maxParticles * 6;
// Set the maximum number of indices in the index array.
m_indexCount = m_vertexCount;
// Create the vertex array for the particles that will be rendered.
m_vertices = new VertexType[m_vertexCount];
if(!m_vertices)
{
OutputDebugString(L"Can't create the vertex array for the particles that will be rendered.");
}
else
{
// Initialize vertex array to zeros at first.
int sizeOfVertexType = sizeof(VertexType);
int totalSizeVertex = sizeOfVertexType * m_vertexCount;
memset(m_vertices, 0, totalSizeVertex);
D3D11_SUBRESOURCE_DATA vertexBufferData = {0};
vertexBufferData.pSysMem = m_vertices;
vertexBufferData.SysMemPitch = 0;
vertexBufferData.SysMemSlicePitch = 0;
int sizeOfMVertices = sizeof(m_vertices); // note: sizeof of a pointer, not of the vertex array
CD3D11_BUFFER_DESC vertexBufferDesc(
totalSizeVertex, // byteWidth
D3D11_BIND_VERTEX_BUFFER, // bindFlags
D3D11_USAGE_DYNAMIC, // D3D11_USAGE usage = D3D11_USAGE_DEFAULT
D3D11_CPU_ACCESS_WRITE, // cpuaccessFlags
0, // miscFlags
0 // structureByteStride
);
OutputDebugString(L"Create vertex buffer\n");
DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(
&vertexBufferDesc,
&vertexBufferData,
&m_vertexBuffer
)
);
}
unsigned long* indices = new unsigned long[m_indexCount];
if(!indices)
{
OutputDebugString(L"Can't create the index array.");
}
else
{
// Initialize the index array.
for(int i=0; i<m_indexCount; i++)
{
indices[i] = i;
}
// Set up the description of the static index buffer.
// Create the index array.
D3D11_BUFFER_DESC indexBufferDesc;
indexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
indexBufferDesc.ByteWidth = sizeof(unsigned long) * m_indexCount;
indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;
indexBufferDesc.CPUAccessFlags = 0;
indexBufferDesc.MiscFlags = 0;
indexBufferDesc.StructureByteStride = 0;
// Give the subresource structure a pointer to the index data.
D3D11_SUBRESOURCE_DATA indexData;
indexData.pSysMem = indices;
indexData.SysMemPitch = 0;
indexData.SysMemSlicePitch = 0;
// Create the index buffer.
DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(
&indexBufferDesc,
&indexData,
&m_indexBuffer
)
);
// Release the index array since it is no longer needed.
delete [] indices;
indices = 0;
}
});
createCubeTask.then([this] () {
m_loadingComplete = true;
});
}
My vertex is position, tex, and color:
struct VertexType
{
DirectX::XMFLOAT3 position;
DirectX::XMFLOAT2 texture;
DirectX::XMFLOAT4 color;
};
Microsoft::WRL::ComPtr<ID3D11SamplerState> m_sampleState;
VertexType* m_vertices;
Microsoft::WRL::ComPtr<ID3D11Buffer> m_vertexBuffer;
const D3D11_INPUT_ELEMENT_DESC vertexDesc[] =
{
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
VertexShader.HLSL:
cbuffer ModelViewProjectionConstantBuffer : register(b0)
{
matrix model;
matrix view;
matrix projection;
};
struct VertexInputType
{
float4 position : POSITION;
float2 tex : TEXCOORD0;
float4 color : COLOR;
};
struct PixelInputType
{
float4 position : SV_POSITION;
float2 tex : TEXCOORD0;
float4 color : COLOR;
};
PixelInputType main(VertexInputType input)
{
PixelInputType output;
// Change the position vector to be 4 units for proper matrix calculations.
input.position.w = 1.0f;
// Calculate the position of the vertex against the world, view, and projection matrices.
output.position = mul(input.position, model);
output.position = mul(output.position, view);
output.position = mul(output.position, projection);
// Store the texture coordinates for the pixel shader.
output.tex = input.tex;
// Store the particle color for the pixel shader.
output.color = input.color;
return output;
}
After the call to m_d3dDevice->CreateBuffer(&vertexBufferDesc, &vertexBufferData, &m_vertexBuffer), m_vertexBuffer is not null.
But when I get to my Update() function, m_vertexBuffer is NULL!
D3D11_MAPPED_SUBRESOURCE mappedResource;
if (m_vertexBuffer == nullptr)
{
OutputDebugString(L"m_vertexBuffer is null\n");
}
else
{
// Lock the vertex buffer.
DX::ThrowIfFailed(m_d3dContext->Map(m_vertexBuffer.Get(), 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource));
// Get a pointer to the data in the vertex buffer.
VertexType * verticesPtr = (VertexType*)mappedResource.pData;
//// Copy the data into the vertex buffer.
int sizeOfVertices = sizeof(VertexType) * m_vertexCount;
memcpy(verticesPtr, (void*)m_vertices, sizeOfVertices);
//// Unlock the vertex buffer.
m_d3dContext->Unmap(m_vertexBuffer.Get(), 0);
}
About the sampler: you need to assign it to your pixel shader.
m_d3dContext.Get()->PSSetSamplers(0, 1, m_sampleState.GetAddressOf());
I was making an incorrect call on m_vertexBuffer when I went to render it. This fixes the error with m_vertexBuffer being null.
Previous:
m_d3dContext.Get()->IASetVertexBuffers(0, 1, &m_vertexBuffer, &stride, &offset);
Fix:
m_d3dContext.Get()->IASetVertexBuffers(0, 1, m_vertexBuffer.GetAddressOf(), &stride, &offset);
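The likely mechanism: Microsoft::WRL::ComPtr overloads operator&, which calls ReleaseAndGetAddressOf(). That overload is meant for out-parameters, so using it to pass an "in" array releases the held interface first:

// ComPtr<ID3D11Buffer> m_vertexBuffer;
// &m_vertexBuffer               -> ReleaseAndGetAddressOf(): releases the buffer, then
//                                  returns the (now null) address; for out-params only
// m_vertexBuffer.GetAddressOf() -> returns the address without releasing; safe here

That is why m_vertexBuffer was null by the time Update() ran.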
It doesn't fix the warning about the sampler states though:
DrawIndexed: The Pixel Shader unit expects a Sampler to be set at Slot 0