DirectX 11: Model not rendering from Model class - c++

Ok! Here goes. I've updated my code. However, after hours of debugging seemingly perfect code, I can't spot the problem. I've set up multiple breakpoints around the Vertex and Index buffer creation, and the class draw call.
I've created a temporary vtest struct for testing purposes. Its definition:
struct vtest{
XMFLOAT3 Vertex;
XMFLOAT4 Color;
};
It is hooked up to a matching input-assembler (IA) layout description:
D3D11_INPUT_ELEMENT_DESC InputElementDesc[] = {
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
(All of which returns a (HRESULT) S_OK)
D3D11_BUFFER_DESC BufferDescription;
ZeroMemory(&BufferDescription, sizeof(BufferDescription));
BufferDescription.Usage = D3D11_USAGE_DYNAMIC;
BufferDescription.ByteWidth = sizeof(vtest) * sz_vBuffer;
BufferDescription.BindFlags = D3D11_BIND_VERTEX_BUFFER;
BufferDescription.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
BufferDescription.MiscFlags = 0;
D3D11_SUBRESOURCE_DATA SRData;
ZeroMemory(&SRData, sizeof(SRData));
SRData.pSysMem = test;
SRData.SysMemPitch = 0;
SRData.SysMemSlicePitch = 0;
hr = Device->CreateBuffer(&BufferDescription, &SRData, &g_vBuffer);
D3D11_MAPPED_SUBRESOURCE MappedResource;
ZeroMemory(&MappedResource, sizeof(MappedResource));
The vtest data fills in properly, and:
DeviceContext->Map(g_vBuffer, NULL, D3D11_MAP_WRITE_DISCARD, NULL, &MappedResource);
Succeeds, also with (HRESULT) S_OK.
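What usually follows a successful Map on a D3D11_USAGE_DYNAMIC buffer is the copy and the Unmap before drawing; a minimal sketch using the names above (assuming test points to sz_vBuffer contiguous vtest elements):
// MappedResource.pData now points at the driver-visible memory for the whole buffer.
memcpy(MappedResource.pData, test, sizeof(vtest) * sz_vBuffer);
DeviceContext->Unmap(g_vBuffer, 0);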
The indices are initialized as follows (a one-dimensional DWORD array of indices):
D3D11_BUFFER_DESC iBufferDescription;
ZeroMemory(&iBufferDescription, sizeof(iBufferDescription));
iBufferDescription.Usage = D3D11_USAGE_DEFAULT;
iBufferDescription.ByteWidth = sizeof(DWORD)*sz_iBuffer;
iBufferDescription.BindFlags = D3D11_BIND_INDEX_BUFFER;
iBufferDescription.CPUAccessFlags = NULL;
iBufferDescription.MiscFlags = 0;
D3D11_SUBRESOURCE_DATA iSRData;
iSRData.pSysMem = Indices;
hr = direct3D.Device->CreateBuffer(&iBufferDescription, &iSRData, &g_iBuffer);
The IA Set... calls are in the draw() call:
DeviceContext->IASetVertexBuffers(0, 1, &g_vBuffer, &stride, &Offset);
DeviceContext->IASetIndexBuffer(g_iBuffer, DXGI_FORMAT_R32_UINT, 0);
Other settings: (Edit: Corrected values to show configuration.)
D3D11_RASTERIZER_DESC DrawStyleState;
DrawStyleState.AntialiasedLineEnable = false;
DrawStyleState.CullMode = D3D11_CULL_NONE;
DrawStyleState.DepthBias = 0;
DrawStyleState.FillMode = D3D11_FILL_SOLID;
DrawStyleState.DepthClipEnable = false;
DrawStyleState.MultisampleEnable = true;
DrawStyleState.FrontCounterClockwise = false;
DrawStyleState.ScissorEnable = false;
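Filling the description does nothing on its own; the state still has to be created and bound. A minimal sketch, assuming the Device/DeviceContext names used above (pDrawStyleState is a hypothetical name):
// Note: DrawStyleState above is not zero-initialized, so DepthBiasClamp and
// SlopeScaledDepthBias would also need explicit values before this call.
ID3D11RasterizerState* pDrawStyleState = nullptr;
hr = Device->CreateRasterizerState(&DrawStyleState, &pDrawStyleState);
if (SUCCEEDED(hr))
{
    DeviceContext->RSSetState(pDrawStyleState);
}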
My Depth Stencil code.
D3D11_TEXTURE2D_DESC DepthStenDescription;
ZeroMemory(&DepthStenDescription, sizeof(D3D11_TEXTURE2D_DESC));
DepthStenDescription.Width = cWidth;
DepthStenDescription.Height = cHeight;
DepthStenDescription.MipLevels = 0;
DepthStenDescription.ArraySize = 1;
DepthStenDescription.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
DepthStenDescription.SampleDesc.Count = 1;
DepthStenDescription.SampleDesc.Quality = 0;
DepthStenDescription.Usage = D3D11_USAGE_DEFAULT;
DepthStenDescription.BindFlags = D3D11_BIND_DEPTH_STENCIL;
DepthStenDescription.CPUAccessFlags = 0;
DepthStenDescription.MiscFlags = 0;
D3D11_DEPTH_STENCIL_VIEW_DESC DSVDesc;
ZeroMemory(&DSVDesc, sizeof(D3D11_DEPTH_STENCIL_VIEW_DESC));
DSVDesc.Format = DSVDesc.Format;
DSVDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
DSVDesc.Texture2D.MipSlice = 0;
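For completeness, the texture, view, and output-merger binding that usually accompany those two descriptions look roughly like this; the pointer names (and the render-target view) are assumptions, not code from the question:
ID3D11Texture2D* pDepthStencilTexture = nullptr;
ID3D11DepthStencilView* pDepthStencilView = nullptr;
hr = Device->CreateTexture2D(&DepthStenDescription, NULL, &pDepthStencilTexture);
if (SUCCEEDED(hr))
{
    hr = Device->CreateDepthStencilView(pDepthStencilTexture, &DSVDesc, &pDepthStencilView);
}
if (SUCCEEDED(hr))
{
    // RenderTargetView is assumed to be the back buffer's ID3D11RenderTargetView*.
    DeviceContext->OMSetRenderTargets(1, &RenderTargetView, pDepthStencilView);
}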
And finally, my entity class draw() method:
void Entity::Draw(){
UINT stride = sizeof(vtest);
UINT Offset = 0;
ObjectSpace = XMMatrixIdentity();
m_Scale = Scale();
m_Rotation = Rotate();
m_Translate = Translate();
ObjectSpace = m_Scale*m_Rotation*m_Translate;
mWVP = ObjectSpace*direct3D.mView*direct3D.mProjection;
LocalWorld.mWorldVP = XMMatrixTranspose(mWVP);
DeviceContext->UpdateSubresource(direct3D.MatrixBuffer, 0, NULL, &LocalWorld, 0, 0);
DeviceContext->VSSetConstantBuffers(0, 1, &direct3D.MatrixBuffer);
DeviceContext->IASetVertexBuffers(0, 1, &g_vBuffer, &stride, &Offset);
DeviceContext->IASetIndexBuffer(g_iBuffer, DXGI_FORMAT_R32_UINT, 0);
DeviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
DeviceContext->DrawIndexed(e_Asset.sz_Index, 0, 0);
}
The code compiles and the back buffer presents correctly, but no model appears.
Initialization of the DirectX objects seems to be fine too...
Update: Following Banex's suggestion and using the Visual Studio DirectX debugging tools, it appears I may have gone wrong in my .hlsl file.
I also think I may have gone wrong at shader initialization, since my shader is very simple and really just works as a vertex/pixel pass-through:

After examining the .hlsl file and doing further debugging: with output.position = position; (rather than multiplying by the world matrix), the model was drawn on screen, implying a bad matrix calculation causing an extreme warp, or null values stored in the constant buffer.
cbuffer ConstantBuffer:register(b0)
{
float4x4 WVP;
}
struct VOut
{
float4 position : SV_POSITION;
float4 color : COLOR;
};
VOut VShader(float4 position : POSITION, float4 color : COLOR)
{
VOut output;
output.position = position;// mul(position, WVP);
output.color = color;
return output;
}
float4 PShader(float4 position : SV_POSITION, float4 color : COLOR) : SV_TARGET
{
return color;
}
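For reference, the C++ side of that cbuffer is typically a 16-byte-multiple struct holding the transposed world-view-projection matrix (XMMATRIX is row-major while HLSL defaults to column-major packing, which is why the transpose pairs with mul(position, WVP)). A sketch that mirrors what Draw() above does; the struct name is an assumption:
struct cbPerObject            // C++ mirror of cbuffer ConstantBuffer : register(b0)
{
    XMMATRIX mWorldVP;        // 64 bytes, already a multiple of 16
};

cbPerObject LocalWorld;
LocalWorld.mWorldVP = XMMatrixTranspose(ObjectSpace * direct3D.mView * direct3D.mProjection);
DeviceContext->UpdateSubresource(direct3D.MatrixBuffer, 0, NULL, &LocalWorld, 0, 0);
DeviceContext->VSSetConstantBuffers(0, 1, &direct3D.MatrixBuffer);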

Related

DirectX 11 Not Drawing Small Vertex Buffer

I have a problem with my DirectX app. It can't draw primitives like D3D11_PRIMITIVE_TOPOLOGY_POINTLIST or D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST when the vertex count is a relatively small value (3 to 100 points). Over a thousand vertices, it suddenly draws lots of primitives, though I'm not sure it draws them all.
My code looks like this.
Main Code:
Fill Vertex Array
OurVertices = (VERTEX*)malloc(PointCount * sizeof(VERTEX));
for (int i = 0; i < PointCount; i++)
{
OurVertices[i] = { RandOm() * i,RandOm() * i ,1.0f ,{abs(RandOm()),abs(RandOm()),abs(RandOm()),1.0f} };
}
RandOm(): a random value from 0.0f to 1.0f, multiplied by i to get some realistic world coordinates.
Vertex Buffer
D3D11_BUFFER_DESC bd;
ZeroMemory(&bd, sizeof(bd));
bd.Usage = D3D11_USAGE_DYNAMIC;
bd.ByteWidth = sizeof(VERTEX)*PointCount;
bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
bd.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
dev->CreateBuffer(&bd, NULL, &pVBuffer);
VERTEX: struct VERTEX { FLOAT X, Y, Z; FLOAT Color[4]; };
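For reference, an input layout that matches this VERTEX struct and the POSITION/COLOR semantics used by the shader further down would look roughly like this; the question doesn't show its actual layout, so pVSBlob and pLayout are assumptions:
D3D11_INPUT_ELEMENT_DESC layout[] =
{
    // FLOAT X, Y, Z  -> 12 bytes at offset 0
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 0,  D3D11_INPUT_PER_VERTEX_DATA, 0 },
    // FLOAT Color[4] -> 16 bytes at offset 12
    { "COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
dev->CreateInputLayout(layout, ARRAYSIZE(layout),
    pVSBlob->GetBufferPointer(), pVSBlob->GetBufferSize(), &pLayout);
devcon->IASetInputLayout(pLayout);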
Mapping
devcon->Map(pVBuffer, NULL, D3D11_MAP_WRITE_DISCARD, NULL, &ms);
memcpy(ms.pData, OurVertices, PointCount* sizeof(VERTEX));
devcon->Unmap(pVBuffer, NULL);
devcon: ID3D11DeviceContext
Render
float color[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
devcon->ClearRenderTargetView(backbuffer,color);
UINT stride = sizeof(VERTEX);
UINT offset = 0;
devcon->IASetVertexBuffers(0, 1, &pVBuffer, &stride, &offset);
devcon->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
devcon->Draw(PointCount, 0);
swapchain->Present(0, 0);
Shader
VOut VShader(float4 position : POSITION, float4 color : COLOR) {
VOut output;
output.position = mul(world, position);
output.color = color;
return output; }
float4 PShader(float4 position : SV_POSITION, float4 color : COLOR) : SV_TARGET {
return color; }
XMMatrixOrthographicOffCenterLH is calculated and applied using a CBuffer, as can be seen in the shader's world variable. From my visual observations, I think the orthographic projection works fine. There are neither CPU nor GPU exceptions, and no warnings.
Rasterizer
D3D11_RASTERIZER_DESC RasterDesc = {};
RasterDesc.FillMode = D3D11_FILL_SOLID;
RasterDesc.CullMode = D3D11_CULL_NONE;
RasterDesc.DepthClipEnable = TRUE;
ID3D11RasterizerState* WireFrame=NULL;
dev->CreateRasterizerState(&RasterDesc, &WireFrame);
devcon->RSSetState(WireFrame);
Orthographic Projection
ID3D11Buffer* g_pConstantBuffer11 = NULL;
cbuff.world = XMMatrixOrthographicOffCenterLH(SceneY - (ViewPortWidth / 2)
* SceneZoom, SceneY + (ViewPortWidth / 2) * SceneZoom,
SceneX - (ViewPortHeight / 2) * SceneZoom, SceneX + (ViewPortHeight /
2) * SceneZoom,-10000.0f, 10000.0f);
D3D11_BUFFER_DESC cbDesc;
cbDesc.ByteWidth = sizeof(CBUFFER);
cbDesc.Usage = D3D11_USAGE_DYNAMIC;
cbDesc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
cbDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
cbDesc.MiscFlags = 0;
cbDesc.StructureByteStride = 0;
D3D11_SUBRESOURCE_DATA InitData;
InitData.pSysMem = &cbuff;
InitData.SysMemPitch = 0;
InitData.SysMemSlicePitch = 0;
dev->CreateBuffer(&cbDesc, &InitData,&g_pConstantBuffer11);
devcon->VSSetConstantBuffers(0, 1, &g_pConstantBuffer11);
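Since that constant buffer is created with D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE, later updates (for example when SceneZoom or the viewport changes) would go through Map/Unmap rather than another CreateBuffer; a minimal sketch reusing the names above:
D3D11_MAPPED_SUBRESOURCE mapped;
if (SUCCEEDED(devcon->Map(g_pConstantBuffer11, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
{
    memcpy(mapped.pData, &cbuff, sizeof(CBUFFER));   // re-upload the whole CBUFFER
    devcon->Unmap(g_pConstantBuffer11, 0);
}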

DirectX11 Offscreen rendering: output image is flipped

I'm writing my own graphics engine. I want to write it in C++ but build the UI for an editor in C#. So with some defines I disable rendering to a window and instead do off-screen rendering to pass the data to C#, but I have a problem. I understand why it's happening (it's how DirectX creates and stores textures) but have no clue how to fix it. Here are the results.
Imaged rendered to the window:
Image rendered to bmp (flipped):
In the first image everything looks good; in the second, as you can see, the Y (and maybe X, not sure) coordinates are flipped. Possibly useful information: I represent normals as color.
Here is my code
Vertex shader
cbuffer Transformation : register(b0) {
matrix transformation;
};
cbuffer ViewProjection : register(b1) {
matrix projection;
matrix view;
};
struct VS_OUT {
float2 texcoord : TextureCoordinate;
float3 normal : Normal;
float4 position : SV_Position;
};
VS_OUT main(float3 position : Position, float3 normal : Normal, float2 texcoord : TextureCoordinate) {
matrix tView = transpose(view);
matrix tProjection = transpose(projection);
matrix tTransformation = transpose(transformation);
matrix MVP = mul(tTransformation, mul(tView, tProjection));
VS_OUT result;
result.position = mul(float4(position, 1.0f), MVP);
result.texcoord = texcoord;
result.normal = normal;
return result;
}
Pixel shader
float4 main(float2 texcoord : TextureCoordinate, float3 normal : Normal) : SV_Target
{
float3 color = (normal + 1) * 0.5f;
return float4(color.rgb, 1.0f);
}
DirectX code for offscreen rendering initialization
D3D_FEATURE_LEVEL FeatureLevels[] = {
D3D_FEATURE_LEVEL_11_1,
D3D_FEATURE_LEVEL_11_0,
D3D_FEATURE_LEVEL_10_1,
D3D_FEATURE_LEVEL_10_0,
D3D_FEATURE_LEVEL_9_3,
D3D_FEATURE_LEVEL_9_2,
D3D_FEATURE_LEVEL_9_1
};
UINT deviceFlags = 0;
#if defined(DEBUG) || defined(_DEBUG)
deviceFlags |= D3D11_CREATE_DEVICE_DEBUG;
#endif
DirectX11Call(D3D11CreateDevice(
nullptr,
D3D_DRIVER_TYPE_HARDWARE,
nullptr,
deviceFlags,
FeatureLevels,
ARRAYSIZE(FeatureLevels),
D3D11_SDK_VERSION,
&m_Device,
&m_FeatureLevel,
&m_DeviceContext
))
D3D11_TEXTURE2D_DESC renderingDescription = {};
renderingDescription.Width = width;
renderingDescription.Height = height;
renderingDescription.ArraySize = 1;
renderingDescription.SampleDesc.Count = 1;
renderingDescription.Usage = D3D11_USAGE_DEFAULT;
renderingDescription.BindFlags = D3D11_BIND_RENDER_TARGET;
renderingDescription.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
m_Device->CreateTexture2D(&renderingDescription, nullptr, &m_Target);
renderingDescription.BindFlags = 0;
renderingDescription.Usage = D3D11_USAGE_STAGING;
renderingDescription.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
m_Device->CreateTexture2D(&renderingDescription, nullptr, &m_Output);
DirectX11Call(m_Device->CreateRenderTargetView(m_Target.Get(), nullptr, &m_RenderTargetView))
D3D11_DEPTH_STENCIL_DESC depthStencilStateDescription = {};
depthStencilStateDescription.DepthEnable = TRUE;
depthStencilStateDescription.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
depthStencilStateDescription.DepthFunc = D3D11_COMPARISON_LESS;
Microsoft::WRL::ComPtr<ID3D11DepthStencilState> depthStencilState;
DirectX11Call(m_Device->CreateDepthStencilState(&depthStencilStateDescription, &depthStencilState))
m_DeviceContext->OMSetDepthStencilState(depthStencilState.Get(), 0);
D3D11_TEXTURE2D_DESC depthStencilDescription = {};
depthStencilDescription.Width = width;
depthStencilDescription.Height = height;
depthStencilDescription.MipLevels = 1;
depthStencilDescription.ArraySize = 1;
depthStencilDescription.Format = DXGI_FORMAT_D32_FLOAT;
depthStencilDescription.SampleDesc.Count = 1;
depthStencilDescription.SampleDesc.Quality = 0;
depthStencilDescription.Usage = D3D11_USAGE_DEFAULT;
depthStencilDescription.BindFlags = D3D11_BIND_DEPTH_STENCIL;
depthStencilDescription.CPUAccessFlags = 0;
depthStencilDescription.MiscFlags = 0;
DirectX11Call(m_Device->CreateTexture2D(&depthStencilDescription, nullptr, &m_DepthStencilBuffer))
D3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDescription = {};
depthStencilViewDescription.Format = DXGI_FORMAT_D32_FLOAT;
depthStencilViewDescription.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
depthStencilViewDescription.Texture2D.MipSlice = 0;
DirectX11Call(m_Device->CreateDepthStencilView(m_DepthStencilBuffer.Get(), &depthStencilViewDescription, &m_DepthStencilView))
m_DeviceContext->OMSetRenderTargets(1, m_RenderTargetView.GetAddressOf(), m_DepthStencilView.Get());
m_DeviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
D3D11_VIEWPORT viewPort;
viewPort.TopLeftX = 0.0f;
viewPort.TopLeftY = 0.0f;
viewPort.Width = static_cast<float>(width);
viewPort.Height = static_cast<float>(height);
viewPort.MinDepth = 0.0f;
viewPort.MaxDepth = 1.0f;
m_DeviceContext->RSSetViewports(1, &viewPort);
D3D11_RASTERIZER_DESC rasterizerDescription = {};
rasterizerDescription.FillMode = D3D11_FILL_SOLID;
rasterizerDescription.CullMode = D3D11_CULL_FRONT;
Microsoft::WRL::ComPtr<ID3D11RasterizerState> rasterizerState;
DirectX11Call(m_Device->CreateRasterizerState(&rasterizerDescription, &rasterizerState))
m_DeviceContext->RSSetState(rasterizerState.Get());
Code for drawing to texture
m_DeviceContext->Flush();
m_DeviceContext->CopyResource(m_Output.Get(), m_Target.Get());
static const UINT resource_id = D3D11CalcSubresource(0, 0, 0);
m_DeviceContext->Map(m_Output.Get(), resource_id, D3D11_MAP_READ, 0, &m_OutputResource);
The only difference from rendering to a window is that there I also create a swap chain. So my question is: how can I fix this? (Flipping on the CPU is a bad solution, and it may cause problems with shaders, like in this example where I have a different color for the sphere.)
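Separately from the flip itself, reading the mapped staging texture generally has to honor RowPitch, because the driver may pad each row. A sketch of a row-by-row copy into a tightly packed CPU buffer, assuming 4 bytes per pixel from the DXGI_FORMAT_R8G8B8A8_UNORM target above (std::vector used for brevity):
// m_OutputResource is the D3D11_MAPPED_SUBRESOURCE filled by the Map() call above.
const uint8_t* src = static_cast<const uint8_t*>(m_OutputResource.pData);
std::vector<uint8_t> pixels(static_cast<size_t>(width) * height * 4);
for (UINT row = 0; row < static_cast<UINT>(height); ++row)
{
    memcpy(pixels.data() + static_cast<size_t>(row) * width * 4,          // tightly packed destination
           src + static_cast<size_t>(row) * m_OutputResource.RowPitch,    // padded source row
           static_cast<size_t>(width) * 4);
}
m_DeviceContext->Unmap(m_Output.Get(), resource_id);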

C++ DirectX11 Texture On Terrain Not Rendering Properly

I am developing a game engine using the Rastertek tutorials.
My problem is that the terrain texture isn't loading properly.
Pixel Shader:
Texture2D shaderTexture;
SamplerState SampleType;
cbuffer LightBuffer
{
float4 ambientColor;
float4 diffuseColor;
float3 lightDirection;
float padding;
};
//////////////
// TYPEDEFS //
//////////////
struct PixelInputType
{
float4 position : SV_POSITION;
float2 tex : TEXCOORD0;
float3 normal : NORMAL;
};
////////////////////////////////////////////////////////////////////////////////
// Pixel Shader
////////////////////////////////////////////////////////////////////////////////
float4 TerrainPixelShader(PixelInputType input) : SV_TARGET
{
float4 textureColor;
float3 lightDir;
float lightIntensity;
float4 color;
// Sample the pixel color from the texture using the sampler at this texture coordinate location.
textureColor = shaderTexture.Sample(SampleType, input.tex);
// Set the default output color to the ambient light value for all pixels.
color = ambientColor;
// Invert the light direction for calculations.
lightDir = -lightDirection;
// Calculate the amount of light on this pixel.
lightIntensity = saturate(dot(input.normal, lightDir));
if(lightIntensity > 0.0f)
{
// Determine the final diffuse color based on the diffuse color and the amount of light intensity.
color += (diffuseColor * lightIntensity);
}
// Saturate the final light color.
color = saturate(color);
// Multiply the texture pixel and the final light color to get the result.
color = color * textureColor;
return color;
}
Vertex Shader:
cbuffer MatrixBuffer
{
matrix worldMatrix;
matrix viewMatrix;
matrix projectionMatrix;
};
//////////////
// TYPEDEFS //
//////////////
struct VertexInputType
{
float4 position : POSITION;
float2 tex : TEXCOORD0;
float3 normal : NORMAL;
};
struct PixelInputType
{
float4 position : SV_POSITION;
float2 tex : TEXCOORD0;
float3 normal : NORMAL;
};
////////////////////////////////////////////////////////////////////////////////
// Vertex Shader
////////////////////////////////////////////////////////////////////////////////
PixelInputType TerrainVertexShader(VertexInputType input)
{
PixelInputType output;
// Change the position vector to be 4 units for proper matrix calculations.
input.position.w = 1.0f;
// Calculate the position of the vertex against the world, view, and projection matrices.
output.position = mul(input.position, worldMatrix);
output.position = mul(output.position, viewMatrix);
output.position = mul(output.position, projectionMatrix);
// Store the texture coordinates for the pixel shader.
output.tex = input.tex;
// Calculate the normal vector against the world matrix only.
output.normal = mul(input.normal, (float3x3)worldMatrix);
// Normalize the normal vector.
output.normal = normalize(output.normal);
return output;
}
Terrain Shader Class:
bool TerrainShaderClass::SetShaderParameters(ID3D11DeviceContext* deviceContext, D3DXMATRIX world, D3DXMATRIX view,
D3DXMATRIX projection, D3DXVECTOR4 ambientColor, D3DXVECTOR4 diffuseColor, D3DXVECTOR3 lightDirection,
ID3D11ShaderResourceView* texture)
{
HRESULT result;
D3D11_MAPPED_SUBRESOURCE mappedResource;
unsigned int bufferNumber;
MatrixBufferType* matrixData;
LightBufferType* lightData;
D3DXMatrixTranspose(&world, &world);
D3DXMatrixTranspose(&view, &view);
D3DXMatrixTranspose(&projection, &projection);
result = deviceContext->Map(m_matrixBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
if (FAILED(result))
{
return false;
}
matrixData = (MatrixBufferType*)mappedResource.pData;
matrixData->world = world;
matrixData->view = view;
matrixData->projection = projection;
deviceContext->Unmap(m_matrixBuffer, 0);
bufferNumber = 0;
deviceContext->VSSetConstantBuffers(bufferNumber, 1, &m_matrixBuffer);
deviceContext->Map(m_lightBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
lightData = (LightBufferType*)mappedResource.pData;
lightData->ambientColor = ambientColor;
lightData->diffuseColor = diffuseColor;
lightData->lightDirection = lightDirection;
lightData->padding = 0.0f;
deviceContext->Unmap(m_lightBuffer, 0);
bufferNumber = 0;
deviceContext->PSSetConstantBuffers(bufferNumber, 1, &m_lightBuffer);
deviceContext->PSSetShaderResources(0, 1, &texture);
return true;
}
void TerrainShaderClass::OutputShaderErrorMessage(ID3D10Blob* errorMessage, HWND hwnd, LPCSTR shaderFileName)
{
char* compileErrors = (char*)(errorMessage->GetBufferPointer());
unsigned long bufferSize = errorMessage->GetBufferSize();
ofstream fout;
fout.open("shader-error.txt");
for (unsigned long i = 0; i < bufferSize; i++)
{
fout << compileErrors[i];
}
fout.close();
errorMessage->Release();
errorMessage = nullptr;
MessageBox(hwnd, "Error compiling shader. Check shader-error.txt for message.", shaderFileName, MB_OK);
}
void TerrainShaderClass::RenderShader(ID3D11DeviceContext* deviceContext, int indexCount)
{
deviceContext->IASetInputLayout(m_layout);
deviceContext->VSSetShader(m_vertexShader, NULL, 0);
deviceContext->PSSetShader(m_pixelShader, NULL, 0);
deviceContext->PSSetSamplers(0, 1, &m_samplerState);
deviceContext->DrawIndexed(indexCount, 0, 0);
}
bool TerrainShaderClass::InitializeShader(ID3D11Device* device, HWND hwnd, LPCSTR vsFileName, LPCSTR psFileName)
{
HRESULT result;
ID3D10Blob* errorMessage = nullptr;
ID3D10Blob* vertexShaderBuffer = nullptr;
ID3D10Blob* pixelShaderBuffer = nullptr;
D3D11_INPUT_ELEMENT_DESC polygonLayout[3];
unsigned int numElements;
D3D11_SAMPLER_DESC samplerDesc;
D3D11_BUFFER_DESC matrixBufferDesc;
D3D11_BUFFER_DESC lightBufferDesc;
result = D3DX11CompileFromFile(vsFileName, NULL, NULL, "TerrainVertexShader", "vs_5_0", D3D10_SHADER_ENABLE_STRICTNESS,
0, NULL, &vertexShaderBuffer, &errorMessage, NULL);
if (FAILED(result))
{
if (errorMessage)
{
OutputShaderErrorMessage(errorMessage, hwnd, vsFileName);
}
else
{
MessageBox(hwnd, "Missing Shader File", vsFileName, MB_OK);
}
return false;
}
result = D3DX11CompileFromFile(psFileName, NULL, NULL, "TerrainPixelShader", "ps_5_0", D3D10_SHADER_ENABLE_STRICTNESS,
0, NULL, &pixelShaderBuffer, &errorMessage, NULL);
if (FAILED(result))
{
if (errorMessage)
{
OutputShaderErrorMessage(errorMessage, hwnd, psFileName);
}
else
{
MessageBox(hwnd, "Missing Shader File", psFileName, MB_OK);
}
return false;
}
result = device->CreateVertexShader(vertexShaderBuffer->GetBufferPointer(), vertexShaderBuffer->GetBufferSize(), NULL, &m_vertexShader);
if (FAILED(result))
{
return false;
}
result = device->CreatePixelShader(pixelShaderBuffer->GetBufferPointer(), pixelShaderBuffer->GetBufferSize(), NULL, &m_pixelShader);
if (FAILED(result))
{
return false;
}
polygonLayout[0].SemanticName = "POSITION";
polygonLayout[0].SemanticIndex = 0;
polygonLayout[0].Format = DXGI_FORMAT_R32G32B32_FLOAT;
polygonLayout[0].InputSlot = 0;
polygonLayout[0].AlignedByteOffset = 0;
polygonLayout[0].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
polygonLayout[0].InstanceDataStepRate = 0;
polygonLayout[1].SemanticName = "TEXCOORD";
polygonLayout[1].SemanticIndex = 0;
polygonLayout[1].Format = DXGI_FORMAT_R32G32_FLOAT;
polygonLayout[1].InputSlot = 0;
polygonLayout[1].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;
polygonLayout[1].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
polygonLayout[1].InstanceDataStepRate = 0;
polygonLayout[2].SemanticName = "NORMAL";
polygonLayout[2].SemanticIndex = 0;
polygonLayout[2].Format = DXGI_FORMAT_R32G32B32_FLOAT;
polygonLayout[2].InputSlot = 0;
polygonLayout[2].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;
polygonLayout[2].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
polygonLayout[2].InstanceDataStepRate = 0;
numElements = sizeof(polygonLayout) / sizeof(polygonLayout[0]);
result = device->CreateInputLayout(polygonLayout, numElements, vertexShaderBuffer->GetBufferPointer(), vertexShaderBuffer->GetBufferSize(), &m_layout);
if (FAILED(result))
{
return false;
}
vertexShaderBuffer->Release();
vertexShaderBuffer = nullptr;
pixelShaderBuffer->Release();
pixelShaderBuffer = nullptr;
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
samplerDesc.BorderColor[0] = 0;
samplerDesc.BorderColor[1] = 0;
samplerDesc.BorderColor[2] = 0;
samplerDesc.BorderColor[3] = 0;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;
result = device->CreateSamplerState(&samplerDesc, &m_samplerState);
if (FAILED(result))
{
return false;
}
matrixBufferDesc.Usage = D3D11_USAGE_DYNAMIC;
matrixBufferDesc.ByteWidth = sizeof(MatrixBufferType);
matrixBufferDesc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
matrixBufferDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
matrixBufferDesc.MiscFlags = 0;
matrixBufferDesc.StructureByteStride = 0;
result = device->CreateBuffer(&matrixBufferDesc, NULL, &m_matrixBuffer);
if (FAILED(result))
{
return false;
}
//ByteWidth must be a multiple of 16 if using D3D11_BIND_CONSTANT_BUFFER or CreateBuffer will fail.
lightBufferDesc.Usage = D3D11_USAGE_DYNAMIC;
lightBufferDesc.ByteWidth = sizeof(LightBufferType);
lightBufferDesc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
lightBufferDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
lightBufferDesc.MiscFlags = 0;
lightBufferDesc.StructureByteStride = 0;
device->CreateBuffer(&lightBufferDesc, NULL, &m_lightBuffer);
if (FAILED(result))
{
return false;
}
return true;
}
The texture is supposed to look like the one displayed in the link, but instead it looks really weird (I can't seem to take a screenshot; I'll add one if possible).
I tried looking into other questions here, but none solved the issue.
I am still fairly new to DX11, so any help is much appreciated.
Edit: Here is a screenshot (left side: how it should look, right side: my game).
Looking at your screenshot, not only is your texture not rendering correctly, but your normals aren't either; otherwise you would at least have some diffuse shading it properly. I would surmise that, although your stride is correct, what you are pulling out of the buffer for UV and normal is not aligned properly. My first thoughts.
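A quick way to test that theory is to compare the vertex struct's offsets against the input layout, since D3D11_APPEND_ALIGNED_ELEMENT puts POSITION at 0, TEXCOORD at 12, and NORMAL at 20 with a 32-byte stride. A sketch, with the struct standing in for whatever the terrain vertex actually is (name and member types are assumptions):
#include <cstddef>   // offsetof
struct TerrainVertex
{
    float position[3];   // POSITION,  DXGI_FORMAT_R32G32B32_FLOAT -> offset 0
    float texture[2];    // TEXCOORD0, DXGI_FORMAT_R32G32_FLOAT    -> offset 12
    float normal[3];     // NORMAL,    DXGI_FORMAT_R32G32B32_FLOAT -> offset 20
};
static_assert(offsetof(TerrainVertex, texture) == 12, "TEXCOORD offset mismatch");
static_assert(offsetof(TerrainVertex, normal)  == 20, "NORMAL offset mismatch");
static_assert(sizeof(TerrainVertex) == 32, "stride passed to IASetVertexBuffers must match");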

HLSL Vertex Shader gets wrong input

I am currently working on a project, but my problem is that my vertex shader gets the wrong data, so my position values are no longer the same as I set them at the beginning.
So this is where I define the position/anchor of my sprite:
struct SpriteVertex
{
DirectX::XMFLOAT3 position;
float radius;
int textureIndex;
};
//Sprite renderer
vector<SpriteVertex> sprite_vertices;
SpriteVertex current;
current.position.x = 0;
current.position.y = 0;
current.position.z = 0;
current.radius = 100;
current.textureIndex = 0;
sprite_vertices.push_back(current);
g_SpriteRenderer->renderSprites(pd3dImmediateContext, sprite_vertices, g_camera);
So in my SpriteRenderer class I have the create method, where I set up the input layout and create an empty vertex buffer.
HRESULT SpriteRenderer::create(ID3D11Device* pDevice)
{
cout << "Spriterender Create has been called" << endl;
HRESULT hr;
D3D11_BUFFER_DESC bd;
ZeroMemory(&bd, sizeof(bd));
bd.Usage = D3D11_USAGE_DEFAULT;
bd.ByteWidth = 1024 * sizeof(SpriteVertex);
bd.BindFlags = D3D11_BIND_SHADER_RESOURCE;
bd.CPUAccessFlags = 0;
bd.MiscFlags = 0;
V(pDevice->CreateBuffer(&bd , nullptr, &m_pVertexBuffer));
const D3D11_INPUT_ELEMENT_DESC layout[] ={
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "RADIUS", 0, DXGI_FORMAT_R32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "TEXTUREINDEX",0,DXGI_FORMAT_R32_SINT,0,D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 }
};
UINT numEements = sizeof(layout) / sizeof(layout[0]);
D3DX11_PASS_DESC pd;
V_RETURN(m_pEffect->GetTechniqueByName("Render")->GetPassByName("SpritePass")->GetDesc(&pd));
V_RETURN(pDevice->CreateInputLayout(layout, numEements, pd.pIAInputSignature, pd.IAInputSignatureSize, &m_pInputLayout));
return S_OK;
}
And I have the render method which fills the buffer and is supposed to render it with the shader I coded:
void SpriteRenderer::renderSprites(ID3D11DeviceContext* context, const std::vector<SpriteVertex>& sprites, const CFirstPersonCamera& camera)
{
//cout << "SpriterenderrenderSprites has been called" << endl;
D3D11_BOX box;
box.left = 0; box.right = sprites.size()*sizeof(SpriteVertex);
box.top = 0; box.bottom = 1;
box.front = 0; box.back = 1;
context->UpdateSubresource(m_pVertexBuffer,0,&box,&sprites[0],0,0);
const UINT size = sizeof(SpriteVertex);
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_POINTLIST);
context->IASetInputLayout(m_pInputLayout);
context->IASetVertexBuffers(0, 0, &m_pVertexBuffer, &size, nullptr);
//setting shader resouirces
DirectX::XMMATRIX worldviewProj =camera.GetViewMatrix()*camera.GetProjMatrix();
m_pEffect->GetVariableByName("g_ViewProjection")->AsMatrix()->SetMatrix(( float* ) &worldviewProj);
m_pEffect->GetVariableByName("g_cameraRight")->AsVector()->SetFloatVector((float*) &camera.GetWorldRight());
m_pEffect->GetVariableByName("g_cameraUP")->AsVector()->SetFloatVector((float*)&camera.GetWorldUp());
m_pEffect->GetTechniqueByName("Render")->GetPassByName("SpritePass")->Apply( 0,context);
context->Draw(size,0);
}
So my big problem is that when I debug the shaders, my initial position, radius, and so on aren't even close to what I set:
Debug VS
I've been trying to fix this forever; any help would be really appreciated.
EDIT: HLSL Code might help ;=)
//--------------------------------------------------------------------------------------
// Shader resources
//--------------------------------------------------------------------------------------
//--------------------------------------------------------------------------------------
// Constant buffers
//--------------------------------------------------------------------------------------
cbuffer cbCOnstant
{
matrix g_ViewProjection;
float4 g_cameraRight;
float4 g_cameraUP;
};
//--------------------------------------------------------------------------------------
// Structs
//--------------------------------------------------------------------------------------
struct SpriteVertex
{
float3 POSITION : POSITION;
float RADIUS: RADIUS;
int TEXIN : TEXTUREINDEX;
};
struct PSVertex
{
float4 POSITION : SV_Position;
int TEXIN : TEXTUREINDEX;
};
//--------------------------------------------------------------------------------------
// Rasterizer states
//--------------------------------------------------------------------------------------
RasterizerState rsCullNone
{
CullMode = None;
};
//--------------------------------------------------------------------------------------
// DepthStates
//--------------------------------------------------------------------------------------
DepthStencilState EnableDepth
{
DepthEnable = TRUE;
DepthWriteMask = ALL;
DepthFunc = LESS_EQUAL;
};
BlendState NoBlending
{
AlphaToCoverageEnable = FALSE;
BlendEnable[0] = FALSE;
};
//--------------------------------------------------------------------------------------
// Shaders
//--------------------------------------------------------------------------------------
SpriteVertex DummyVS(SpriteVertex Input)
{
return Input;
}
[maxvertexcount(4)]
void SpriteGS(point SpriteVertex vertex[1], inout TriangleStream<PSVertex> stream){
PSVertex input;
input.TEXIN = vertex[0].TEXIN;
//bottom left
input.POSITION = mul(float4(vertex[0].POSITION,1) - vertex[0].RADIUS * g_cameraRight - vertex[0].RADIUS * g_cameraUP, g_ViewProjection);
stream.Append(input);
//top left
input.POSITION = mul(float4(vertex[0].POSITION,1) - vertex[0].RADIUS * g_cameraRight + vertex[0].RADIUS * g_cameraUP, g_ViewProjection);
stream.Append(input);
//top right
input.POSITION = mul(float4(vertex[0].POSITION,1) + vertex[0].RADIUS * g_cameraRight + vertex[0].RADIUS * g_cameraUP, g_ViewProjection);
stream.Append(input);
//bot right
input.POSITION = mul(float4(vertex[0].POSITION,1) + vertex[0].RADIUS * g_cameraRight - vertex[0].RADIUS * g_cameraUP, g_ViewProjection);
stream.Append(input);
}
float4 DummyPS(PSVertex input) : SV_Target0
{
return float4(1, 1, 0, 1);
}
//--------------------------------------------------------------------------------------
// Techniques
//--------------------------------------------------------------------------------------
technique11 Render
{
pass SpritePass
{
SetVertexShader(CompileShader(vs_5_0, DummyVS()));
SetGeometryShader(CompileShader(gs_5_0, SpriteGS()));
SetPixelShader(CompileShader(ps_5_0, DummyPS()));
SetRasterizerState(rsCullNone);
SetDepthStencilState(EnableDepth,0);
SetBlendState(NoBlending, float4(0.0f, 0.0f, 0.0f, 0.0f), 0xFFFFFFFF);
}
}
Your binding of the vertex buffer below is wrong:
context->IASetVertexBuffers(0, 0, &m_pVertexBuffer, &size, nullptr);
The second argument of ID3D11DeviceContext::IASetVertexBuffers is the number of buffers to bind; it has to be one here, and the offsets pointer has to be valid:
UINT offset = 0;
context->IASetVertexBuffers(0, 1, &m_pVertexBuffer, &size, &offset);
As general advice, you should turn on the debug device at creation with the D3D11_CREATE_DEVICE_DEBUG flag and look for any messages in the output. On Windows 10, you may have to install it first by following Microsoft's instructions for installing the debug device.
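For reference, enabling the debug layer is just an extra flag at device creation; a minimal sketch:
UINT flags = 0;
#if defined(_DEBUG)
flags |= D3D11_CREATE_DEVICE_DEBUG;   // needs the SDK layers / Graphics Tools installed
#endif
ID3D11Device* pDevice = nullptr;
ID3D11DeviceContext* pContext = nullptr;
D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
    nullptr, 0, D3D11_SDK_VERSION, &pDevice, nullptr, &pContext);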

Can't create vertex buffer

I have a Windows Phone 8 C#/XAML project with a DirectX component. I'm trying to render some particles. I create a vertex buffer, which I saw go through the creation function, but when it gets to updating the vertex buffer, the buffer is NULL. I have not released the buffers yet. Do you know why this happens? Do any of the output messages help? Thanks.
Printouts and errors on my Output Window:
'TaskHost.exe' (Win32): Loaded '\Device\HarddiskVolume4\Windows\System32\d3d11_1SDKLayers.dll'. Cannot find or open the PDB file.
D3D11 WARNING: ID3D11Texture2D::SetPrivateData: Existing private data of same name with different size found! [ STATE_SETTING WARNING #55: SETPRIVATEDATA_CHANGINGPARAMS]
Create vertex buffer
D3D11 WARNING: ID3D11DeviceContext::DrawIndexed: The Pixel Shader unit expects a Sampler to be set at Slot 0, but none is bound. This is perfectly valid, as a NULL Sampler maps to default Sampler state. However, the developer may not want to rely on the defaults. [ EXECUTION WARNING #352: DEVICE_DRAW_SAMPLER_NOT_SET]
The thread 0xb64 has exited with code 0 (0x0).
m_vertexBuffer is null
D3D11 WARNING: ID3D11DeviceContext::DrawIndexed: The Pixel Shader unit expects a Sampler to be set at Slot 0, but none is bound. This is perfectly valid, as a NULL Sampler maps to default Sampler state. However, the developer may not want to rely on the defaults. [ EXECUTION WARNING #352: DEVICE_DRAW_SAMPLER_NOT_SET]
m_vertexBuffer is null
In CreateDeviceResources, I call CreateVertexShader, CreateInputLayout, CreatePixelShader, and CreateBuffer. Then I get to creating the sampler and vertex buffer; code below:
auto createCubeTask = (createPSTask && createVSTask).then([this] () {
// Create a texture sampler state description.
D3D11_SAMPLER_DESC samplerDesc;
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
samplerDesc.BorderColor[0] = 0;
samplerDesc.BorderColor[1] = 0;
samplerDesc.BorderColor[2] = 0;
samplerDesc.BorderColor[3] = 0;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;
// Create the texture sampler state.
HRESULT result = m_d3dDevice->CreateSamplerState(&samplerDesc, &m_sampleState);
if(FAILED(result))
{
OutputDebugString(L"Can't CreateSamplerState");
}
//InitParticleSystem();
// Set the maximum number of vertices in the vertex array.
m_vertexCount = m_maxParticles * 6;
// Set the maximum number of indices in the index array.
m_indexCount = m_vertexCount;
// Create the vertex array for the particles that will be rendered.
m_vertices = new VertexType[m_vertexCount];
if(!m_vertices)
{
OutputDebugString(L"Can't create the vertex array for the particles that will be rendered.");
}
else
{
// Initialize vertex array to zeros at first.
int sizeOfVertexType = sizeof(VertexType);
int totalSizeVertex = sizeOfVertexType * m_vertexCount;
memset(m_vertices, 0, totalSizeVertex);
D3D11_SUBRESOURCE_DATA vertexBufferData = {0};
vertexBufferData.pSysMem = m_vertices;
vertexBufferData.SysMemPitch = 0;
vertexBufferData.SysMemSlicePitch = 0;
int sizeOfMVertices = sizeof(m_vertices);
CD3D11_BUFFER_DESC vertexBufferDesc(
totalSizeVertex, // byteWidth
D3D11_BIND_VERTEX_BUFFER, // bindFlags
D3D11_USAGE_DYNAMIC, // D3D11_USAGE usage = D3D11_USAGE_DEFAULT
D3D11_CPU_ACCESS_WRITE, // cpuaccessFlags
0, // miscFlags
0 // structureByteStride
);
OutputDebugString(L"Create vertex buffer\n");
DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(
&vertexBufferDesc,
&vertexBufferData,
&m_vertexBuffer
)
);
}
unsigned long* indices = new unsigned long[m_indexCount];
if(!indices)
{
OutputDebugString(L"Can't create the index array.");
}
else
{
// Initialize the index array.
for(int i=0; i<m_indexCount; i++)
{
indices[i] = i;
}
// Set up the description of the static index buffer.
// Create the index array.
D3D11_BUFFER_DESC indexBufferDesc;
indexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
indexBufferDesc.ByteWidth = sizeof(unsigned long) * m_indexCount;
indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;
indexBufferDesc.CPUAccessFlags = 0;
indexBufferDesc.MiscFlags = 0;
indexBufferDesc.StructureByteStride = 0;
// Give the subresource structure a pointer to the index data.
D3D11_SUBRESOURCE_DATA indexData;
indexData.pSysMem = indices;
indexData.SysMemPitch = 0;
indexData.SysMemSlicePitch = 0;
// Create the index buffer.
DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(
&indexBufferDesc,
&indexData,
&m_indexBuffer
)
);
// Release the index array since it is no longer needed.
delete [] indices;
indices = 0;
}
});
createCubeTask.then([this] () {
m_loadingComplete = true;
});
}
My vertex is position, tex, and color:
struct VertexType
{
DirectX::XMFLOAT3 position;
DirectX::XMFLOAT2 texture;
DirectX::XMFLOAT4 color;
};
Microsoft::WRL::ComPtr<ID3D11SamplerState> m_sampleState;
VertexType* m_vertices;
Microsoft::WRL::ComPtr<ID3D11Buffer> m_vertexBuffer;
const D3D11_INPUT_ELEMENT_DESC vertexDesc[] =
{
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
VertexShader.HLSL:
cbuffer ModelViewProjectionConstantBuffer : register(b0)
{
matrix model;
matrix view;
matrix projection;
};
struct VertexInputType
{
float4 position : POSITION;
float2 tex : TEXCOORD0;
float4 color : COLOR;
};
struct PixelInputType
{
float4 position : SV_POSITION;
float2 tex : TEXCOORD0;
float4 color : COLOR;
};
PixelInputType main(VertexInputType input)
{
PixelInputType output;
// Change the position vector to be 4 units for proper matrix calculations.
input.position.w = 1.0f;
// Calculate the position of the vertex against the world, view, and projection matrices.
output.position = mul(input.position, model);
output.position = mul(output.position, view);
output.position = mul(output.position, projection);
// Store the texture coordinates for the pixel shader.
output.tex = input.tex;
// Store the particle color for the pixel shader.
output.color = input.color;
return output;
}
After the call to m_d3dDevice->CreateBuffer(&vertexBufferDesc, &vertexBufferData, &m_vertexBuffer), m_vertexBuffer is not null.
But when I get to my Update() function, m_vertexBuffer is NULL!
D3D11_MAPPED_SUBRESOURCE mappedResource;
if (m_vertexBuffer == nullptr)
{
OutputDebugString(L"m_vertexBuffer is null\n");
}
else
{
// Lock the vertex buffer.
DX::ThrowIfFailed(m_d3dContext->Map(m_vertexBuffer.Get(), 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource));
// Get a pointer to the data in the vertex buffer.
VertexType * verticesPtr = (VertexType*)mappedResource.pData;
//// Copy the data into the vertex buffer.
int sizeOfVertices = sizeof(VertexType) * m_vertexCount;
memcpy(verticesPtr, (void*)m_vertices, sizeOfVertices);
//// Unlock the vertex buffer.
m_d3dContext->Unmap(m_vertexBuffer.Get(), 0);
}
About the sampler: you need to bind it to your pixel shader.
m_d3dContext.Get()->PSSetSamplers(0,1,&m_sampleState);
I was making an incorrect call with m_vertexBuffer when rendering it.
This fixes the error with m_vertexBuffer being null.
Previous:
m_d3dContext.Get()->IASetVertexBuffers(0, 1, &m_vertexBuffer, &stride, &offset);
Fix:
m_d3dContext.Get()->IASetVertexBuffers(0, 1, m_vertexBuffer.GetAddressOf(), &stride, &offset);
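The difference matters because WRL's ComPtr overloads operator&: taking &m_vertexBuffer goes through ReleaseAndGetAddressOf(), which releases the buffer the smart pointer was holding (so it later reads as null), while GetAddressOf() only returns the address. In short:
// &m_vertexBuffer               -> ComPtr::operator&, i.e. ReleaseAndGetAddressOf():
//                                  the held buffer is released and becomes null.
// m_vertexBuffer.GetAddressOf() -> returns the address without releasing anything,
//                                  which is what a read-only bind like IASetVertexBuffers needs.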
It doesn't fix the error with the sampler states though:
DrawIndexed: The Pixel Shader unit expects a Sampler to be set at Slot 0