DirectX11 Mapping/Unmapping has no effect on rendered geometry - c++

I have a simple rendering program I just made, but it refuses to draw anything other than the initial vertices. Even when I call Map() and Unmap(), the geometry doesn't change. I have a feeling the problem is with Map() and Unmap(), but I'm not sure. Right now, my initial vertex data consists of one triangle. I then map the vertex buffer with a new set of vertices consisting of two triangles, but they aren't rendered. Only one triangle appears even though I pass 6 as the vertex count in the draw call. Here is my setup code:
VertexData vertexData[] =
{
    {0.0f,0.0f,0.0f,0.0f,0.0f,1.0f,0.0f,0.0f},
    {0.0f,1.0f,0.0f,0.0f,0.0f,1.0f,0.0f,0.0f},
    {1.0f,1.0f,0.0f,0.0f,0.0f,1.0f,0.0f,0.0f}
};
D3D11_BUFFER_DESC bd;
ZeroMemory(&bd,sizeof(bd));
bd.Usage = D3D11_USAGE_DYNAMIC;
bd.ByteWidth = sizeof(VertexData)*256;
bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
bd.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
D3D11_SUBRESOURCE_DATA vertexBufferData;
vertexBufferData.pSysMem = vertexData;
vertexBufferData.SysMemPitch = 0;
vertexBufferData.SysMemSlicePitch = 0;
_device->CreateBuffer(&bd, &vertexBufferData, &_renderBuffer);
Mapping function (data is a std::vector containing six VertexData structs, one per vertex):
D3D11_MAPPED_SUBRESOURCE ms;
ZeroMemory(&ms,sizeof(D3D11_MAPPED_SUBRESOURCE));
_deviceContext->Map(_renderBuffer,NULL,D3D11_MAP_WRITE_DISCARD,NULL,&ms);
memcpy(ms.pData,&data[0],sizeof(VertexData)*data.size());
_deviceContext->Unmap(_renderBuffer,NULL);
And here is my rendering code:
_deviceContext->ClearRenderTargetView(_backBuffer,D3DXCOLOR(0.0f,0.0f,0.0f,1.0f));
_deviceContext->Draw(6,0);
_swapChain->Present(0,0);
EDIT: Disabled backface culling, but the triangle is still not appearing.
Mapping Function
void Render::CopyDataToBuffers(std::vector<VertexData> data)
{
    D3D11_MAPPED_SUBRESOURCE ms;
    ZeroMemory(&ms, sizeof(D3D11_MAPPED_SUBRESOURCE));
    _deviceContext->Map(_renderBuffer, NULL, D3D11_MAP_WRITE_DISCARD, NULL, &ms);
    memcpy(ms.pData, &data[0], sizeof(VertexData) * data.size());
    _deviceContext->Unmap(_renderBuffer, NULL);
}
Calling the mapping function
std::vector<VertexData> vertexDataVec;
vertexDataVec.push_back(vertexData[0]);
vertexDataVec.push_back(vertexData[1]);
vertexDataVec.push_back(vertexData[2]);
vertexDataVec.push_back(vertexData[3]);
vertexDataVec.push_back(vertexData[4]);
vertexDataVec.push_back(vertexData[5]);
Render::GetRender().CopyDataToBuffers(vertexDataVec);

To fix the problem, I just had to create an ID3D11RasterizerState and disable culling.
Here is the setup:
ID3D11RasterizerState* rasterizerState = NULL;
D3D11_RASTERIZER_DESC rd =
{
    D3D11_FILL_SOLID,  // FillMode
    D3D11_CULL_NONE,   // CullMode
    TRUE,              // FrontCounterClockwise
    1,                 // DepthBias
    10.0F,             // DepthBiasClamp
    1.0F,              // SlopeScaledDepthBias
    TRUE               // DepthClipEnable
};
_device->CreateRasterizerState(&rd, &rasterizerState);
_deviceContext->RSSetState(rasterizerState);
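For context, the default rasterizer state culls back faces and treats clockwise winding as front-facing, so the missing triangle was presumably wound the other way. If you would rather keep culling enabled, a possible alternative is to flip the front-face convention instead of disabling culling. This is only a sketch reusing the same _device/_deviceContext members; whether it applies depends on how your vertex data is actually wound:
D3D11_RASTERIZER_DESC rd;
ZeroMemory(&rd, sizeof(rd));
rd.FillMode = D3D11_FILL_SOLID;
rd.CullMode = D3D11_CULL_BACK;    // keep back-face culling
rd.FrontCounterClockwise = TRUE;  // treat counter-clockwise triangles as front faces
rd.DepthClipEnable = TRUE;

ID3D11RasterizerState* rasterizerState = NULL;
if (SUCCEEDED(_device->CreateRasterizerState(&rd, &rasterizerState)))
{
    _deviceContext->RSSetState(rasterizerState);
    rasterizerState->Release();   // the context keeps its own reference
}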

Related

DirectX: Drawing a bitmap image scaled up in the viewport causes low quality?

I'm using DirectX to draw images from RGB data in a buffer. The following is a summary of the code:
// create the vertex buffer
D3D11_BUFFER_DESC bd;
ZeroMemory(&bd, sizeof(bd));
bd.Usage = D3D11_USAGE_DYNAMIC; // write access by the CPU, read by the GPU
bd.ByteWidth = sizeOfOurVertices; // size is the VERTEX struct * pW*pH
bd.BindFlags = D3D11_BIND_VERTEX_BUFFER; // use as a vertex buffer
bd.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE; // allow CPU to write in buffer
dev->CreateBuffer(&bd, NULL, &pVBuffer); // create the buffer
//Create Sample for texture
D3D11_SAMPLER_DESC desc;
desc.Filter = D3D11_FILTER_ANISOTROPIC;
desc.MaxAnisotropy = 16;
ID3D11SamplerState *ppSamplerState = NULL;
dev->CreateSamplerState(&desc, &ppSamplerState);
devcon->PSSetSamplers(0, 1, &ppSamplerState);
//Create list vertices from RGB data buffer
pW = bitmapSource->PixelWidth;
pH = bitmapSource->PixelHeight;
OurVertices = new VERTEX[pW*pH];
vIndex = 0;
unsigned char* curP = rgbPixelsBuff;
for (y = 0; y < pH; y++)
{
    for (x = 0; x < pW; x++)
    {
        OurVertices[vIndex].Color.b = *curP++;
        OurVertices[vIndex].Color.g = *curP++;
        OurVertices[vIndex].Color.r = *curP++;
        OurVertices[vIndex].Color.a = *curP++;
        OurVertices[vIndex].X = x;
        OurVertices[vIndex].Y = y;
        OurVertices[vIndex].Z = 0.0f;
        vIndex++;
    }
}
sizeOfOurVertices = sizeof(VERTEX)* pW*pH;
// copy the vertices into the buffer
D3D11_MAPPED_SUBRESOURCE ms;
devcon->Map(pVBuffer, NULL, D3D11_MAP_WRITE_DISCARD, NULL, &ms); // map the buffer
memcpy(ms.pData, OurVertices, sizeOfOurVertices); // copy the data
devcon->Unmap(pVBuffer, NULL);
// unmap the buffer
// clear the back buffer to a deep blue
devcon->ClearRenderTargetView(backbuffer, D3DXCOLOR(0.0f, 0.2f, 0.4f, 1.0f));
// select which vertex buffer to display
UINT stride = sizeof(VERTEX);
UINT offset = 0;
devcon->IASetVertexBuffers(0, 1, &pVBuffer, &stride, &offset);
// select which primitive type we are using
devcon->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_POINTLIST);
// draw the vertex buffer to the back buffer
devcon->Draw(pW*pH, 0);
// switch the back buffer and the front buffer
swapchain->Present(0, 0);
When the viewport's size is smaller than or equal to the image's size, everything is OK. But when the viewport's size is larger than the image's size, the image quality is very bad.
I've searched and tried desc.Filter = D3D11_FILTER_ANISOTROPIC; as in the code above (I've also tried D3D11_FILTER_MIN_POINT_MAG_MIP_LINEAR and D3D11_FILTER_MIN_LINEAR_MAG_MIP_POINT), but the result is no better. The following images show the result:
Can someone tell me how to fix it?
Many thanks!
You are drawing each pixel as a point using DirectX. It is normal that when the screen size gets bigger, your points will move apart and the quality will be bad. You should draw a textured quad instead, using a texture that you fill with your RGB data and a pixel shader.
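A minimal sketch of that texture approach, assuming rgbPixelsBuff holds pW*pH BGRA pixels and dev/devcon are the same device/context as above (the full-screen quad and the pixel shader that samples the texture are omitted):
// Create an immutable texture directly from the pixel buffer.
D3D11_TEXTURE2D_DESC td;
ZeroMemory(&td, sizeof(td));
td.Width = pW;
td.Height = pH;
td.MipLevels = 1;
td.ArraySize = 1;
td.Format = DXGI_FORMAT_B8G8R8A8_UNORM;   // matches the b,g,r,a byte order above
td.SampleDesc.Count = 1;
td.Usage = D3D11_USAGE_IMMUTABLE;
td.BindFlags = D3D11_BIND_SHADER_RESOURCE;

D3D11_SUBRESOURCE_DATA initData;
initData.pSysMem = rgbPixelsBuff;
initData.SysMemPitch = pW * 4;             // bytes per row
initData.SysMemSlicePitch = 0;

ID3D11Texture2D* tex = NULL;
ID3D11ShaderResourceView* srv = NULL;
dev->CreateTexture2D(&td, &initData, &tex);
dev->CreateShaderResourceView(tex, NULL, &srv);
devcon->PSSetShaderResources(0, 1, &srv);
// Then draw two triangles covering the viewport and sample the texture in the
// pixel shader; the sampler's filter handles the scaling when the viewport is larger.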

DirectX11 HLSL shaders not running

I am a total noob at 3D drawing using DirectX, so I wanted to learn the very basics and attempted to use a mix of every example I stumbled upon on the web.
My first objective is simply to draw a few lines on the screen, but so far the only thing I have managed is to clear the screen with some varying color...
In order to draw my 2D lines, I use HLSL vertex and pixel shaders compiled directly by the 2015 version of Visual Studio into .cso files. (I initially had trouble with the pixel shader but found that its properties have to be set.)
When I use the Visual Studio Graphics Analyzer/Debugger, I can see the IA step which seems to be correct as the lines are drawn. But after this step, I can't see anything more although I can debug step by step in the vertex shader and I see the correct values in position and color parameters.
The main issues, here, are:
In pixel history, I can't see any Draw() call issued on the deviceContext. I can only see ClearRenderTarget()
The pixel shader displays the message "Stage did not run. No output"
I don't know what is wrong in the process. Are the world/view/projection matrices or the depthStencilView mandatory? Did I forget to provide a specific buffer to the swapChain and pipeline? I tried to disable depth, scissor, and culling in the rasterState object, but I can't be sure.
I use a structure for my vertices which is :
#define LINES_NB 1000
struct Point
{
float x, y, z, rhw;
float r, g, b, a;
} lineList[LINES_NB];
Finally, here is the code for the VERTEX SHADER:
struct VIn
{
    float4 position : POSITION;
    float4 color : COLOR;
};

struct VOut
{
    float4 position : SV_POSITION;
    float4 color : COLOR;
};

VOut main(VIn input)
{
    VOut output;
    output.position = input.position;
    output.color = input.color;
    return output;
}
Which I compile with the following line :
/Zi /E"main" /Od /Fo"E:\PATH\VertexShader.cso" /vs"_5_0" /nologo
And the code for the PIXEL SHADER is the following:
struct PIn
{
    float4 position : SV_POSITION;
    float4 color : COLOR;
};

float4 main(PIn input) : SV_TARGET
{
    return input.color;
}
Which I compile with the following line:
/Zi /E"main" /Od /Fo"E:\PATH\PixelShader.cso" /ps"_5_0" /nologo
This is the RASTERIZER STATE creation part:
D3D11_RASTERIZER_DESC rasterDesc;
rasterDesc.AntialiasedLineEnable = false;
rasterDesc.CullMode = D3D11_CULL_NONE;
rasterDesc.DepthBias = 0;
rasterDesc.DepthBiasClamp = 0.0f;
rasterDesc.DepthClipEnable = false;
rasterDesc.FillMode = D3D11_FILL_WIREFRAME;
rasterDesc.FrontCounterClockwise = true;
rasterDesc.MultisampleEnable = false;
rasterDesc.ScissorEnable = false;
rasterDesc.SlopeScaledDepthBias = 0.0f;
result = _device->CreateRasterizerState(&rasterDesc, &_rasterState);
if (FAILED(result))
{
    OutputDebugString("FAILED TO CREATE RASTERIZER STATE.\n");
    HR(result);
    return -1;
}
_immediateContext->RSSetState(_rasterState);
And this is the INPUT LAYOUT registration part (_vertexShaderCode->code contains the contents of vertexShader.cso and _vertexShaderCode->size, the size of those contents):
// create the input layout object
D3D11_INPUT_ELEMENT_DESC ied[] =
{
{ "POSITION", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
HR(_device->CreateInputLayout(ied, sizeof(ied) / sizeof(D3D11_INPUT_ELEMENT_DESC), _vertexShaderCode->code, _vertexShaderCode->size, &_vertexInputLayout));
_immediateContext->IASetInputLayout(_vertexInputLayout);
Where variables are declared as:
struct Shader
{
BYTE *code;
UINT size;
};
ID3D11Device* _device;
ID3D11DeviceContext* _immediateContext;
ID3D11RasterizerState* _rasterState;
ID3D11InputLayout* _vertexInputLayout;
Shader* _vertexShaderCode;
Shader* _pixelShaderCode;
My VERTEX BUFFER is created by calling createLinesBuffer once; renderVertice is then called to map it on every draw call:
void DxDraw::createLinesBuffer(ID3D11Device* device)
{
    D3D11_BUFFER_DESC vertexBufferDesc;
    ZeroMemory(&vertexBufferDesc, sizeof(vertexBufferDesc));
    vertexBufferDesc.Usage = D3D11_USAGE_DYNAMIC;
    vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
    vertexBufferDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
    vertexBufferDesc.ByteWidth = sizeof(Point) * LINES_NB;
    std::cout << "buffer size : " << sizeof(Point) * LINES_NB << std::endl;
    vertexBufferDesc.MiscFlags = 0;
    vertexBufferDesc.StructureByteStride = 0;

    D3D11_SUBRESOURCE_DATA vertexBufferData;
    ZeroMemory(&vertexBufferData, sizeof(vertexBufferData));
    vertexBufferData.pSysMem = lineList;
    std::cout << "lineList : " << lineList << std::endl;
    vertexBufferData.SysMemPitch = 0;
    vertexBufferData.SysMemSlicePitch = 0;

    HR(device->CreateBuffer(&vertexBufferDesc, &vertexBufferData, &_vertexBuffer));
}
void DxDraw::renderVertice(ID3D11DeviceContext* ctx, UINT count, D3D11_PRIMITIVE_TOPOLOGY type)
{
    D3D11_MAPPED_SUBRESOURCE ms;
    ZeroMemory(&ms, sizeof(D3D11_MAPPED_SUBRESOURCE));
    // map the buffer
    HR(ctx->Map(_vertexBuffer, NULL, D3D11_MAP_WRITE_DISCARD, NULL, &ms));
    // copy the data to it
    memcpy(ms.pData, lineList, sizeof(lineList));
    // unmap it
    ctx->Unmap(_vertexBuffer, NULL);
    // select which vertex buffer to display
    UINT stride = sizeof(Point);
    UINT offset = 0;
    ctx->IASetVertexBuffers(0, 1, &_vertexBuffer, &stride, &offset);
    // select which primitive type we are using
    ctx->IASetPrimitiveTopology(type);
    // draw the vertex buffer to the back buffer
    ctx->Draw(count, 0);
}
There are many things that might have gone wrong here, potentially you can check:
is Input Layout properly declared? It looks like your Vertex Shader doesn't get any geometry
how do you declare your Rasterizer Stage? Sometime there might be some issues, like culling, depth clipping, etc.
what does your geometry look like? Do you apply any world/view/projection transformation before passing geometry to the Input Assembler?
how do you construct your Vertex Buffer? Do you Map/Unmap it? Or maybe you construct this for every drawcall?
I cannot guarantee that this will help, but IMHO all of this is worth checking out.
As for no output from the Pixel Shader, it seems that nothing was fed to it - so either there is something wrong with the VS output, or the RS clipped all the geometry somehow (because of culling, depth testing, etc.)
Copied from comment, since it solved the issue.
InputLayout looks OK, and VertexBuffer looks OK too. At this point, I would check the actual vertex coordinates. From your screenshot, it looks like you're using pretty big numbers, like x = 271, y = 147. Normally, those positions are transformed via a World-View-Projection transformation, so they end up in the <-1.0f; 1.0f> range. Since you're not using any transformations, I would recommend changing your lineGenerator function so it generates geometry in the <-1.0f; 1.0f> range for the x and y coordinates. For example, if your current x value is 271, make it 0.271f.
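For illustration, a small helper along those lines (a sketch; viewportWidth/viewportHeight are assumed to be the back-buffer dimensions, not names from the question):
// Convert pixel coordinates (e.g. x = 271, y = 147) into the
// [-1, 1] clip-space range expected when no transform is applied.
void PixelToNdc(float px, float py, float viewportWidth, float viewportHeight,
                float& ndcX, float& ndcY)
{
    ndcX = (px / viewportWidth) * 2.0f - 1.0f;   // left = -1, right = +1
    ndcY = 1.0f - (py / viewportHeight) * 2.0f;  // top = +1, bottom = -1 (y is flipped)
}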

Wrong depth buffer (to texture) output?

For the SSAO effect I have to generate two textures: normals (in view space) and depth.
I decided to use the depth buffer as a texture, following the Microsoft tutorial (the Reading the Depth-Stencil Buffer as a Texture chapter).
Unfortunately, after rendering I got no information from the depth buffer (the lower image):
I guess that's not right. And what is strange is that the depth buffer seems to work (I get the right order of faces etc.).
The depth buffer code:
//create depth stencil texture (depth buffer)
D3D11_TEXTURE2D_DESC descDepth;
ZeroMemory(&descDepth, sizeof(descDepth));
descDepth.Width = width;
descDepth.Height = height;
descDepth.MipLevels = 1;
descDepth.ArraySize = 1;
descDepth.Format = DXGI_FORMAT_R24G8_TYPELESS;
descDepth.SampleDesc.Count = antiAliasing.getCount();
descDepth.SampleDesc.Quality = antiAliasing.getQuality();
descDepth.Usage = D3D11_USAGE_DEFAULT;
descDepth.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
descDepth.CPUAccessFlags = 0;
descDepth.MiscFlags = 0;
ID3D11Texture2D* depthStencil = NULL;
result = device->CreateTexture2D(&descDepth, NULL, &depthStencil);
ERROR_HANDLE(SUCCEEDED(result), L"Could not create depth stencil texture.", MOD_GRAPHIC);
D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;
//setup the description of the shader resource view
shaderResourceViewDesc.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
shaderResourceViewDesc.ViewDimension = antiAliasing.isOn() ? D3D11_SRV_DIMENSION_TEXTURE2DMS : D3D11_SRV_DIMENSION_TEXTURE2D;
shaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
shaderResourceViewDesc.Texture2D.MipLevels = 1;
//create the shader resource view.
ERROR_HANDLE(SUCCEEDED(device->CreateShaderResourceView(depthStencil, &shaderResourceViewDesc, &depthStencilShaderResourceView)),
L"Could not create shader resource view for depth buffer.", MOD_GRAPHIC);
createDepthStencilStates();
//set the depth stencil state.
context->OMSetDepthStencilState(depthStencilState3D, 1);
D3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDesc;
// Initialize the depth stencil view.
ZeroMemory(&depthStencilViewDesc, sizeof(depthStencilViewDesc));
// Set up the depth stencil view description.
depthStencilViewDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthStencilViewDesc.ViewDimension = antiAliasing.isOn() ? D3D11_DSV_DIMENSION_TEXTURE2DMS : D3D11_DSV_DIMENSION_TEXTURE2D;
depthStencilViewDesc.Texture2D.MipSlice = 0;
//depthStencilViewDesc.Flags = D3D11_DSV_READ_ONLY_DEPTH;
// Create the depth stencil view.
result = device->CreateDepthStencilView(depthStencil, &depthStencilViewDesc, &depthStencilView);
ERROR_HANDLE(SUCCEEDED(result), L"Could not create depth stencil view.", MOD_GRAPHIC);
After rendering the first pass, I set the depth stencil as a texture resource along with the other render targets (color, normals), appending it to the array:
ID3D11ShaderResourceView ** textures = new ID3D11ShaderResourceView *[targets.size()+1];
for (unsigned i = 0; i < targets.size(); i++) {
textures[i] = targets[i]->getShaderResourceView();
}
textures[targets.size()] = depthStencilShaderResourceView;
context->PSSetShaderResources(0, targets.size()+1, textures);
Before the second pass I call context->OMSetRenderTargets(1, &myRenderTargetView, NULL); to unbind the depth buffer (so I can use it as a texture).
Then I render my textures (the render targets from the first pass + the depth buffer) with a trivial post-process shader, just for debugging purposes (second pass):
Texture2D ColorTexture[3];
SamplerState ObjSamplerState;

float4 main(VS_OUTPUT input) : SV_TARGET0
{
    float4 Color;
    Color = float4(0, 1, 1, 1);
    float2 textureCoordinates = input.textureCoordinates.xy * 2;
    if (input.textureCoordinates.x < 0.5f && input.textureCoordinates.y < 0.5f) {
        Color = ColorTexture[0].Sample(ObjSamplerState, textureCoordinates);
    }
    if (input.textureCoordinates.x > 0.5f && input.textureCoordinates.y < 0.5f) {
        textureCoordinates.x -= 0.5f;
        Color = ColorTexture[1].Sample(ObjSamplerState, textureCoordinates);
    }
    if (input.textureCoordinates.x < 0.5f && input.textureCoordinates.y > 0.5f) { //depth texture
        textureCoordinates.y -= 0.5f;
        Color = ColorTexture[2].Sample(ObjSamplerState, textureCoordinates);
    }
    ...
It works fine for the normals texture. Why doesn't it work for the depth buffer (as a shader resource view)?
As per comments:
The texture was rendered and sampled correctly but the data appeared to be uniformly red due to the data lying between 0.999 and 1.0f.
There are a few things you can do to improve the available depth precision, but the simplest of which is to simply ensure your near and far clip distances are not excessively small/large for the scene you're drawing.
Assuming metres are your unit, a near clip of 0.1 (10cm) and a far clip of 200 (metres) are much more reasonable than 1cm and 20km.
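With DirectXMath, that might look like the following (a sketch; the field of view, aspect ratio, and the 0.1/200 clip distances are illustrative values, not taken from the question):
#include <DirectXMath.h>
using namespace DirectX;

// Near/far planes sized for a metre-scale scene: 10 cm to 200 m.
float aspectRatio = 1280.0f / 720.0f;   // example viewport dimensions
XMMATRIX proj = XMMatrixPerspectiveFovLH(
    XM_PIDIV4,    // vertical field of view
    aspectRatio,  // viewport width / height
    0.1f,         // near clip
    200.0f);      // far clip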
Even so, don't expect to see too many black/dark areas, the non linear nature of a z-buffer is still going to mean most of your depth values are shunted up towards 1. If visualisation of the depth buffer is important then simply rescale the data to the normalised 0-1 range before displaying it.

Direct3D multiple vertex buffers, non interleaved elements

I'm trying to create 2 vertex buffers, one that only stores positions and another that only stores colors. This is just an exercise from Frank Luna's book to become familiar with vertex description, layouts and buffer options. The problem is that I feel like I have made all the relevant changes and although I am getting the geometry to show, the color buffer is not working. To complete the exercise, instead of using the book's Vertex vertices[]={{...},{...},...} from the example code where each Vertex stores the position and the color, I am using XMFLOAT3 pos[]={x,y,z...} for positions and XMFLOAT4 colors[]={a,b,c,...}. I have modified the vertex description to make each element reference the correct input spot:
D3D11_INPUT_ELEMENT_DESC vertexDesc[] =
{
{"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0},
{"COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 0, D3D11_INPUT_PER_VERTEX_DATA, 0}
};
I've also changed the vertex buffer binding to expect two vertex buffers instead of just mBoxVB, which was previously a resource based on Vertex vertices[]:
ID3D11Buffer* mBoxVB;
ID3D11Buffer* mBoxVBCol;
ID3D11Buffer* combined[2];
...
combined[0]=mBoxVB;
combined[1]=mBoxVBCol;
...
UINT stride[] = {sizeof(XMFLOAT3), sizeof(XMFLOAT4)};
UINT offset = 0;
md3dImmediateContext->IASetVertexBuffers(0, 2, combined, stride, &offset);
If of any help, this is the buffer creation code:
D3D11_BUFFER_DESC vbd;
vbd.Usage = D3D11_USAGE_IMMUTABLE;
vbd.ByteWidth = sizeof(XMFLOAT3)*8;
vbd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
vbd.CPUAccessFlags = 0;
vbd.MiscFlags = 0;
vbd.StructureByteStride = 0;
D3D11_SUBRESOURCE_DATA vinitData;
vinitData.pSysMem = Shapes::boxEx1Pos;
HR(md3dDevice->CreateBuffer(&vbd, &vinitData, &mBoxVB));
D3D11_BUFFER_DESC cbd;
vbd.Usage = D3D11_USAGE_IMMUTABLE;
vbd.ByteWidth = sizeof(XMFLOAT4)*8;
vbd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
vbd.CPUAccessFlags = 0;
vbd.MiscFlags = 0;
vbd.StructureByteStride = 0;
D3D11_SUBRESOURCE_DATA cinitData;
cinitData.pSysMem = Shapes::boxEx1Col;
HR(md3dDevice->CreateBuffer(&vbd, &cinitData, &mBoxVBCol));
If I use combined[0]=mBoxVBCol instead of combined[0]=mBoxVB, I get a different shape, so I know there is some data there, but I'm not sure why the Color element is not getting sent through to the vertex shader. It seems to me that the second slot is being ignored in the operation.
I found the answer to the problem. It was simply that although I was specifying the stride array for both buffers, I was only giving one offset, so once I changed UINT offset=0; to UINT offset[]={0,0}; it worked.
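In other words, the corrected binding looks something like this (a sketch pieced together from the snippets above):
ID3D11Buffer* combined[2] = { mBoxVB, mBoxVBCol };
UINT strides[] = { sizeof(XMFLOAT3), sizeof(XMFLOAT4) };  // one stride per slot
UINT offsets[] = { 0, 0 };                                // one offset per slot
md3dImmediateContext->IASetVertexBuffers(0, 2, combined, strides, offsets);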

DirectX 11: How to outline a bounding box? [closed]

In a 2d platform game, I want to visualize the bounding box that I have created to aid debugging. How can I accomplish this in Visual C++ 2012?
First define a simple vertex structure:
struct Vertex
{
D3DXVECTOR3 position; //a 3D point even in 2D rendering
};
Now you can create vertex and index arrays:
Vertex *vertices;
unsigned long *indices;
D3D11_BUFFER_DESC vertexBufferDesc, indexBufferDesc;
D3D11_SUBRESOURCE_DATA vertexData, indexData;
//create the vertex array
vertices = new Vertex[5];
if(!vertices)
{
    //handle error
}
//load the vertex array with data
vertices[0].position = D3DXVECTOR3(left, top, 0.0f);
vertices[1].position = D3DXVECTOR3(right, top, 0.0f);
vertices[2].position = D3DXVECTOR3(right, bottom, 0.0f);
vertices[3].position = D3DXVECTOR3(left, bottom, 0.0f);
vertices[4].position = D3DXVECTOR3(left, top, 0.0f);
//create the index array
indices = new unsigned long[5];
if(!indices)
{
    //handle error
}
//load the index array with data
for(i=0; i<5; i++)
    indices[i] = i;
And load them into buffers:
ID3D11Buffer *vertexBuffer, *indexBuffer;
HRESULT result;
//set up the description of the dynamic vertex buffer
vertexBufferDesc.Usage = D3D11_USAGE_DYNAMIC; //enables recreation and movement of vertices
vertexBufferDesc.ByteWidth = sizeof(Vertex) * 5;
vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
vertexBufferDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE; //couples with dynamic
vertexBufferDesc.MiscFlags = 0;
vertexBufferDesc.StructureByteStride = 0;
//give the subresource structure a pointer to the vertex data
vertexData.pSysMem = vertices;
vertexData.SysMemPitch = 0;
vertexData.SysMemSlicePitch = 0;
//now create the vertex buffer
result = device->CreateBuffer(&vertexBufferDesc, &vertexData, &vertexBuffer);
if(FAILED(result))
{
    //handle error
}
//set up the description of the static index buffer
indexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
indexBufferDesc.ByteWidth = sizeof(unsigned long) * 5;
indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;
indexBufferDesc.CPUAccessFlags = 0;
indexBufferDesc.MiscFlags = 0;
indexBufferDesc.StructureByteStride = 0;
//give the subresource structure a pointer to the index data
indexData.pSysMem = indices;
indexData.SysMemPitch = 0;
indexData.SysMemSlicePitch = 0;
//create the index buffer
result = device->CreateBuffer(&indexBufferDesc, &indexData, &indexBuffer);
if(FAILED(result))
{
    //handle error
}
Set up the rectangle to be rendered like this:
unsigned int stride = sizeof(Vertex);
unsigned int offset = 0;
deviceContext->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);
deviceContext->IASetIndexBuffer(indexBuffer, DXGI_FORMAT_R32_UINT, 0);
deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_LINESTRIP);
Now render with the shader of your choice, remembering to pass the orthographic matrix to the shader instead of the perspective matrix. Voilà! Rectangle. But you can't move it yet... You have to define another function to do that:
bool UpdateRectBuffers(ID3D11Buffer *vertexBuffer, ID3D11DeviceContext *deviceContext, float top, float left, float bottom, float right)
{
    Vertex *vertices;
    D3D11_MAPPED_SUBRESOURCE mappedResource;
    Vertex *verticesPtr;
    HRESULT result;

    //create a temporary vertex array to fill with the updated data
    vertices = new Vertex[5];
    if(!vertices)
    {
        return false;
    }
    vertices[0].position = D3DXVECTOR3(left, top, 0.0f);
    vertices[1].position = D3DXVECTOR3(right, top, 0.0f);
    vertices[2].position = D3DXVECTOR3(right, bottom, 0.0f);
    vertices[3].position = D3DXVECTOR3(left, bottom, 0.0f);
    vertices[4].position = D3DXVECTOR3(left, top, 0.0f);

    //lock the vertex buffer so it can be written to
    result = deviceContext->Map(vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
    if(FAILED(result))
    {
        delete [] vertices; //don't leak the temporary array on failure
        return false;
    }
    verticesPtr = (Vertex*)mappedResource.pData;

    //copy the data into the vertex buffer
    memcpy(verticesPtr, (void*)vertices, (sizeof(Vertex) * 5));
    deviceContext->Unmap(vertexBuffer, 0);

    delete [] vertices;
    vertices = 0;
    return true;
}
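A typical per-frame use of that function might then be (a sketch; the top/left/bottom/right values are whatever your collision code produces for the current frame):
// Move the outline to the sprite's current bounds, then draw the 5-index line strip.
float top = 10.0f, left = 10.0f, bottom = 74.0f, right = 74.0f; // example bounds
UpdateRectBuffers(vertexBuffer, deviceContext, top, left, bottom, right);
deviceContext->DrawIndexed(5, 0, 0);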
The dependencies of this code are float top, float left, float bottom, float right, ID3D11DeviceContext *deviceContext, and ID3D11Device *device.
As you did not describe well what you are already able to do, my answer is based on some assumptions.
Assumptions
So, I'm assuming that
you are able to draw a sprite (i.e. a colored/textured rectangle, i.e. a quad of vertices / pair of triangles)
you already have data that defines the bounding volume (in any of several ways)
and so I will not explain how to do those.
Possible solutions
Variant 1:
You don't need anything special to draw edges. "Edges" (straight lines) are just long, thin rectangles. So you need to put 4 thin rectangles where your edges should be.
This way you can choose the thickness and color of the line, and even use a texture (dotted lines, dashed lines, lines with pink kittens, etc.) or shader effects, like procedural coloring, smoothing, blur, etc. Anyway, you will likely need lines for your game.
Variant 2:
You can draw lines instead of triangles. Use "line list" primitive topology, instead of "triangle list". (See: ID3D11DeviceContext::IASetPrimitiveTopology(), D3D11_PRIMITIVE_TOPOLOGY_LINELIST).
This way you cannot customize things. But this is much easier.
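A sketch of that variant, assuming the vertex buffer holds 8 vertices laid out as 4 independent edges (deviceContext as in the first answer):
deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_LINELIST);
deviceContext->Draw(8, 0);  // 8 vertices = 4 separate line segments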
Variant 3:
Draw the rectangle in wireframe mode. Just set the rasterizer state's fill mode. See: ID3D11DeviceContext::RSSetState, D3D11_RASTERIZER_DESC::FillMode, D3D11_FILL_MODE::D3D11_FILL_WIREFRAME
You'll get the edges of the triangles, including the ones that are diagonals of the rectangle. This way you cannot set either thickness or color. But this way is really, really simple.
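A sketch of that rasterizer state (reusing the device/deviceContext names from the first answer):
D3D11_RASTERIZER_DESC rd;
ZeroMemory(&rd, sizeof(rd));
rd.FillMode = D3D11_FILL_WIREFRAME;  // draw triangle edges only
rd.CullMode = D3D11_CULL_NONE;
rd.DepthClipEnable = TRUE;

ID3D11RasterizerState* wireframeState = NULL;
device->CreateRasterizerState(&rd, &wireframeState);
deviceContext->RSSetState(wireframeState);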
Variant 4:
Use any 2D drawing library that will do Variant 1 for you. As D3DX is obsolete, it is not recommended to use D3DXLine anymore. You can try the DirectX Tool Kit or any library available on the web.
Obviously you'll get additional dependency.
P.S.
If my initial assumptions were incorrect, then you'll not get an answer here. No one on StackOverflow will explain such basic things to you. To correct the situation:
There are a zillion ways to draw a rectangle. Pick any tutorial online (e.g. rastertek, braynzarsoft) to get an idea of what's happening in the "Possible solutions" part of this answer.
There are a zillion ways to calculate a bounding rectangle for each of the zillion ways of defining it. Note that to define a rectangle in 2D space, you need at least 2 points, and to define a 2D point, you need 2 coordinate values. So, 4 values per rectangle. Google it or pick up a math book for additional info.
Hope it helps!