DirectX11 HLSL shaders not running - c++

I am a total noob at 3D drawing with DirectX, so I wanted to learn the very basics and attempted to use a mix of every example I stumbled upon on the web.
My first objective is simply to draw a few lines on the screen, but so far the only thing I have managed to do is clear the screen with a varying color...
In order to draw my 2D lines, I use HLSL vertex and pixel shaders compiled directly by Visual Studio 2015 into .cso files. (I initially had trouble with the pixel shader, but eventually found that its file properties have to be set.)
When I use the Visual Studio Graphics Analyzer/Debugger, I can see the IA stage, which seems to be correct since the lines are drawn there. But after this stage I can't see anything more, although I can step through the vertex shader and I see the correct values in the position and color parameters.
The main issues here are:
In pixel history, I can't see any Draw() call issued on the device context; I can only see ClearRenderTarget().
The pixel shader displays the message "Stage did not run. No output".
I don't know what is wrong in the process. Are the world/view/projection matrices or the depth stencil view mandatory? Did I forget to provide a specific buffer to the swap chain or pipeline? I tried to disable depth, scissor and culling in the rasterizer state object, but I can't be sure.
I use the following structure for my vertices:
#define LINES_NB 1000
struct Point
{
float x, y, z, rhw;
float r, g, b, a;
} lineList[LINES_NB];
Finally, here is the code for the VERTEX SHADER:
struct VIn
{
float4 position : POSITION;
float4 color : COLOR;
};
struct VOut
{
float4 position : SV_POSITION;
float4 color : COLOR;
};
VOut main(VIn input)
{
VOut output;
output.position = input.position;
output.color = input.color;
return output;
}
Which I compile with the following command line:
/Zi /E"main" /Od /Fo"E:\PATH\VertexShader.cso" /vs"_5_0" /nologo
And the code for the PIXEL SHADER is the following:
struct PIn
{
float4 position : SV_POSITION;
float4 color : COLOR;
};
float4 main(PIn input) : SV_TARGET
{
return input.color;
}
Which I compile with the following line:
/Zi /E"main" /Od /Fo"E:\PATH\PixelShader.cso" /ps"_5_0" /nologo
This is the RASTERIZER STATE creation part:
D3D11_RASTERIZER_DESC rasterDesc;
rasterDesc.AntialiasedLineEnable = false;
rasterDesc.CullMode = D3D11_CULL_NONE;
rasterDesc.DepthBias = 0;
rasterDesc.DepthBiasClamp = 0.0f;
rasterDesc.DepthClipEnable = false;
rasterDesc.FillMode = D3D11_FILL_WIREFRAME;
rasterDesc.FrontCounterClockwise = true;
rasterDesc.MultisampleEnable = false;
rasterDesc.ScissorEnable = false;
rasterDesc.SlopeScaledDepthBias = 0.0f;
result = _device->CreateRasterizerState(&rasterDesc, &_rasterState);
if (FAILED(result))
{
OutputDebugString("FAILED TO CREATE RASTERIZER STATE.\n");
HR(result);
return -1;
}
_immediateContext->RSSetState(_rasterState);
And this is the INPUT LAYOUT registration part (_vertexShaderCode->code contains the contents of vertexShader.cso and _vertexShaderCode->size, the size of those contents):
// create the input layout object
D3D11_INPUT_ELEMENT_DESC ied[] =
{
{ "POSITION", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
HR(_device->CreateInputLayout(ied, sizeof(ied) / sizeof(D3D11_INPUT_ELEMENT_DESC), _vertexShaderCode->code, _vertexShaderCode->size, &_vertexInputLayout));
_immediateContext->IASetInputLayout(_vertexInputLayout);
Where variables are declared as:
struct Shader
{
BYTE *code;
UINT size;
};
ID3D11Device* _device;
ID3D11DeviceContext* _immediateContext;
ID3D11RasterizerState* _rasterState;
ID3D11InputLayout* _vertexInputLayout;
Shader* _vertexShaderCode;
Shader* _pixelShaderCode;
My VERTEX BUFFER is created by calling createLinesBuffer once; renderVertice is then called to map it at every draw call:
void DxDraw::createLinesBuffer(ID3D11Device* device)
{
D3D11_BUFFER_DESC vertexBufferDesc;
ZeroMemory(&vertexBufferDesc, sizeof(vertexBufferDesc));
vertexBufferDesc.Usage = D3D11_USAGE_DYNAMIC;
vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
vertexBufferDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
vertexBufferDesc.ByteWidth = sizeof(Point) * LINES_NB;
std::cout << "buffer size : " << sizeof(Point) * LINES_NB << std::endl;
vertexBufferDesc.MiscFlags = 0;
vertexBufferDesc.StructureByteStride = 0;
D3D11_SUBRESOURCE_DATA vertexBufferData;
ZeroMemory(&vertexBufferData, sizeof(vertexBufferData));
vertexBufferData.pSysMem = lineList;
std::cout << "lineList : " << lineList << std::endl;
vertexBufferData.SysMemPitch = 0;
vertexBufferData.SysMemSlicePitch = 0;
HR(device->CreateBuffer(&vertexBufferDesc, &vertexBufferData, &_vertexBuffer));
}
void DxDraw::renderVertice(ID3D11DeviceContext* ctx, UINT count, D3D11_PRIMITIVE_TOPOLOGY type)
{
D3D11_MAPPED_SUBRESOURCE ms;
ZeroMemory(&ms, sizeof(D3D11_MAPPED_SUBRESOURCE));
// map the buffer
HR(ctx->Map(_vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &ms));
// copy the data to it
memcpy(ms.pData, lineList, sizeof(lineList));
// unmap it
ctx->Unmap(_vertexBuffer, 0);
// select which vertex buffer to display
UINT stride = sizeof(Point);
UINT offset = 0;
ctx->IASetVertexBuffers(0, 1, &_vertexBuffer, &stride, &offset);
// select which primitive type we are using
ctx->IASetPrimitiveTopology(type);
// draw the vertex buffer to the back buffer
ctx->Draw(count, 0);
}

There are many things that might have gone wrong here. Some things you can potentially check:
is the Input Layout properly declared? It looks like your Vertex Shader doesn't get any geometry
how do you declare your Rasterizer Stage? Sometimes there can be issues there, like culling, depth clipping, etc.
what does your geometry look like? Do you apply any world/view/projection transformation before feeding the geometry to the Input Assembler?
how do you construct your Vertex Buffer? Do you Map/Unmap it? Or maybe you reconstruct it for every draw call?
I cannot guarantee that this will help, but IMHO all of this is worth checking out.
As for there being no output from the Pixel Shader, it seems that nothing was fed to it, so either something is wrong with the VS output or the rasterizer stage clipped all the geometry somehow (because of culling, depth testing, etc.).

Copied from comment, since it solved the issue.
The InputLayout looks OK, and the VertexBuffer looks OK too. At this point, I would check the actual vertex coordinates. From your screenshot, it looks like you're using pretty big numbers, like x = 271, y = 147. Normally those positions are transformed via a World-View-Projection transformation so that they end up in the <-1.0f; 1.0f> range. Since you're not using any transformations, I would recommend changing your lineGenerator function so it generates geometry in the <-1.0f; 1.0f> range for the x and y coordinates. For example, if your current x value is 271, make it 0.271f.
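For illustration, here is a minimal sketch of that normalization, assuming a hypothetical window size and a helper that is not part of the original code. The vertex shader above passes positions straight through to SV_POSITION, so x and y must already be in clip space and rhw should stay at 1.0f:
// Convert pixel coordinates to normalized device coordinates before filling lineList.
// WINDOW_WIDTH, WINDOW_HEIGHT and setLinePoint are assumptions for this example.
const float WINDOW_WIDTH = 800.0f;
const float WINDOW_HEIGHT = 600.0f;

void setLinePoint(Point& p, float xPixel, float yPixel, float r, float g, float b)
{
    p.x = (xPixel / WINDOW_WIDTH) * 2.0f - 1.0f;   // left edge -> -1, right edge -> +1
    p.y = 1.0f - (yPixel / WINDOW_HEIGHT) * 2.0f;  // top edge -> +1, bottom edge -> -1 (y flips)
    p.z = 0.0f;
    p.rhw = 1.0f;                                  // ends up in SV_POSITION.w unchanged
    p.r = r; p.g = g; p.b = b; p.a = 1.0f;
}
So a point at pixel (271, 147) in an 800x600 window becomes roughly (-0.3225f, 0.51f).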

Related

How do you update the vertex buffer or constant buffer attached to a sprite to move it smoothly across the screen in Direct3D 11?

I have attached a texture to a set of 4 indexed vertices which are stored in a dynamic vertex buffer. I have also added a translation matrix to the constant buffer of the vertex shader. However, when I update the constant buffer to alter the translation matrix so that I can move the sprite, the sprite does not move smoothly. It stops randomly for short amounts of time before moving a short distance again.
Below are the render functions, the main loop and the shaders being used:
void Sprite::Render(ID3D11DeviceContext* devcon, float dt) {
// 2D rendering on backbuffer here
UINT stride = sizeof(VERTEX);
UINT offset = 0;
spr_const_data.translateMatrix.r[3].m128_f32[0] += 60.0f*dt;
devcon->IASetInputLayout(m_data_layout);
devcon->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
devcon->VSSetShader(m_spr_vert_shader, 0, 0);
devcon->PSSetShader(m_spr_pixel_shader, 0, 0);
devcon->VSSetConstantBuffers(0, 1, &m_spr_const_buffer);
devcon->PSSetSamplers(0, 1, &m_tex_sampler_state);
devcon->PSSetShaderResources(0, 1, &m_shader_resource_view);
// select vertex and index buffers
devcon->IASetIndexBuffer(m_sprite_index_buffer, DXGI_FORMAT_R32_UINT, offset);
devcon->IASetVertexBuffers(0, 1, &m_sprite_vertex_buffer, &stride, &offset);
D3D11_MAPPED_SUBRESOURCE ms;
ZeroMemory(&ms, sizeof(D3D11_MAPPED_SUBRESOURCE));
devcon->Map(m_spr_const_buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &ms);
memcpy(ms.pData, &spr_const_data, sizeof(spr_const_data));
devcon->Unmap(m_spr_const_buffer, 0);
// select which primitive type to use
// draw vertex buffer to backbuffer
devcon->DrawIndexed(6, 0, 0);
}
void RenderFrame(float dt) {
float background_color[] = { 1.0f, 1.0f, 1.0f, 1.0f };
// clear backbuffer
devcon->ClearRenderTargetView(backbuffer, background_color);
knight->Render(devcon, dt);
// switch back and front buffer
swapchain->Present(0, 0);
}
void MainLoop() {
MSG msg;
auto tp1 = std::chrono::system_clock::now();
auto tp2 = std::chrono::system_clock::now();
while (GetMessage(&msg, nullptr, 0, 0) > 0) {
tp2 = std::chrono::system_clock::now();
std::chrono::duration<float> dt = tp2 - tp1;
tp1 = tp2;
TranslateMessage(&msg);
DispatchMessage(&msg);
RenderFrame(dt.count());
}
}
cbuffer CONST_BUFFER_DATA : register(b0)
{
matrix orthoMatrix;
matrix translateMatrix;
};
struct VOut {
float4 position : SV_POSITION;
float2 tex : TEXCOORD0;
};
VOut VShader(float4 position : POSITION, float2 tex : TEXCOORD0) {
VOut output;
position = mul(translateMatrix, position);
output.position = mul(orthoMatrix, position);
output.tex = tex;
return output;
}
Texture2D square_tex;
SamplerState tex_sampler;
float4 PShader(float4 position : SV_POSITION, float2 tex : TEXCOORD0) : SV_TARGET
{
float4 tex_col = square_tex.Sample(tex_sampler, tex);
return tex_col;
}
I have also included the initialization of the swap chain and back buffer in case the mistake lies there.
DXGI_SWAP_CHAIN_DESC scd; // hold swap chain information
ZeroMemory(&scd, sizeof(DXGI_SWAP_CHAIN_DESC));
// fill swap chain description struct
scd.BufferCount = 1; // one back buffer
scd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; // use 32-bit color
scd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; // how swap chain is to be used (draw into back buffer)
scd.OutputWindow = hWnd; // window to be used
scd.SampleDesc.Count = 1; // how many multisamples
scd.Windowed = true; // windowed/full screen
// create device, device context and swap chain using scd
D3D11CreateDeviceAndSwapChain(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, D3D11_CREATE_DEVICE_DEBUG, nullptr, NULL, D3D11_SDK_VERSION, &scd, &swapchain, &dev, nullptr, &devcon);
// get address of back buffer
ID3D11Texture2D* pBackBuffer;
swapchain->GetBuffer(0, __uuidof(ID3D11Texture2D), (LPVOID*)&pBackBuffer);
// use back buffer address to create render target
dev->CreateRenderTargetView(pBackBuffer, nullptr, &backbuffer);
pBackBuffer->Release();
// set the render target as the backbuffer
devcon->OMSetRenderTargets(1, &backbuffer, nullptr);
// Set the viewport
D3D11_VIEWPORT viewport;
ZeroMemory(&viewport, sizeof(D3D11_VIEWPORT));
viewport.TopLeftX = 0;
viewport.TopLeftY = 0;
viewport.Width = WINDOW_WIDTH;
viewport.Height = WINDOW_HEIGHT;
devcon->RSSetViewports(1, &viewport); // activates viewport
I have also tried getting a pointer to the vertex data via the pData member of the D3D11_MAPPED_SUBRESOURCE object and then casting it to a VERTEX* to manipulate the data, but the problem persists. I would like to know how to move the sprite smoothly across the window.
I solved the problem by writing a fixed FPS game loop and giving the interpolated values between the previous and current states to the render function. I was also updating the constant buffer in an incorrect manner.
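For reference, here is a minimal sketch of such a fixed-timestep loop with interpolation. The 60 Hz step, the PeekMessage-based message pump and the Update()/RenderFrame(alpha) split are assumptions, not the exact code used:
// Fixed-timestep simulation with interpolated rendering (sketch).
void MainLoop() {
    const float step = 1.0f / 60.0f;          // fixed simulation step
    float accumulator = 0.0f;
    auto previous = std::chrono::system_clock::now();
    MSG msg = {};
    bool running = true;
    while (running) {
        // Pump all pending messages without blocking (GetMessage blocks until a message arrives).
        while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE)) {
            if (msg.message == WM_QUIT) running = false;
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        auto now = std::chrono::system_clock::now();
        accumulator += std::chrono::duration<float>(now - previous).count();
        previous = now;
        while (accumulator >= step) {
            Update(step);                      // advances the sprite's previous/current states
            accumulator -= step;
        }
        float alpha = accumulator / step;      // 0..1 between the two simulated states
        RenderFrame(alpha);                    // renders positions interpolated by alpha
    }
}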

Cubemap texturing issue (D3D11, C++)

I have a texture problem with the cubemap I'm rendering and can't seem to figure it out. I've generated a cube map with DirectX's texture tools and then read it using
D3DX11CreateShaderResourceViewFromFile(device, L"cubemap.dds", 0, 0, &fullcubemap, 0);
The cubemap texture is not high quality at all and it looks really stretched/distorted. I can definitely tell that the images used for the cubemap match up correctly, but it doesn't look good at all at the moment.
I'm not sure why this is happening. Is it because my textures are too large/small, or is it something else? If it's due to the size of the textures, what is a recommended texture size? I am using a sphere for the cubemap, not a cube.
Edit:
Shader:
cbuffer SkyboxConstantBuffer {
float4x4 world;
float4x4 view;
float4x4 projection;
};
TextureCube gCubeMap;
SamplerState samTriLinearSam {
Filter = MIN_MAG_MIP_LINEAR;
AddressU = Wrap;
AddressV = Wrap;
};
struct VertexIn {
float4 position : POSITION;
};
struct VertexOut {
float4 position : SV_POSITION;
float4 spherePosition : POSITION;
};
VertexOut VS(VertexIn vin) {
VertexOut vout = (VertexOut)0;
vin.position.w = 1.0f;
vout.position = mul(vin.position, world);
vout.position = mul(vout.position, view);
vout.position = mul(vout.position, projection);
vout.spherePosition = vin.position;
return vout;
}
float4 PS(VertexOut pin) : SV_Target {
return gCubeMap.Sample(samTriLinearSam, pin.spherePosition);//float4(1.0, 0.5, 0.5, 1.0);
}
RasterizerState NoCull {
CullMode = None;
};
DepthStencilState LessEqualDSS {
DepthFunc = LESS_EQUAL;
};
technique11 SkyTech {
pass p0 {
SetVertexShader(CompileShader(vs_4_0, VS()));
SetGeometryShader(NULL);
SetPixelShader(CompileShader(ps_4_0, PS()));
SetRasterizerState(NoCull);
SetDepthStencilState(LessEqualDSS, 0);
}
}
Draw:
immediateContext->OMSetRenderTargets(1, &renderTarget, nullptr);
XMMATRIX sworld, sview, sprojection;
SkyboxConstantBuffer scb;
sview = XMLoadFloat4x4(&_view);
sprojection = XMLoadFloat4x4(&_projection);
sworld = XMLoadFloat4x4(&_world);
scb.world = sworld;
scb.view = sview;
scb.projection = sprojection;
immediateContext->IASetIndexBuffer(cubeMapSphere->getIndexBuffer(), DXGI_FORMAT_R32_UINT, 0);
ID3D11Buffer* vertexBuffer = cubeMapSphere->getVertexBuffer();
//ID3DX11EffectShaderResourceVariable * cMap;
////cMap = skyboxShader->GetVariableByName("gCubeMap")->AsShaderResource();
immediateContext->PSSetShaderResources(0, 1, &fullcubemap);//textures
//cMap->SetResource(fullcubemap);
immediateContext->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);
immediateContext->VSSetShader(skyboxVertexShader, nullptr, 0);
immediateContext->VSSetConstantBuffers(0, 1, &skyboxConstantBuffer);
immediateContext->PSSetConstantBuffers(0, 1, &skyboxConstantBuffer);
immediateContext->PSSetShader(skyboxPixelShader, nullptr, 0);
immediateContext->UpdateSubresource(skyboxConstantBuffer, 0, nullptr, &scb, 0, 0);
immediateContext->DrawIndexed(cubeMapSphere->getIndexBufferSize(), 0, 0);
Initially I was planning to use this snippet to update the TextureCube variable in the shader
ID3DX11EffectShaderResourceVariable * cMap;
cMap = skyboxShader->GetVariableByName("gCubeMap")->AsShaderResource();
cMap->SetResource(fullcubemap);
But it seems that has no effect, and in fact, without the following line, the sphere I'm using for the cubemap is textured with a texture used by another object in the scene, so perhaps something is going on there? I'm not sure what, though.
immediateContext->PSSetShaderResources(0, 1, &fullcubemap);//textures
Edit: Probably not the above; I realised that if this wasn't updated, the old texture would still be applied, as it's never cleared after each draw.
Edit: Tried the cubemap with both a sphere and a cube, still the same texture issue.
Edit: Tried loading the shader resource view differently
D3DX11_IMAGE_LOAD_INFO loadSMInfo;
loadSMInfo.MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE;
ID3D11Texture2D* SMTexture = 0;
hr = D3DX11CreateTextureFromFile(device, L"cubemap.dds",
&loadSMInfo, 0, (ID3D11Resource**)&SMTexture, 0);
D3D11_TEXTURE2D_DESC SMTextureDesc;
SMTexture->GetDesc(&SMTextureDesc);
D3D11_SHADER_RESOURCE_VIEW_DESC SMViewDesc;
SMViewDesc.Format = SMTextureDesc.Format;
SMViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURECUBE;
SMViewDesc.TextureCube.MipLevels = SMTextureDesc.MipLevels;
SMViewDesc.TextureCube.MostDetailedMip = 0;
hr = device->CreateShaderResourceView(SMTexture, &SMViewDesc, &fullcubemap);
Still produces the same output, any ideas?
Edit: Tried increasing the zfar distance and the texture remains the exact same no matter what value I put.
Example with second texture with increased view distance.
This texture is used on another object in my scene and comes out fine.
Edit: I have been trying to mess with the scaling of the texture/object
To achieve this I used
vin.position = vin.position * 50.0f;
This is beginning to look sort of like how it should; however, when I turn my camera, the image disappears, so I know this isn't correct. But if I could just scale the image per pixel or per vertex properly, I'm sure I could get the end result.
Edit:
I can confirm the cubemap is rendering correctly. I was ignoring the view/projection space and just using world, and managed to get this, which is the high-quality image I'm after, just not in the correct space. Yes, the faces are incorrect, but I'm not fussed about that now; it's easy enough to swap them around. I just need to get it rendering with this quality, in the correct space.
When in camera space, does it take into account whether it's the outside or inside of the sphere? If my textures were over the outside of the sphere and I view it from the inside, it's not going to look the same, is it?
The issue is with your texture size: it's small and you are applying it to a larger surface. Make larger textures with more pixels.
It's confirmed that zfar and scaling have nothing to do with it.
Finally found the issue, a silly mistake: the matrices were not transposed before being uploaded to the constant buffer. DirectXMath matrices are row-major while HLSL constant buffers default to column-major packing, so each matrix has to be transposed:
scb.world = XMMatrixTranspose(sworld);
scb.view = XMMatrixTranspose(sview);
scb.projection = XMMatrixTranspose(sprojection);

Problems sampling D3D11 depth buffer

I'm getting everything ready in a little DirectX 11.0 project of mine for a deferred rendering pipeline. However, I've been having quite a lot of trouble with sampling the depth buffer from within a pixel shader.
First I define the depth texture and its shader resource view:
D3D11_TEXTURE2D_DESC depthTexDesc;
ZeroMemory(&depthTexDesc, sizeof(depthTexDesc));
depthTexDesc.Width = nWidth;
depthTexDesc.Height = nHeight;
depthTexDesc.Format = DXGI_FORMAT_R32_TYPELESS;
depthTexDesc.Usage = D3D11_USAGE_DEFAULT;
depthTexDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
depthTexDesc.MipLevels = 1;
depthTexDesc.ArraySize = 1;
depthTexDesc.SampleDesc.Count = 1;
depthTexDesc.SampleDesc.Quality = 0;
depthTexDesc.CPUAccessFlags = 0;
depthTexDesc.MiscFlags = 0;
hresult = d3dDevice_->CreateTexture2D(&depthTexDesc, nullptr, &depthTexture_);
D3D11_DEPTH_STENCIL_VIEW_DESC DSVDesc;
ZeroMemory(&DSVDesc, sizeof(DSVDesc));
DSVDesc.Format = DXGI_FORMAT_D32_FLOAT;
DSVDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
DSVDesc.Texture2D.MipSlice = 0;
hresult = d3dDevice_->CreateDepthStencilView(depthTexture_, &DSVDesc, &depthView_);
D3D11_SHADER_RESOURCE_VIEW_DESC gbDepthTexDesc;
ZeroMemory(&gbDepthTexDesc, sizeof(gbDepthTexDesc));
gbDepthTexDesc.Format = DXGI_FORMAT_R32_FLOAT;
gbDepthTexDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
gbDepthTexDesc.Texture2D.MipLevels = 1;
gbDepthTexDesc.Texture2D.MostDetailedMip = -1;
d3dDevice_->CreateShaderResourceView(depthTexture_, &gbDepthTexDesc, &gbDepthView_);
Here's the relevant part of my rendering function:
float clearColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
d3dContext_->ClearRenderTargetView(backBufferTarget_, clearColor);
d3dContext_->ClearDepthStencilView(depthView_, D3D11_CLEAR_DEPTH, 1.0f, 0);
// GBuffer packing pass (in the future): /////////////////////////////////////////
d3dContext_->OMSetRenderTargets(1, &backBufferTarget_, depthView_);
unsigned int nStride = sizeof(Vertex);
unsigned int nOffset = 0;
d3dContext_->IASetInputLayout(inputLayout_);
d3dContext_->IASetVertexBuffers(0, 1, &vertexBuffer_, &nStride, &nOffset);
d3dContext_->IASetIndexBuffer(indexBuffer_, DXGI_FORMAT_R32_UINT, 0);
d3dContext_->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
d3dContext_->VSSetShader(firstVS_, 0, 0);
d3dContext_->PSSetShader(firstPS_, 0, 0);
d3dContext_->DrawIndexed(nIndexCount_, 0, 0);
d3dContext_->OMSetRenderTargets(1, &backBufferTarget_, nullptr);
d3dContext_->VSSetShader(secondVS_, 0, 0);
d3dContext_->PSSetShader(secondPS_, 0, 0);
d3dContext_->PSGetShaderResources(0, 1, &gbDepthView_);
d3dContext_->PSSetSamplers(0, 1, &colorMapSampler_);
d3dContext_->DrawIndexed(nIndexCount_, 0, 0);
swapChain_->Present(0, 0);
In this temporary implementation, firstVS_ and secondVS_ are identical, and their only function is to do all the transforms and pass on the data to the PSs.
And finally, here are firstPS_ and secondPS_:
// firstPS_
float4 main(PS_Input frag) : SV_TARGET
{
return float4(1.0f, 1.0f, 1.0f, 1.0f);
}
// secondPS_
Texture2D<float> depthMap_ : register(t0);
SamplerState colorSampler_ : register(s0);
float4 main(PS_Input frag) : SV_TARGET
{
float4 psOut;
psOut.xyz = depthMap_.Sample(colorSampler_, frag.tex0).xxx;
psOut.w = 1.0f;
return psOut;
}
So, my actual questions:
1) All this code compiles without any issues, but when I sample the depth buffer, it just turns out black. I read this could be caused by having your depth/stencil view still bound via ID3D11DeviceContext::OMSetRenderTargets() at the time you want to sample the depth buffer. I fixed that, but the buffer is still black. I checked the graphics debugger, with no success. So, is my depth buffer not getting written correctly, or am I sampling it the wrong way? firstPS_ works fine.
2) Speaking of sampling, the book I'm using just says "we'll be using a point sampler," but I have no idea what exactly is meant. Right now I'm just using a standard texture map sampler, but is there something else I should sample with?
3) Also, the book uses the SamplerState.Gather() function in secondPS_, but when I tried that it complained that "the expression could not be mapped to pixel shader instruction set." Is Gather() an error in the book, or is it my GPU (D3D feature level 11.0) that doesn't understand what it is? Is Sample() good enough for what I want to do? The original use of Gather() was in the context of creating a silhouette around objects in the depth buffer.
4) I tried to get secondVS_ to draw nothing but a full-screen quad, but FXC complained about my use of SV_VertexID as "invalid," saying that my type should be integral, even though it already was. I read somewhere that SV_VertexID can only be used by the first VS in a pipeline. Is that the problem here? How do I solve this in this particular case? In my current inefficient solution, is the problem being caused by the UVs?
1) You've called PSGetShaderResources instead of PSSetShaderResources. Also, MostDetailedMip should be 0 not -1.
2) A "point sampler" is just a texture sampler with the FILTER field set to something like D3D11_FILTER_MIN_MAG_MIP_POINT.
3) Gather is a function on the Texture2D not the SamplerState as you stated.
4) You get this error if you compile using vs_4_0, try vs_5_0.
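To make (1) and (2) concrete, here is a short sketch of the corrected shader resource view description together with a point sampler. The variable names follow the question; pointSampler_ is a hypothetical addition:
// Corrected SRV description: MostDetailedMip is an index and must be 0 here.
D3D11_SHADER_RESOURCE_VIEW_DESC gbDepthTexDesc;
ZeroMemory(&gbDepthTexDesc, sizeof(gbDepthTexDesc));
gbDepthTexDesc.Format = DXGI_FORMAT_R32_FLOAT;
gbDepthTexDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
gbDepthTexDesc.Texture2D.MostDetailedMip = 0;
gbDepthTexDesc.Texture2D.MipLevels = 1;
d3dDevice_->CreateShaderResourceView(depthTexture_, &gbDepthTexDesc, &gbDepthView_);

// A "point sampler" is just a sampler state with point (nearest-neighbour) filtering.
D3D11_SAMPLER_DESC pointDesc;
ZeroMemory(&pointDesc, sizeof(pointDesc));
pointDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;
pointDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
pointDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
pointDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
pointDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
pointDesc.MaxLOD = D3D11_FLOAT32_MAX;
ID3D11SamplerState* pointSampler_ = nullptr;
d3dDevice_->CreateSamplerState(&pointDesc, &pointSampler_);

// In the second pass, bind (not get) the depth SRV and the point sampler:
d3dContext_->PSSetShaderResources(0, 1, &gbDepthView_);
d3dContext_->PSSetSamplers(0, 1, &pointSampler_);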

Missing some colors from PNG texture in DirectX during loading and saving?

I use standard DirectX functions (like CreateTexture2D, D3DX11SaveTextureToFile and D3DX11CreateShaderResourceViewFromFile) to load a PNG image, render it onto a newly created texture, and then save it to a file. All the textures have power-of-two sizes.
While doing this, I have noticed that some colors from the PNG come out slightly corrupted (similar to, but not the same as, the colors in the source texture). The same goes for transparency: it works for 0% and 100% transparent parts, but not for e.g. 34%.
Are there some big color approximations going on, or am I doing something wrong? If so, how can I solve it?
Here are the two images (left is the source: slightly different colors and some gradient transparency at the bottom; right is the image after loading the first image and rendering it onto the new texture, which was then saved to file):
I don't know what causes this behaviour; maybe the new texture's description:
textureDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
I have tried to change it to DXGI_FORMAT_R32G32B32A32_FLOAT, but the effect was even stranger:
Here is the code for rendering source texture on the new texture:
context->OMSetRenderTargets(1, &renderTargetView, depthStencilView); //to render on new texture instead of the screen
float clearColor[4] = {0.0f, 0.0f, 0.0f, 0.0f}; //red, green, blue, alpha
context->ClearRenderTargetView(renderTargetView, clearColor);
//clear the depth buffer to 1.0 (max depth)
context->ClearDepthStencilView(depthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);
//rendering
turnZBufferOff();
shader->set(context);
object->render(shader, camera, textureManager, context, 0);
swapChain->Present(0, 0);
And in object->render():
UINT stride;
stride = sizeof(Vertex);
UINT offset = 0;
context->IASetVertexBuffers( 0, 1, &buffers->vertexBuffer, &stride, &offset ); //set vertex buffer
context->IASetIndexBuffer( buffers->indexBuffer, DXGI_FORMAT_R16_UINT, 0 ); //set index buffer
context->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST ); //set primitive topology
if(textureID){
context->PSSetShaderResources( 0, 1, &textureManager->get(textureID)->texture);
}
ConstantBuffer2DStructure cbPerObj;
cbPerObj.positionAndScale = XMFLOAT4(center.getX(), center.getY(), halfSize.getX(), halfSize.getY());
cbPerObj.textureCoordinates = XMFLOAT4(textureRectToUse[0].getX(), textureRectToUse[0].getY(), textureRectToUse[1].getX(), textureRectToUse[1].getY());
context->UpdateSubresource(constantBuffer, 0, NULL, &cbPerObj, 0, 0);
context->VSSetConstantBuffers(0, 1, &constantBuffer);
context->PSSetConstantBuffers(0, 1, &constantBuffer);
context->DrawIndexed(6, 0, 0);
The shader is very simple:
VS_OUTPUT VS(float4 inPos : POSITION, float2 inTexCoord : TEXCOORD)
{
VS_OUTPUT output;
output.Pos.zw = float2(0.0f, 1.0f);
//inPos(x,y) = {-1,1}
output.Pos.xy = (inPos.xy * positionAndScale.zw) + positionAndScale.xy;
output.TexCoord.xy = inTexCoord.xy * (textureCoordinates.zw - textureCoordinates.xy) + textureCoordinates.xy;
return output;
}
float4 PS(VS_OUTPUT input) : SV_TARGET
{
return ObjTexture.Sample(ObjSamplerState, input.TexCoord);
}
As an optimisation I pass the sprite's size as a shader parameter (this works fine; the size of the texture, borders, etc. are correct).
Did you set a blend state anywhere? Alpha will not work by default, since the default blend state is no blending at all.
Here is a standard alpha blend state:
D3D11_BLEND_DESC desc;
desc.AlphaToCoverageEnable=false;
desc.IndependentBlendEnable = false;
for (int i =0; i < 8 ; i++)
{
desc.RenderTarget[i].BlendEnable = true;
desc.RenderTarget[i].BlendOp = D3D11_BLEND_OP::D3D11_BLEND_OP_ADD;
desc.RenderTarget[i].BlendOpAlpha = D3D11_BLEND_OP::D3D11_BLEND_OP_ADD;
desc.RenderTarget[i].DestBlend = D3D11_BLEND::D3D11_BLEND_INV_SRC_ALPHA;
desc.RenderTarget[i].DestBlendAlpha = D3D11_BLEND::D3D11_BLEND_ONE;
desc.RenderTarget[i].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE::D3D11_COLOR_WRITE_ENABLE_ALL;
desc.RenderTarget[i].SrcBlend = D3D11_BLEND::D3D11_BLEND_SRC_ALPHA;
desc.RenderTarget[i].SrcBlendAlpha = D3D11_BLEND::D3D11_BLEND_ONE;
}
ID3D11BlendState* state;
device->CreateBlendState(&desc,&state);
return state;
Also, I would use Clear with the alpha component set to 1 instead of 0.
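A brief usage sketch, assuming the blend state created above is stored in an ID3D11BlendState* named blendState (a name chosen for this example): it has to be bound on the output-merger stage before the textured quad is drawn.
// Bind the alpha blend state before rendering the transparent sprite.
float blendFactor[4] = { 1.0f, 1.0f, 1.0f, 1.0f };
context->OMSetBlendState(blendState, blendFactor, 0xffffffff);
// ... DrawIndexed(6, 0, 0) for the quad ...
context->OMSetBlendState(nullptr, nullptr, 0xffffffff); // back to the default (no blending)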
I suggest that your problems stem from importing a layered Fireworks PNG file. Fireworks layered PNGs retain their layers when imported into other software like Flash and Freehand. However, in order to have an exact replication of a layered Fireworks PNG in Photoshop, it's necessary to export that layered PNG as a flattened PNG. Thus, opening it in Photoshop and flattening it is not the solution; the solution lies in opening it and flattening it in Fireworks. (Note: PNGs can be 8, 24 or 32-bit; maybe that needs to be accounted for in your analysis.)

Can't create vertex buffer

I have a Windows Phone 8 C#/XAML project with a DirectX component. I'm trying to render some particles. I create a vertex buffer, and I can see execution go into the function that creates it, but when it gets to updating the vertex buffer, the buffer is NULL. I have not released the buffers yet. Do you know why this happens? Do any of the output messages help? Thanks.
Printouts and errors on my Output Window:
'TaskHost.exe' (Win32): Loaded '\Device\HarddiskVolume4\Windows\System32\d3d11_1SDKLayers.dll'. Cannot find or open the PDB file.
D3D11 WARNING: ID3D11Texture2D::SetPrivateData: Existing private data of same name with different size found! [ STATE_SETTING WARNING #55: SETPRIVATEDATA_CHANGINGPARAMS]
Create vertex buffer
D3D11 WARNING: ID3D11DeviceContext::DrawIndexed: The Pixel Shader unit expects a Sampler to be set at Slot 0, but none is bound. This is perfectly valid, as a NULL Sampler maps to default Sampler state. However, the developer may not want to rely on the defaults. [ EXECUTION WARNING #352: DEVICE_DRAW_SAMPLER_NOT_SET]
The thread 0xb64 has exited with code 0 (0x0).
m_vertexBuffer is null
D3D11 WARNING: ID3D11DeviceContext::DrawIndexed: The Pixel Shader unit expects a Sampler to be set at Slot 0, but none is bound. This is perfectly valid, as a NULL Sampler maps to default Sampler state. However, the developer may not want to rely on the defaults. [ EXECUTION WARNING #352: DEVICE_DRAW_SAMPLER_NOT_SET]
m_vertexBuffer is null
In CreateDeviceResources, I call CreateVertexShader, CreateInputLayout, CreatePixelShader, CreateBuffer. Then I get to creating the sampler and vertex buffer, code below:
auto createCubeTask = (createPSTask && createVSTask).then([this] () {
// Create a texture sampler state description.
D3D11_SAMPLER_DESC samplerDesc;
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
samplerDesc.BorderColor[0] = 0;
samplerDesc.BorderColor[1] = 0;
samplerDesc.BorderColor[2] = 0;
samplerDesc.BorderColor[3] = 0;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;
// Create the texture sampler state.
HRESULT result = m_d3dDevice->CreateSamplerState(&samplerDesc, &m_sampleState);
if(FAILED(result))
{
OutputDebugString(L"Can't CreateSamplerState");
}
//InitParticleSystem();
// Set the maximum number of vertices in the vertex array.
m_vertexCount = m_maxParticles * 6;
// Set the maximum number of indices in the index array.
m_indexCount = m_vertexCount;
// Create the vertex array for the particles that will be rendered.
m_vertices = new VertexType[m_vertexCount];
if(!m_vertices)
{
OutputDebugString(L"Can't create the vertex array for the particles that will be rendered.");
}
else
{
// Initialize vertex array to zeros at first.
int sizeOfVertexType = sizeof(VertexType);
int totalSizeVertex = sizeOfVertexType * m_vertexCount;
memset(m_vertices, 0, totalSizeVertex);
D3D11_SUBRESOURCE_DATA vertexBufferData = {0};
vertexBufferData.pSysMem = m_vertices;
vertexBufferData.SysMemPitch = 0;
vertexBufferData.SysMemSlicePitch = 0;
int sizeOfMVertices = sizeof(m_vertices); // note: sizeof a pointer, not the array; unused below
CD3D11_BUFFER_DESC vertexBufferDesc(
totalSizeVertex, // byteWidth
D3D11_BIND_VERTEX_BUFFER, // bindFlags
D3D11_USAGE_DYNAMIC, // D3D11_USAGE usage = D3D11_USAGE_DEFAULT
D3D11_CPU_ACCESS_WRITE, // cpuaccessFlags
0, // miscFlags
0 // structureByteStride
);
OutputDebugString(L"Create vertex buffer\n");
DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(
&vertexBufferDesc,
&vertexBufferData,
&m_vertexBuffer
)
);
}
unsigned long* indices = new unsigned long[m_indexCount];
if(!indices)
{
OutputDebugString(L"Can't create the index array.");
}
else
{
// Initialize the index array.
for(int i=0; i<m_indexCount; i++)
{
indices[i] = i;
}
// Set up the description of the static index buffer.
// Create the index array.
D3D11_BUFFER_DESC indexBufferDesc;
indexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
indexBufferDesc.ByteWidth = sizeof(unsigned long) * m_indexCount;
indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;
indexBufferDesc.CPUAccessFlags = 0;
indexBufferDesc.MiscFlags = 0;
indexBufferDesc.StructureByteStride = 0;
// Give the subresource structure a pointer to the index data.
D3D11_SUBRESOURCE_DATA indexData;
indexData.pSysMem = indices;
indexData.SysMemPitch = 0;
indexData.SysMemSlicePitch = 0;
// Create the index buffer.
DX::ThrowIfFailed(
m_d3dDevice->CreateBuffer(
&indexBufferDesc,
&indexData,
&m_indexBuffer
)
);
// Release the index array since it is no longer needed.
delete [] indices;
indices = 0;
}
});
createCubeTask.then([this] () {
m_loadingComplete = true;
});
}
My vertex type consists of position, texture coordinates, and color:
struct VertexType
{
DirectX::XMFLOAT3 position;
DirectX::XMFLOAT2 texture;
DirectX::XMFLOAT4 color;
};
Microsoft::WRL::ComPtr<ID3D11SamplerState> m_sampleState;
VertexType* m_vertices;
Microsoft::WRL::ComPtr<ID3D11Buffer> m_vertexBuffer;
const D3D11_INPUT_ELEMENT_DESC vertexDesc[] =
{
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
VertexShader.HLSL:
cbuffer ModelViewProjectionConstantBuffer : register(b0)
{
matrix model;
matrix view;
matrix projection;
};
struct VertexInputType
{
float4 position : POSITION;
float2 tex : TEXCOORD0;
float4 color : COLOR;
};
struct PixelInputType
{
float4 position : SV_POSITION;
float2 tex : TEXCOORD0;
float4 color : COLOR;
};
PixelInputType main(VertexInputType input)
{
PixelInputType output;
// Change the position vector to be 4 units for proper matrix calculations.
input.position.w = 1.0f;
// Calculate the position of the vertex against the world, view, and projection matrices.
output.position = mul(input.position, model);
output.position = mul(output.position, view);
output.position = mul(output.position, projection);
// Store the texture coordinates for the pixel shader.
output.tex = input.tex;
// Store the particle color for the pixel shader.
output.color = input.color;
return output;
}
After the call to: m_d3dDevice->CreateBuffer(&vertexBufferDesc, &vertexBufferData, &m_vertexBuffer) the m_vertexBuffer is not null.
But when I get to my Update() function, m_vertexBuffer is NULL!
D3D11_MAPPED_SUBRESOURCE mappedResource;
if (m_vertexBuffer == nullptr)
{
OutputDebugString(L"m_vertexBuffer is null\n");
}
else
{
// Lock the vertex buffer.
DX::ThrowIfFailed(m_d3dContext->Map(m_vertexBuffer.Get(), 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource));
// Get a pointer to the data in the vertex buffer.
VertexType * verticesPtr = (VertexType*)mappedResource.pData;
//// Copy the data into the vertex buffer.
int sizeOfVertices = sizeof(VertexType) * m_vertexCount;
memcpy(verticesPtr, (void*)m_vertices, sizeOfVertices);
//// Unlock the vertex buffer.
m_d3dContext->Unmap(m_vertexBuffer.Get(), 0);
}
About the sampler: you need to assign it to your pixel shader. Since m_sampleState is a ComPtr, use GetAddressOf() rather than & so the sampler isn't released (the same pitfall as with the vertex buffer below):
m_d3dContext.Get()->PSSetSamplers(0, 1, m_sampleState.GetAddressOf());
I was making an incorrect call on m_vertexBuffer when I went to render it.
This fixes the error with m_vertexBuffer being null.
Previous:
m_d3dContext.Get()->IASetVertexBuffers(0, 1, &m_vertexBuffer, &stride, &offset);
Fix:
m_d3dContext.Get()->IASetVertexBuffers(0, 1, m_vertexBuffer.GetAddressOf(), &stride, &offset);
On a Microsoft::WRL::ComPtr, operator& behaves like ReleaseAndGetAddressOf(): it releases the interface the ComPtr holds and returns the address of the now-null pointer, which is why m_vertexBuffer became NULL after the first frame. GetAddressOf() returns the address without releasing.
It doesn't fix the error with the sampler states though:
DrawIndexed: The Pixel Shader unit expects a Sampler to be set at Slot 0
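That remaining DEVICE_DRAW_SAMPLER_NOT_SET warning is what the other answer addresses: the sampler created in CreateDeviceResources has to be bound to pixel-shader slot 0 before the draw. A minimal placement sketch, using the variables from the question:
// The warning names slot 0, so bind the sampler there before DrawIndexed.
m_d3dContext->PSSetSamplers(0, 1, m_sampleState.GetAddressOf());
m_d3dContext->DrawIndexed(m_indexCount, 0, 0);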