DirectX 10 swap chain scaling - C++

I want to disable swap chain scaling on Windows.
I'm rendering a 1600x900 quad on top of a 1366x768 window, but it's stretched.
Desc_.BufferCount = 1;
Desc_.OutputWindow = windowWnd;
Desc_.Windowed = true;
Desc_.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
Desc_.BufferDesc.Height = height;
Desc_.BufferDesc.Width = width;
Desc_.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
Desc_.SampleDesc.Count = 1;
Desc_.SampleDesc.Quality = 0;
Desc_.BufferDesc.RefreshRate.Numerator = 60;
Desc_.BufferDesc.RefreshRate.Denominator = 1;
Desc_.BufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_PROGRESSIVE;
Desc_.BufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;
Desc_.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
Desc_.Flags = 0;
This is the initialization of my DXGI swap chain description.
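With the blt model (DXGI_SWAP_EFFECT_DISCARD) in windowed mode, Present always stretches the back buffer to the window's client area; BufferDesc.Scaling only affects fullscreen mode selection. If targeting Windows 8 or later is acceptable, a flip-model swap chain can opt out of that stretch with DXGI_SCALING_NONE. A minimal sketch, assuming an IDXGIFactory2 (here factory2, which is not in the question and would have to be obtained via CreateDXGIFactory1/QueryInterface) and reusing device/windowWnd from the question:
DXGI_SWAP_CHAIN_DESC1 desc1 = {};
desc1.Width = 1600;  // back buffer size, decoupled from the 1366x768 window
desc1.Height = 900;
desc1.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc1.SampleDesc.Count = 1;
desc1.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
desc1.BufferCount = 2;  // flip model requires at least two buffers
desc1.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;  // flip model is required for DXGI_SCALING_NONE
desc1.Scaling = DXGI_SCALING_NONE;  // present 1:1, no stretch to the client area
Microsoft::WRL::ComPtr<IDXGISwapChain1> swapChain1;
HRESULT hr = factory2->CreateSwapChainForHwnd(
    device, windowWnd, &desc1, nullptr, nullptr, &swapChain1);
Note that with DXGI_SCALING_NONE a 1600x900 back buffer in a 1366x768 client area is clipped rather than shrunk, which may or may not be what you want.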

Related

Texture readback showing all 0s for DirectX 11

I am new to working with textures and DirectX and am having issues reading texture data back from the GPU.
I am interested in reading back only a specific subset of my source texture. Also, I am trying to read it back at the least detailed mip level (a 1x1 texture). Steps I follow:
1. Copy a subregion of the source texture into a new texture
D3D11_TEXTURE2D_DESC desc = {0};
desc.Width = 1;
desc.Height = 1;
desc.MipLevels = 0;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.SampleDesc.Quality = 0;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
desc.CPUAccessFlags = 0;
desc.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS;
hr = pD3D11Device->CreateTexture2D(&desc, nullptr, &pSrcTexture);
D3D11_BOX srcRegion = {0};
srcRegion.left = 1000;
srcRegion.right = 1250;
srcRegion.top = 500;
srcRegion.bottom = 750;
srcRegion.front = 0;
srcRegion.back = 1;
pD3D11DeviceContext->CopySubresourceRegion(pSrcTexture, 0, 0, 0, 0, srcResource, 0, &srcRegion);
2. Create shader resource view and generate mipmaps for newly created texture
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {0};
srvDesc.Format = desc.Format;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = -1;
srvDesc.Texture2D.MostDetailedMip = 0;
ID3D11ShaderResourceView* pShaderResourceView = nullptr;
hr = pD3D11Device->CreateShaderResourceView(pSrcTexture, &srvDesc, &pShaderResourceView);
pD3D11DeviceContext->GenerateMips(pShaderResourceView);
3. Copy into staging texture to be read back by CPU
D3D11_TEXTURE2D_DESC desc2 = {0};
desc2.Width = 1;
desc2.Height = 1;
desc2.MipLevels = 1;
desc2.ArraySize = 1;
desc2.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc2.SampleDesc.Count = 1;
desc2.SampleDesc.Quality = 0;
desc2.Usage = D3D11_USAGE_STAGING;
desc2.BindFlags = 0;
desc2.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc2.MiscFlags = 0;
ID3D11Texture2D* pStagingTexture = nullptr;
hr = pD3D11Device->CreateTexture2D(&desc2, nullptr, &pStagingTexture);
pD3D11DeviceContext->CopyResource(pStagingTexture, pSrcTexture);
4. Map the subresource to access the underlying data, unmapping when finished
D3D11_MAPPED_SUBRESOURCE mappedResource = {0};
hr = pD3D11DeviceContext->Map(pStagingTexture, 0, D3D11_MAP_READ, 0, &mappedResource);
FLOAT* pTexels = (FLOAT*)mappedResource.pData;
std::cout << pTexels[0] << pTexels[1] << pTexels[2] << pTexels[3] << std::endl; // all zeros here
pD3D11DeviceContext->Unmap(pStagingTexture, 0);
Please note that none of my hr results indicate failure. Why is my texture data showing as all zeros?
Any guidance on how to resolve this?
CopyResource, CopySubresourceRegion, and GenerateMips do not return an HRESULT, but any of those calls may still have failed. A good way to determine that is to enable the Direct3D Debug Device and look for debug output. See this blog post and Microsoft Docs.
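A minimal sketch of turning that on at device creation (assuming you create the device yourself; D3D11_CREATE_DEVICE_DEBUG requires the SDK layers to be installed):
UINT createFlags = 0;
#if defined(_DEBUG)
createFlags |= D3D11_CREATE_DEVICE_DEBUG; // route runtime errors/warnings to the debug output
#endif
hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
    createFlags, nullptr, 0, D3D11_SDK_VERSION,
    &pD3D11Device, nullptr, &pD3D11DeviceContext);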
I suspect the problem is that when you called GenerateMips it didn't do anything, because you provided a 1x1 texture as the starting place, so there are no mips to generate. I also don't see how you set up srcResource, but you are trying to use CopySubresourceRegion to copy a 250x250 texture region into a 1x1 texture, which is going to fail as well.
You should take a look at DirectXTK, in particular the DDSTextureLoader / WICTextureLoader modules, which implement auto-mip generation, and ScreenGrab, which does read-back.
One minor note: = {0}; was a way to zero-fill structs back in VS 2013 or earlier. With C++11-conformant compilers (VS 2015 or later), just use = {};, which does the same zero-fill.
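Putting the above together, a hedged sketch of the corrected flow (reusing pD3D11Device, pD3D11DeviceContext, srcResource, the 250x250 region, and the pStagingTexture from step 3 of the question; an illustration of the approach, not tested code):
// 1. Intermediate texture the size of the region, with a full mip chain down to 1x1.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = 250;
desc.Height = 250;
desc.MipLevels = 0; // 0 = let the runtime allocate the full mip chain
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET; // required for GenerateMips
desc.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS;
ID3D11Texture2D* pMipTexture = nullptr;
hr = pD3D11Device->CreateTexture2D(&desc, nullptr, &pMipTexture);
// 2. Copy the region into mip 0, then generate the chain.
D3D11_BOX srcRegion = { 1000, 500, 0, 1250, 750, 1 }; // left, top, front, right, bottom, back
pD3D11DeviceContext->CopySubresourceRegion(pMipTexture, 0, 0, 0, 0, srcResource, 0, &srcRegion);
ID3D11ShaderResourceView* pSRV = nullptr;
hr = pD3D11Device->CreateShaderResourceView(pMipTexture, nullptr, &pSRV);
pD3D11DeviceContext->GenerateMips(pSRV);
// 3. Copy only the last (1x1) mip into the 1x1 staging texture.
pMipTexture->GetDesc(&desc); // MipLevels now holds the actual count (8 for 250x250)
UINT lastMip = D3D11CalcSubresource(desc.MipLevels - 1, 0, desc.MipLevels);
pD3D11DeviceContext->CopySubresourceRegion(pStagingTexture, 0, 0, 0, 0, pMipTexture, lastMip, nullptr);
// 4. Map and read UNORM bytes; R8G8B8A8_UNORM texels are 8-bit channels, not floats.
D3D11_MAPPED_SUBRESOURCE mapped = {};
hr = pD3D11DeviceContext->Map(pStagingTexture, 0, D3D11_MAP_READ, 0, &mapped);
const uint8_t* texel = static_cast<const uint8_t*>(mapped.pData);
std::cout << (int)texel[0] << (int)texel[1] << (int)texel[2] << (int)texel[3] << std::endl;
pD3D11DeviceContext->Unmap(pStagingTexture, 0);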

Trying to copy pixel data from CPU to GPU using Map; everything runs fine but the screen comes up empty (DirectX)

In my TextureHendler class, I create an empty texture using this:
D3D11_TEXTURE2D_DESC textureDesc = { 0 };
textureDesc.Width = textureWidth;
textureDesc.Height = textureHeight;
textureDesc.Format = dxgiFormat;
textureDesc.Usage = D3D11_USAGE_DYNAMIC;
textureDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
textureDesc.MiscFlags = 0;
textureDesc.MipLevels = 1;
textureDesc.ArraySize = 1;
textureDesc.SampleDesc.Count = 1;
textureDesc.SampleDesc.Quality = 0;
textureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
hr = m_deviceResource->GetD3DDevice()->CreateTexture2D(
&textureDesc,
nullptr,
&texture);
and at runtime, I want to load data from CPU memory using the code below:
Platform::Array<unsigned char>^ datapointer = GetPixelBytes();
static constexpr int BYTES_IN_RGBA_PIXEL = 4;
auto rowspan = BYTES_IN_RGBA_PIXEL * width;
D3D11_MAPPED_SUBRESOURCE ms;
auto device_context = m_deviceResource->GetD3DDeviceContext();
device_context->Map(m_texture->GetTexture2d(), 0, D3D11_MAP_WRITE_DISCARD, 0, &ms);
uint8_t* mappedData = reinterpret_cast<uint8_t*>(ms.pData);
for (int i = 0; i < datapointer->Length; i++)
{
mappedData[i] = datapointer[i];
}
device_context->Unmap(m_texture->GetTexture2d(), 0);
Everything runs fine, but the output screen comes up black.
Update:
std::shared_ptr<TextureHendler> m_texture;
TextureHendler holds:
public:
ID3D11Texture2D* GetTexture2d() { return texture.Get(); }
private:
Microsoft::WRL::ComPtr<ID3D11Texture2D> texture;
and a load function whose content is shown above.
Here is the sample code: https://github.com/AbhishekSharma-SEG/Demo_DXPlayer. Thanks for the help.
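One common pitfall worth checking here (a frequent cause of black output with D3D11_MAP_WRITE_DISCARD, not something this thread confirms): the loop above copies the pixels as one packed run and ignores D3D11_MAPPED_SUBRESOURCE::RowPitch, which the driver may pad beyond width * 4 bytes. A hedged sketch of a pitch-aware copy, reusing the names from the question:
D3D11_MAPPED_SUBRESOURCE ms;
auto ctx = m_deviceResource->GetD3DDeviceContext();
HRESULT hr = ctx->Map(m_texture->GetTexture2d(), 0, D3D11_MAP_WRITE_DISCARD, 0, &ms);
if (SUCCEEDED(hr))
{
    uint8_t* mappedData = reinterpret_cast<uint8_t*>(ms.pData);
    // Source rows are tightly packed (rowspan bytes); destination rows are
    // ms.RowPitch bytes apart, so copy one row at a time.
    for (int y = 0; y < textureHeight; ++y)
    {
        memcpy(mappedData + y * ms.RowPitch,
               datapointer->Data + y * rowspan,
               rowspan);
    }
    ctx->Unmap(m_texture->GetTexture2d(), 0);
}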

How to stop Direct3D 11 from stretching fullscreen to monitor size?

I am trying to make my Direct3D window fullscreen, with an 800x600 resolution. However, everything I try makes the screen stretch to cover the entire monitor, instead of just taking up an 800x600 area with black bars on the sides. The cursor is also stretched.
I create my swap chain with this DXGI_SWAP_CHAIN_DESC:
DXGI_SWAP_CHAIN_DESC sd{};
sd.BufferDesc.Width = 0;
sd.BufferDesc.Height = 0;
sd.BufferDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
sd.BufferDesc.RefreshRate.Numerator = 0;
sd.BufferDesc.RefreshRate.Denominator = 0;
sd.BufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;
sd.BufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
sd.SampleDesc.Count = 1;
sd.SampleDesc.Quality = 0;
sd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
sd.BufferCount = 1;
sd.OutputWindow = hWnd;
sd.Windowed = TRUE;
sd.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
sd.Flags = 0;
Then I set the screen resolution using this code:
DEVMODEW devMode{};
devMode.dmSize = sizeof(devMode);
devMode.dmPelsWidth = width;
devMode.dmPelsHeight = height;
devMode.dmFields = DM_PELSHEIGHT | DM_PELSWIDTH;
LONG res = ChangeDisplaySettingsW(&devMode, CDS_FULLSCREEN);
Finally I set the swap chain to be fullscreen:
HRESULT hr = swap->SetFullscreenState(on, nullptr);
Also, in my window procedure I call this whenever WM_SIZE is received:
swapChain.reset(); // Destroy swap chain
context->ClearState();
if (FAILED(swap->ResizeBuffers(0, 0, 0, DXGI_FORMAT_UNKNOWN, 0))) throw Exception("Failed to resize buffers");
swapChain.emplace(swap.Get(), device.Get(), context.Get(), hWnd); // Recreate swap chain
I have tried using DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH in both the DXGI_SWAP_CHAIN_DESC and the ResizeBuffers call, as well as calling IDXGISwapChain::ResizeTarget with my desired size, but I still get the same problem.
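A hedged sketch of one thing to try: request a centered (unscaled) mode via ResizeTarget before going fullscreen. Whether you actually get black bars still depends on the display driver's scaling setting, since GPU control panels can override centered/stretched behavior:
DXGI_MODE_DESC mode{};
mode.Width = 800;
mode.Height = 600;
mode.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
mode.Scaling = DXGI_MODE_SCALING_CENTERED; // ask DXGI for an unscaled, centered mode
HRESULT hr = swap->ResizeTarget(&mode);
if (SUCCEEDED(hr))
    hr = swap->SetFullscreenState(TRUE, nullptr);
Since the swap chain already allows mode switches, DXGI can perform the display mode change itself, so the ChangeDisplaySettingsW call should not be needed alongside SetFullscreenState.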

DirectX cuts vertices and only draws the last call

This is what happens in DirectX:
It should display 5 of those birds, but it only draws one (the last one), and not correctly at that.
And this is how it really should look (same buffers etc., but rendered in OpenGL):
So any idea what could cause the problem?
My calls are:
Initialize:
this->device->QueryInterface(__uuidof(IDXGIDevice), (LPVOID*)&dxgiDevice);
dxgiDevice->GetAdapter(&adapter);
adapter->GetParent(IID_PPV_ARGS(&factory));
ZeroMemory(&swapChainDescription, sizeof(swapChainDescription));
swapChainDescription.BufferDesc.Width = this->description.ResolutionWidth;
swapChainDescription.BufferDesc.Height = this->description.ResolutionHeight;
swapChainDescription.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
swapChainDescription.BufferDesc.RefreshRate.Numerator = 60;
swapChainDescription.BufferDesc.RefreshRate.Denominator = 1;
swapChainDescription.BufferDesc.Scaling = DXGI_MODE_SCALING_STRETCHED;
swapChainDescription.BufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_PROGRESSIVE;
swapChainDescription.SampleDesc.Count = 1;
swapChainDescription.SampleDesc.Quality = 0;
swapChainDescription.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChainDescription.BufferCount = 1;
swapChainDescription.OutputWindow = this->hwnd;
swapChainDescription.Windowed = !this->window->GetDescription().Fullscreen;
swapChainDescription.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
swapChainDescription.Flags = DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH;
factory->CreateSwapChain(this->device, &swapChainDescription, &this->swapChain);
this->swapChain->GetBuffer(0, __uuidof(ID3D10Texture2D), (LPVOID*)&backBuffer);
this->device->CreateRenderTargetView(backBuffer, NULL, &this->backBufferView);
backBuffer->Release();
ZeroMemory(&depthStencilDescription, sizeof(depthStencilDescription));
depthStencilDescription.Width = this->description.ResolutionWidth;
depthStencilDescription.Height = this->description.ResolutionHeight;
depthStencilDescription.MipLevels = 1;
depthStencilDescription.ArraySize = 1;
depthStencilDescription.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthStencilDescription.SampleDesc.Count = 1;
depthStencilDescription.SampleDesc.Quality = 0;
depthStencilDescription.Usage = D3D10_USAGE_DEFAULT;
depthStencilDescription.BindFlags = D3D10_BIND_DEPTH_STENCIL;
depthStencilDescription.CPUAccessFlags = 0;
depthStencilDescription.MiscFlags = 0;
this->device->CreateTexture2D(&depthStencilDescription, 0, &depthStencilBuffer);
this->device->CreateDepthStencilView(depthStencilBuffer, 0, &this->depthStencilBufferView);
depthStencilBuffer->Release();
viewPort.Width = this->description.ResolutionWidth;
viewPort.Height = this->description.ResolutionHeight;
viewPort.MinDepth = 0.0f;
viewPort.MaxDepth = 1.0f;
viewPort.TopLeftX = 0;
viewPort.TopLeftY = 0;
this->device->RSSetViewports(1, &viewPort);
D3D10_BLEND_DESC BlendState;
ZeroMemory(&BlendState, sizeof(D3D10_BLEND_DESC));
BlendState.AlphaToCoverageEnable = FALSE;
BlendState.BlendEnable[0] = TRUE;
BlendState.SrcBlend = D3D10_BLEND_SRC_ALPHA;
BlendState.DestBlend = D3D10_BLEND_INV_SRC_ALPHA;
BlendState.BlendOp = D3D10_BLEND_OP_ADD;
BlendState.SrcBlendAlpha = D3D10_BLEND_ZERO;
BlendState.DestBlendAlpha = D3D10_BLEND_ZERO;
BlendState.BlendOpAlpha = D3D10_BLEND_OP_ADD;
BlendState.RenderTargetWriteMask[0] = D3D10_COLOR_WRITE_ENABLE_ALL;
this->device->CreateBlendState(&BlendState, &this->blendState);
Each frame (broken down to the relevant calls):
this->device->OMSetBlendState(this->blendState, 0, 0xffffffff);
this->device->OMSetRenderTargets(1, &this->backBufferView, this->depthStencilBufferView);
this->device->ClearRenderTargetView(this->backBufferView, &Color.colors[0]);
this->device->ClearDepthStencilView(this->depthStencilBufferView, D3D10_CLEAR_DEPTH |
D3D10_CLEAR_STENCIL, 1.0f, 0);
In a for loop (5 times):
ID3D10Buffer* buff = (ID3D10Buffer*)buffer->GetData();
UINT stride = buffer->GetStride();
UINT offset = 0;
this->device->IASetVertexBuffers(0, 1, &buff, &stride, &offset);
buff = (ID3D10Buffer*)buffer->GetData();
this->device->IASetIndexBuffer(buff, DXGI_FORMAT_R32_UINT, 0);
this->device->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
this->device->DrawIndexed(Indexes, 0, 0);
At the end:
this->swapChain->Present(0, 0);
I got it fixed; here it is for others:
Vertices getting cut:
My near/far planes were too close to each other (they were 0.1f and 100.0f). I changed both values and now it works.
DirectX only drawing the last call:
I had to apply the material again for each model.
So before each draw call I now do:
// Set coresponding input layout
this->device->IASetInputLayout(this->inputLayout);
// Apply specified pass of current technique
this->technique->GetPassByIndex(0)->Apply(0);

I am trying to set a chess board texture for a square

I am trying to set a chessboard texture with green and blue texels for a square.
Instead, my square is black.
I don't get any compile or runtime errors.
Is there any problem with my texture code?
vector<unsigned char> texelsBuffer;
for(int i = 0; i < N; ++i)
for(int j = 0; j < N; ++j)
{
texelsBuffer.push_back(0.0f);
if((i+j) % 2 == 0) {
texelsBuffer.push_back(1.0f);
texelsBuffer.push_back(0.0f);
}
else {
texelsBuffer.push_back(0.0f);
texelsBuffer.push_back(1.0f);
}
texelsBuffer.push_back(1.0f);
}
ID3D11Texture2D* tex = 0;
D3D11_TEXTURE2D_DESC textureDescr;
ZeroMemory(&textureDescr, sizeof(D3D11_TEXTURE2D_DESC));
textureDescr.ArraySize = 1;
textureDescr.BindFlags = D3D11_BIND_SHADER_RESOURCE;
textureDescr.CPUAccessFlags = 0;
textureDescr.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
textureDescr.Height = N;
textureDescr.Width = N;
textureDescr.MipLevels = 1;
textureDescr.MiscFlags = 0;
textureDescr.SampleDesc.Count = 1;
textureDescr.SampleDesc.Quality = 0;
textureDescr.Usage = D3D11_USAGE_IMMUTABLE;
D3D11_SUBRESOURCE_DATA texInitData;
ZeroMemory(&texInitData, sizeof(D3D11_SUBRESOURCE_DATA));
texInitData.pSysMem = (void*)&texelsBuffer[0];
texInitData.SysMemPitch = sizeof(unsigned char) * N;
texInitData.SysMemSlicePitch = 0;
g_pd3dDevice->CreateTexture2D(&textureDescr, &texInitData, &tex);
g_pd3dDevice->CreateShaderResourceView(tex, NULL, &g_pTextureRV);
Or my sampler code?
D3D11_SAMPLER_DESC sampDesc;
ZeroMemory( &sampDesc, sizeof(sampDesc) );
sampDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
sampDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
sampDesc.MinLOD = 0;
sampDesc.MaxLOD = D3D11_FLOAT32_MAX;
g_pd3dDevice->CreateSamplerState(&sampDesc, &g_pSamplerLinear);
You're populating texelsBuffer (a vector of unsigned char) with floating-point values; 0.0f and 1.0f truncate to the bytes 0 and 1, which are both nearly black in UNORM encoding. Use 255 or 0xFF to set a channel to 1.0.
You should also consider making your chess squares larger than one texel; unless you always render the texture at a 1:1 pixel-to-texel ratio, you will end up blending between the two colors at most sample locations.
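A hedged sketch of the corrected fill, using byte values instead of floats. Note also (beyond what the answer mentions) that SysMemPitch for an R8G8B8A8 texture should be N * 4 bytes per row, not N:
vector<unsigned char> texelsBuffer;
for (int i = 0; i < N; ++i)
    for (int j = 0; j < N; ++j)
    {
        bool green = ((i + j) % 2 == 0);
        texelsBuffer.push_back(0);               // R
        texelsBuffer.push_back(green ? 255 : 0); // G
        texelsBuffer.push_back(green ? 0 : 255); // B
        texelsBuffer.push_back(255);             // A = opaque
    }
// ...
texInitData.SysMemPitch = N * 4; // bytes per row: N texels * 4 bytes each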