How to stop Direct3D 11 from stretching fullscreen to monitor size? - c++

I am trying to make my Direct3D window fullscreen, with an 800x600 resolution. However, everything I try makes the screen stretch to cover the entire monitor, instead of just taking up an 800x600 area with black bars on the sides. The cursor is also stretched.
I create my swap chain with this DXGI_SWAP_CHAIN_DESC:
DXGI_SWAP_CHAIN_DESC sd{};
sd.BufferDesc.Width = 0;
sd.BufferDesc.Height = 0;
sd.BufferDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
sd.BufferDesc.RefreshRate.Numerator = 0;
sd.BufferDesc.RefreshRate.Denominator = 0;
sd.BufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;
sd.BufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
sd.SampleDesc.Count = 1;
sd.SampleDesc.Quality = 0;
sd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
sd.BufferCount = 1;
sd.OutputWindow = hWnd;
sd.Windowed = TRUE;
sd.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
sd.Flags = 0;
Then I set the screen resolution using this code:
DEVMODEW devMode{};
devMode.dmSize = sizeof(devMode);
devMode.dmPelsWidth = width;
devMode.dmPelsHeight = height;
devMode.dmFields = DM_PELSHEIGHT | DM_PELSWIDTH;
LONG res = ChangeDisplaySettingsW(&devMode, CDS_FULLSCREEN);
Finally I set the swap chain to be fullscreen:
HRESULT hr = swap->SetFullscreenState(on, nullptr);
Also, in my window procedure I call this whenever WM_SIZE is received:
swapChain.reset(); // Destroy swap chain
context->ClearState();
if (FAILED(swap->ResizeBuffers(0, 0, 0, DXGI_FORMAT_UNKNOWN, 0))) throw Exception("Failed to resize buffers");
swapChain.emplace(swap.Get(), device.Get(), context.Get(), hWnd); // Recreate swap chain
I have tried using DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH in both the DXGI_SWAP_CHAIN_DESC and the ResizeBuffers, as well as calling IDXGISwapChain::ResizeTarget with my desired size, but I still get the same problem.
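For reference, this is roughly the ResizeTarget variant I tried (the DXGI_MODE_DESC fields below are reconstructed for illustration, not copied from my code):
// Reconstructed sketch of the ResizeTarget attempt described above.
DXGI_MODE_DESC mode{};
mode.Width = 800;
mode.Height = 600;
mode.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
mode.RefreshRate.Numerator = 0;                    // let DXGI pick the refresh rate
mode.RefreshRate.Denominator = 0;
mode.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
mode.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;
HRESULT hr = swap->ResizeTarget(&mode);
if (SUCCEEDED(hr))
    hr = swap->SetFullscreenState(TRUE, nullptr);  // the output is still stretched afterwards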

Related

Texture readback showing all 0s for DirectX 11

I am new to working with textures and DirectX, and I am having issues reading back texture data from the GPU.
I am interested in reading back only a specific subset of my source texture. I am also trying to read it back at the least detailed mip level (a 1x1 texture). These are the steps I follow:
1. Copy subregion of source texture into new texture
D3D11_TEXTURE2D_DESC desc = {0};
desc.Width = 1;
desc.Height = 1;
desc.MipLevels = 0;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.SampleDesc.Quality = 0;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
desc.CPUAccessFlags = 0;
desc.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS;
hr = pD3D11Device->CreateTexture2D(&desc, nullptr, &pSrcTexture);
D3D11_BOX srcRegion = {0};
srcRegion.left = 1000;
srcRegion.right = 1250;
srcRegion.top = 500;
srcRegion.bottom = 750;
srcRegion.front = 0;
srcRegion.back = 1;
pD3D11DeviceContext->CopySubresourceRegion(pSrcTexture, 0, 0, 0, 0, srcResource, 0, &srcRegion);
2. Create shader resource view and generate mipmaps for newly created texture
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {0};
srvDesc.Format = desc.Format;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = -1;
srvDesc.Texture2D.MostDetailedMip = 0;
ID3D11ShaderResourceView* pShaderResourceView = nullptr;
hr = pD3D11Device->CreateShaderResourceView(pSrcTexture, &srvDesc, &pShaderResourceView);
pD3D11DeviceContext->GenerateMips(pShaderResourceView);
3. Copy into staging texture to be read back by CPU
D3D11_TEXTURE2D_DESC desc2 = {0};
desc2.Width = 1;
desc2.Height = 1;
desc2.MipLevels = 1;
desc2.ArraySize = 1;
desc2.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc2.SampleDesc.Count = 1;
desc2.SampleDesc.Quality = 0;
desc2.Usage = D3D11_USAGE_STAGING;
desc2.BindFlags = 0;
desc2.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc2.MiscFlags = 0;
ID3D11Texture2D* pStagingTexture = nullptr;
hr = pD3D11Device->CreateTexture2D(&desc2, nullptr, &pStagingTexture);
pD3D11DeviceContext->CopyResource(pStagingTexture, pSrcTexture);
4. Map the subresource to access the underlying data, unmapping when finished
D3D11_MAPPED_SUBRESOURCE mappedResource = {0};
hr = pD3D11DeviceContext->Map(pStagingTexture, 0, D3D11_MAP_READ, 0, &mappedResource);
FLOAT* pTexels = (FLOAT*)mappedResource.pData;
std::cout << pTexels[0] << pTexels[1] << pTexels[2] << pTexels[3] << std::endl; // all zeros here
pD3D11DeviceContext->Unmap(pStagingTexture, 0);
Please note that none of my hr results are failing. Why is my texture data showing as all zeros?
Any guidance on how to resolve this?
CopyResource, CopySubresourceRegion, and GenerateMips do not return an HRESULT, but the operation may still have failed in any of them. A good way to determine that is to enable the Direct3D Debug Device and look for debug output. See this blog post and Microsoft Docs.
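For illustration only (this isn't your device-creation code), the debug layer is enabled with one extra flag when the device is created, and its messages then show up in the debugger output:
UINT flags = 0;
#ifdef _DEBUG
flags |= D3D11_CREATE_DEVICE_DEBUG;   // turns on the validation messages mentioned above
#endif
ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
                               nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &context);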
I suspect the problem is that when you called GenerateMips it didn't do anything, because you provided a 1x1 texture as the starting place, so it doesn't have any mips. I also don't see how you set up srcResource, but you are trying to use CopySubresourceRegion to copy a 250x250 texture region into a 1x1 texture, which is going to fail as well.
You should take a look at DirectXTK and the DDSTextureLoader / WICTextureLoader modules in particular which implement auto-mip generation, and ScreenGrab which does read-back.
One minor note: = {0}; was a way to zero-fill structs back in VS 2013 or earlier. With C++11 conformant compilers (VS 2015 or later), just use = {}; as that does the zero-fill.
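Putting those points together, here is a minimal sketch of the approach under a few assumptions: srcResource is a DXGI_FORMAT_R8G8B8A8_UNORM 2D texture, the copied region stays 250x250, and desc2 is the 1x1 staging description from step 3. The intermediate texture is sized to the copied region, gets a full mip chain, and only its smallest mip is copied into the staging texture.
// Intermediate texture sized to the 250x250 region, with a full mip chain.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = 250;
desc.Height = 250;
desc.MipLevels = 0;                                     // 0 = allocate every mip level (250 ... 1)
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET; // required for GenerateMips
desc.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS;
ID3D11Texture2D* pMipTexture = nullptr;
hr = pD3D11Device->CreateTexture2D(&desc, nullptr, &pMipTexture);
// Copy the region into mip 0 of the intermediate texture.
D3D11_BOX srcRegion = { 1000, 500, 0, 1250, 750, 1 };   // left, top, front, right, bottom, back
pD3D11DeviceContext->CopySubresourceRegion(pMipTexture, 0, 0, 0, 0, srcResource, 0, &srcRegion);
// Generate the mip chain (a default SRV over the whole resource is enough).
ID3D11ShaderResourceView* pSRV = nullptr;
hr = pD3D11Device->CreateShaderResourceView(pMipTexture, nullptr, &pSRV);
pD3D11DeviceContext->GenerateMips(pSRV);
// A 250x250 chain has 8 mips (250, 125, 62, 31, 15, 7, 3, 1); index 7 is the 1x1 level.
const UINT lastMip = 7;
// Copy just the 1x1 mip into the staging texture from step 3, then map it.
ID3D11Texture2D* pStagingTexture = nullptr;
hr = pD3D11Device->CreateTexture2D(&desc2, nullptr, &pStagingTexture);
pD3D11DeviceContext->CopySubresourceRegion(pStagingTexture, 0, 0, 0, 0, pMipTexture, lastMip, nullptr);
D3D11_MAPPED_SUBRESOURCE mapped = {};
hr = pD3D11DeviceContext->Map(pStagingTexture, 0, D3D11_MAP_READ, 0, &mapped);
const BYTE* texel = static_cast<const BYTE*>(mapped.pData);  // R8G8B8A8_UNORM is four bytes per texel, not FLOATs
std::cout << (int)texel[0] << " " << (int)texel[1] << " " << (int)texel[2] << " " << (int)texel[3] << std::endl;
pD3D11DeviceContext->Unmap(pStagingTexture, 0);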

DirectX 10 swap chain scaling

I want to disable swap chain scaling on Windows.
I'm rendering a 1600x900 quad on top of a 1366x768 window, but it's stretched.
Desc_.BufferCount = 1;
Desc_.OutputWindow = windowWnd;
Desc_.Windowed = true;
Desc_.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
Desc_.BufferDesc.Height = height;
Desc_.BufferDesc.Width = width;
Desc_.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
Desc_.SampleDesc.Count = 1;
Desc_.SampleDesc.Quality = 0;
Desc_.BufferDesc.RefreshRate.Numerator = 60;
Desc_.BufferDesc.RefreshRate.Denominator = 1;
Desc_.BufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_PROGRESSIVE;
Desc_.BufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;
Desc_.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
Desc_.Flags = 0;
This is the initialization of the DXGI swap chain.

E_FAIL when creating DirectX 10 Device and Swap Chain - _com_error

I am working through some simple DX tutorials and have hit an early snag. I am working on both an old laptop and a new PC, so I'm using d3d10_1.lib, which lets me use the 9 feature set. The PC, however, supports everything up to DX11, so nothing should be a problem there.
So here's the function where it fails:
bool DirectX9Renderer::Initialise(HWND* handle)
{
    //window handle
    hWnd = handle;
    //get window dimensions
    RECT rc;
    GetClientRect( *hWnd, &rc );
    UINT width = rc.right - rc.left;
    UINT height = rc.bottom - rc.top;
    DXGI_SWAP_CHAIN_DESC swapChainDesc;
    ZeroMemory(&swapChainDesc, sizeof(swapChainDesc));
    //set buffer dimensions and format
    swapChainDesc.BufferCount = 2;
    swapChainDesc.BufferDesc.Width = width;
    swapChainDesc.BufferDesc.Height = height;
    swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    swapChainDesc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    //set refresh rate
    swapChainDesc.BufferDesc.RefreshRate.Numerator = 60;
    swapChainDesc.BufferDesc.RefreshRate.Denominator = 1;
    //sampling settings
    swapChainDesc.SampleDesc.Quality = 0;
    swapChainDesc.SampleDesc.Count = 1;
    //output window handle
    swapChainDesc.OutputWindow = *hWnd;
    swapChainDesc.Windowed = true;
    HRESULT result = D3D10CreateDeviceAndSwapChain1( // this is line 57
        NULL,
        D3D10_DRIVER_TYPE_HARDWARE,
        NULL,
        D3D10_CREATE_DEVICE_SINGLETHREADED | D3D10_CREATE_DEVICE_DEBUG,
        D3D10_FEATURE_LEVEL_9_1,
        D3D10_1_SDK_VERSION,
        &swapChainDesc,
        &pSwapChain,
        &pD3DDevice
    );
    if(FAILED(result))
    {
        return FatalError("D3D device creation failed");
    }
    // there's more stuff after this, but I don't get that far
}
So the call to D3D10CreateDeviceAndSwapChain1 fails with the less-helpful error code E_FAIL.
There is a line in the debug output too:
First-chance exception at 0x770f56c4 in TileTest.exe: Microsoft C++ exception: _com_error at memory location 0x00b6e8d4.
I have tried using D3D10_DRIVER_TYPE_REFERENCE and different D3D10_FEATURE_LEVEL_xx values, but it doesn't seem to work.
I think the problem may have been the D3D10_CREATE_DEVICE_FLAG values I passed in. I changed D3D10_CREATE_DEVICE_SINGLETHREADED | D3D10_CREATE_DEVICE_DEBUG to 0 and it now works.
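If the debug flag is the culprit (the debug layer fails device creation when the SDK layers aren't installed), one pattern is to try it and fall back - a sketch reusing swapChainDesc, pSwapChain and pD3DDevice from the function above:
UINT flags = D3D10_CREATE_DEVICE_SINGLETHREADED;
HRESULT result = E_FAIL;
#ifdef _DEBUG
// The debug layer needs the SDK layers installed; if they are missing, creation fails outright.
result = D3D10CreateDeviceAndSwapChain1(NULL, D3D10_DRIVER_TYPE_HARDWARE, NULL,
    flags | D3D10_CREATE_DEVICE_DEBUG, D3D10_FEATURE_LEVEL_9_1, D3D10_1_SDK_VERSION,
    &swapChainDesc, &pSwapChain, &pD3DDevice);
#endif
if (FAILED(result))
{
    // Retry without the debug flag.
    result = D3D10CreateDeviceAndSwapChain1(NULL, D3D10_DRIVER_TYPE_HARDWARE, NULL,
        flags, D3D10_FEATURE_LEVEL_9_1, D3D10_1_SDK_VERSION,
        &swapChainDesc, &pSwapChain, &pD3DDevice);
}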
I tried to create the device inside a VMware virtual machine. It failed (device stayed NULL) until I changed the requested FEATURE_LEVEL from D3D10_FEATURE_LEVEL_10_1 to D3D10_FEATURE_LEVEL_9_3. I've heard this also helps other PCs with real hardware.
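A rough sketch of that idea, again reusing the names from the question: walk the feature levels from highest to lowest and keep the first one that succeeds.
const D3D10_FEATURE_LEVEL1 levels[] =
{
    D3D10_FEATURE_LEVEL_10_1, D3D10_FEATURE_LEVEL_10_0,
    D3D10_FEATURE_LEVEL_9_3, D3D10_FEATURE_LEVEL_9_2, D3D10_FEATURE_LEVEL_9_1
};
HRESULT result = E_FAIL;
for (UINT i = 0; i < ARRAYSIZE(levels) && FAILED(result); ++i)
{
    result = D3D10CreateDeviceAndSwapChain1(NULL, D3D10_DRIVER_TYPE_HARDWARE, NULL, 0,
        levels[i], D3D10_1_SDK_VERSION, &swapChainDesc, &pSwapChain, &pD3DDevice);
}
// result holds the HRESULT of the first level that worked (or the last failure).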

Directx 11 Bitblt Alternative

I have the following function which I am trying to integrate into my Direct3D 11 application. When I use Direct3D 9 everything works fine, but after converting to Direct3D 11 I get a blue screen of death at the BitBlt line (I must be doing something wrong with the HDCs?). I was wondering what the best way would be to convert this code to use Direct3D 11 compatible surfaces instead of HDCs.
Here is the function:
void CFlashDXPlayer::DrawFrame(HDC dc)
{
    if (m_dirtyFlag)
    {
        IViewObject* pViewObject = NULL;
        m_flashInterface->QueryInterface(IID_IViewObject, (LPVOID*) &pViewObject);
        if (pViewObject != NULL)
        {
            // Combine regions
            HRGN unionRgn, first, second = NULL;
            unionRgn = CreateRectRgnIndirect(&m_dirtyRects[0]);
            if (m_dirtyRects.size() >= 2)
                second = CreateRectRgn(0, 0, 1, 1);
            for (std::vector<RECT>::iterator it = m_dirtyRects.begin() + 1; it != m_dirtyRects.end(); ++it)
            {
                // Fill combined region
                first = unionRgn;
                SetRectRgn(second, it->left, it->top, it->right, it->bottom);
                unionRgn = CreateRectRgn(0, 0, 1, 1);
                CombineRgn(unionRgn, first, second, RGN_OR);
                DeleteObject(first);
            }
            if (second)
                DeleteObject(second);
            RECT clipRgnRect; GetRgnBox(unionRgn, &clipRgnRect);
            RECTL clipRect = { 0, 0, m_width, m_height };
            // Fill background
            if (m_transpMode != TMODE_FULL_ALPHA)
            {
                // Set clip region
                SelectClipRgn(dc, unionRgn);
                COLORREF fillColor = GetBackgroundColor();
                HBRUSH fillColorBrush = CreateSolidBrush(fillColor);
                FillRgn(dc, unionRgn, fillColorBrush);
                DeleteObject(fillColorBrush);
                // Draw to main buffer
                HRESULT hr = pViewObject->Draw(DVASPECT_TRANSPARENT, 1, NULL, NULL, NULL, dc, &clipRect, &clipRect, NULL, 0);
                assert(SUCCEEDED(hr));
            }
            else
            {
                if (m_alphaBlackDC == NULL)
                {
                    // Create memory buffers
                    BITMAPINFOHEADER bih = {0};
                    bih.biSize = sizeof(BITMAPINFOHEADER);
                    bih.biBitCount = 32;
                    bih.biCompression = BI_RGB;
                    bih.biPlanes = 1;
                    bih.biWidth = LONG(m_width);
                    bih.biHeight = -LONG(m_height);
                    m_alphaBlackDC = CreateCompatibleDC(dc);
                    m_alphaBlackBitmap = CreateDIBSection(m_alphaBlackDC, (BITMAPINFO*)&bih, DIB_RGB_COLORS, (void**)&m_alphaBlackBuffer, 0, 0);
                    SelectObject(m_alphaBlackDC, m_alphaBlackBitmap);
                    m_alphaWhiteDC = CreateCompatibleDC(dc);
                    m_alphaWhiteBitmap = CreateDIBSection(m_alphaWhiteDC, (BITMAPINFO*)&bih, DIB_RGB_COLORS, (void**)&m_alphaWhiteBuffer, 0, 0);
                    SelectObject(m_alphaWhiteDC, m_alphaWhiteBitmap);
                }
                HRESULT hr;
                HBRUSH fillColorBrush;
                // Render frame twice - against white and against black background to calculate alpha
                SelectClipRgn(m_alphaBlackDC, unionRgn);
                COLORREF blackColor = 0x00000000;
                fillColorBrush = CreateSolidBrush(blackColor);
                FillRgn(m_alphaBlackDC, unionRgn, fillColorBrush);
                DeleteObject(fillColorBrush);
                hr = pViewObject->Draw(DVASPECT_TRANSPARENT, 1, NULL, NULL, NULL, m_alphaBlackDC, &clipRect, &clipRect, NULL, 0);
                assert(SUCCEEDED(hr));
                // White background
                SelectClipRgn(m_alphaWhiteDC, unionRgn);
                COLORREF whiteColor = 0x00FFFFFF;
                fillColorBrush = CreateSolidBrush(whiteColor);
                FillRgn(m_alphaWhiteDC, unionRgn, fillColorBrush);
                DeleteObject(fillColorBrush);
                hr = pViewObject->Draw(DVASPECT_TRANSPARENT, 1, NULL, NULL, NULL, m_alphaWhiteDC, &clipRect, &clipRect, NULL, 0);
                assert(SUCCEEDED(hr));
                // Combine alpha
                for (LONG y = clipRgnRect.top; y < clipRgnRect.bottom; ++y)
                {
                    int offset = y * m_width * 4 + clipRgnRect.left * 4;
                    for (LONG x = clipRgnRect.left; x < clipRgnRect.right; ++x)
                    {
                        BYTE blackRed = m_alphaBlackBuffer[offset];
                        BYTE whiteRed = m_alphaWhiteBuffer[offset];
                        m_alphaBlackBuffer[offset + 3] = 255 - (whiteRed - blackRed);
                        offset += 4;
                    }
                }
                // Blit result to target DC
                BitBlt(dc, clipRgnRect.left, clipRgnRect.top,
                       clipRgnRect.right - clipRgnRect.left,
                       clipRgnRect.bottom - clipRgnRect.top,
                       m_alphaBlackDC, clipRgnRect.left, clipRgnRect.top, SRCCOPY);
            }
            DeleteObject(unionRgn);
            pViewObject->Release();
        }
        m_dirtyFlag = false;
        m_dirtyRects.clear();
        m_dirtyUnionRect.left = m_dirtyUnionRect.top = LONG_MAX;
        m_dirtyUnionRect.right = m_dirtyUnionRect.bottom = -LONG_MAX;
    }
}
The HDC I am passing to this function is created in the following manner:
D3D11_TEXTURE2D_DESC textureDesc;
ZeroMemory(&textureDesc, sizeof(textureDesc));
textureDesc.Width = width;
textureDesc.Height = height;
textureDesc.MipLevels = 1;
textureDesc.ArraySize = 1;
textureDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
textureDesc.SampleDesc.Count = 1;
textureDesc.Usage = D3D11_USAGE_DEFAULT;
textureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
textureDesc.MiscFlags = D3D11_RESOURCE_MISC_GDI_COMPATIBLE;
HRESULT hr = device->CreateTexture2D(&textureDesc, NULL, &m_flashTexture);
HRESULT hResult;
HDC hDC;
IDXGISurface1 *pSurface = NULL;
hResult = m_flashTexture->QueryInterface(__uuidof(IDXGISurface1), (void**)&pSurface);
hResult = pSurface->GetDC(TRUE, &hDC);
assert(SUCCEEDED(hResult));
m_flashPlayer->DrawFrame(hDC);
Any ideas what I am doing wrong? I can't figure out what is going on, or why this causes a blue screen when the Direct3D 9 version doesn't. Is there a better way to do this?
(Also, I've tried updating my drivers and they are all up to date.)
Thank you for the help.
Turns out this was indeed a driver issue. It works without a problem when I set my laptop's graphics card to the Radeon, but when I leave it on switchable it still crashes for some reason, even though it should be selecting the Radeon. I need to keep the graphics mode fixed. Weird, but at least it's not my program, I guess.
Can't really tell from code inspection; I haven't noticed anything blatantly wrong. There certainly should not be any BSOD - that part is a driver bug. What hardware/driver are you running on?
A common reason for driver crashes, though, is illegally writing to some memory area, often by blitting outside of your DC memory. I'd double-check that your regions are not out of bounds and that m_alphaBlackDC is the same size as dc.
I would also highly, highly recommend testing on another non-related GPU (that doesn't share the same hardware architecture).
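One more thing worth checking on the D3D11 side: IDXGISurface1::GetDC hands the surface over to GDI, and it has to be handed back with ReleaseDC before Direct3D uses the texture again. A minimal sketch of that pattern, using the texture and player objects from the question:
IDXGISurface1* pSurface = nullptr;
HRESULT hr = m_flashTexture->QueryInterface(__uuidof(IDXGISurface1), (void**)&pSurface);
if (SUCCEEDED(hr))
{
    HDC hDC = nullptr;
    hr = pSurface->GetDC(TRUE, &hDC);      // TRUE discards the existing Direct3D contents
    if (SUCCEEDED(hr))
    {
        m_flashPlayer->DrawFrame(hDC);     // all GDI drawing happens while GDI owns the surface
        pSurface->ReleaseDC(nullptr);      // nullptr marks the whole surface dirty; required before D3D touches it again
    }
    pSurface->Release();
}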

D3D10CreateDeviceAndSwapChain() always failing with DXGI_ERROR_INVALID_CALL

I'm creating a hidden window and I'm trying to get a pointer to IDXGISwapChain::Present(). The problem is that I can't get a valid Direct3D 10 device, nor a valid swap chain.
HWND hwnd = CreateWindow(TEXT("flhiSTATIC"), TEXT("flh DXGI Window"), WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT, 640, 480, 0, NULL, NULL, 0);
DXGI_SWAP_CHAIN_DESC scd;
ZeroMemory(&scd, sizeof(scd));
scd.BufferCount = 2;
RECT rcWnd;
GetClientRect(hwnd, &rcWnd);
scd.BufferDesc.Width = rcWnd.right - rcWnd.left;
scd.BufferDesc.Height = rcWnd.bottom - rcWnd.top;
scd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; // also tried DXGI_FORMAT_R8G8B8A8_UNORM_SRGB
scd.BufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
scd.BufferDesc.Scaling = DXGI_MODE_SCALING_CENTERED;
scd.BufferDesc.RefreshRate.Numerator = 60;
scd.BufferDesc.RefreshRate.Denominator = 1;
scd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
scd.OutputWindow = hwnd;
scd.SampleDesc.Count = 1;
scd.SampleDesc.Quality = 0;
scd.Windowed = TRUE;
scd.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
pD3D10CreateDeviceAndSwapChain = reinterpret_cast<D3D10CREATEDEVICEANDSWAPCHAIN_PROC *>(GetProcAddress(d3d10, "D3D10CreateDeviceAndSwapChain"));
HRESULT hr = pD3D10CreateDeviceAndSwapChain(NULL /*pAdapter*/, D3D10_DRIVER_TYPE_HARDWARE, NULL, D3D10_CREATE_DEVICE_DEBUG, D3D10_SDK_VERSION, &scd, &pSwapChain, &pDev);
// this guy always fails with 0 in both pSwapChain and pDev...
Any idea what might be wrong with the above code?
I completely forgot about the need to create or assign a pre-existing window class :(
That was the problem; lesson learned - always check the return codes of all the calls.
Edit: I completely forgot to call CreateWindow with a valid window class - either one I'd previously registered or one that's registered by someone else within the current module.
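For anyone hitting the same thing, a minimal sketch of what was missing (the class name here is illustrative, not the one from my project): register a window class first and check that CreateWindow actually returns a window.
WNDCLASSEX wc = {};
wc.cbSize = sizeof(wc);
wc.lpfnWndProc = DefWindowProc;
wc.hInstance = GetModuleHandle(NULL);
wc.lpszClassName = TEXT("flhDXGIWindowClass");     // illustrative class name
if (!RegisterClassEx(&wc))
    return false;                                  // RegisterClassEx failed - check GetLastError()
HWND hwnd = CreateWindow(wc.lpszClassName, TEXT("flh DXGI Window"), WS_OVERLAPPEDWINDOW,
                         CW_USEDEFAULT, CW_USEDEFAULT, 640, 480,
                         NULL, NULL, wc.hInstance, NULL);
if (hwnd == NULL)
    return false;                                  // this is the check the original code skipped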