Direct3D11 screenshot crash - C++

I'm trying to take what is essentially a screenshot (once per second, without saving it to disk) of a Direct3D11 application. The code works fine on my PC (Intel CPU, Radeon GPU) but crashes after a few iterations on two other machines (Intel CPU + Intel integrated GPU, Intel CPU + Nvidia GPU).
void extractBitmap(void* texture) {
    if (texture) {
        ID3D11Texture2D* d3dtex = (ID3D11Texture2D*)texture;
        ID3D11Texture2D* pNewTexture = NULL;
        D3D11_TEXTURE2D_DESC desc;
        d3dtex->GetDesc(&desc);
        desc.BindFlags = 0;
        desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE;
        desc.Usage = D3D11_USAGE_STAGING;
        desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;
        HRESULT hRes = D3D11Device->CreateTexture2D(&desc, NULL, &pNewTexture);
        if (FAILED(hRes)) {
            printCon(std::string("CreateTexture2D FAILED:" + format_error(hRes)).c_str());
            if (hRes == DXGI_ERROR_DEVICE_REMOVED)
                printCon(std::string("DXGI_ERROR_DEVICE_REMOVED -- " + format_error(D3D11Device->GetDeviceRemovedReason())).c_str());
        }
        else {
            if (pNewTexture) {
                D3D11DeviceContext->CopyResource(pNewTexture, d3dtex);
                // Working with the texture
                pNewTexture->Release();
            }
        }
    }
    return;
}
D3D11SwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), reinterpret_cast< void** >(&pBackBuffer));
extractBitmap(pBackBuffer);
pBackBuffer->Release();
Crash log:
CreateTexture2D FAILED:887a0005
DXGI_ERROR_DEVICE_REMOVED -- 887a0020
Once I comment out D3D11DeviceContext->CopyResource(pNewTexture, d3dtex);, the code works fine on all three PCs.

Got it working. I was running my code in a separate thread. After I moved it into the hooked Present(), the application no longer crashed.
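For reference, a minimal sketch of that fix, assuming a Present hook is already installed (the names hookedPresent, oPresent and lastCapture are illustrative, not from the original code):

typedef HRESULT (STDMETHODCALLTYPE *PresentFn)(IDXGISwapChain*, UINT, UINT);
static PresentFn oPresent = nullptr;   // original Present, saved when the hook is installed
static ULONGLONG lastCapture = 0;      // time of the last capture, in milliseconds

HRESULT STDMETHODCALLTYPE hookedPresent(IDXGISwapChain* swapChain, UINT syncInterval, UINT flags)
{
    // Capture at most once per second, on the thread that owns the device.
    ULONGLONG now = GetTickCount64();
    if (now - lastCapture >= 1000) {
        lastCapture = now;
        ID3D11Texture2D* pBackBuffer = nullptr;
        if (SUCCEEDED(swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&pBackBuffer)))) {
            extractBitmap(pBackBuffer);
            pBackBuffer->Release();
        }
    }
    return oPresent(swapChain, syncInterval, flags);
}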

Related

Retrieving ID3D11Texture2D data to be sent over network

I am modifying the desktop duplication API sample kindly provided by Microsoft to capture the screen and send updates over the network to my application. I know how to actually send the data; my problem is getting the data out of the ID3D11Texture2D object.
ID3D11Texture2D* m_AcquiredDesktopImage;
IDXGIResource* desktopResource = nullptr;
DXGI_OUTDUPL_FRAME_INFO FrameInfo;
// Get new frame
HRESULT hr = m_DeskDupl->AcquireNextFrame(500, &FrameInfo, &desktopResource);
// QI for IDXGIResource
hr = desktopResource->QueryInterface(__uuidof(ID3D11Texture2D), reinterpret_cast<void **>(&m_AcquiredDesktopImage));
At this point, I think the screen updates are in m_AcquiredDesktopImage. I need to transmit this data over the wire (as efficiently as possible).
This answer seems to be on the right track, but I'm new to Windows programming, so I need some additional help.
The only solution I can imagine involves IDXGIObject::GetPrivateData.
Private data is not what you are looking for at all; it only exists to attach custom values to D3D objects.
Once you have the ID3D11Texture2D object you need to read the image back from, create a second one with the ID3D11Device, placed in staging usage (get the original description, change the usage, and remove the bind flags).
Then use the ID3D11DeviceContext to copy the texture into your staging one with CopyResource; after that you can use the context's Map and Unmap API to read the image.
Here is a good link that does exactly that. Look for the method SaveTextureToBmp:
[...]
// map the texture
ComPtr<ID3D11Texture2D> mappedTexture;
D3D11_MAPPED_SUBRESOURCE mapInfo;
mapInfo.RowPitch;
hr = d3dContext->Map(
    Texture,
    0,              // Subresource
    D3D11_MAP_READ,
    0,              // MapFlags
    &mapInfo);

if (FAILED(hr)) {
    // If we failed to map the texture, copy it to a staging resource
    if (hr == E_INVALIDARG) {
        D3D11_TEXTURE2D_DESC desc2;
        desc2.Width = desc.Width;
        desc2.Height = desc.Height;
        desc2.MipLevels = desc.MipLevels;
        desc2.ArraySize = desc.ArraySize;
        desc2.Format = desc.Format;
        desc2.SampleDesc = desc.SampleDesc;
        desc2.Usage = D3D11_USAGE_STAGING;
        desc2.BindFlags = 0;
        desc2.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
        desc2.MiscFlags = 0;

        ComPtr<ID3D11Texture2D> stagingTexture;
        hr = d3dDevice->CreateTexture2D(&desc2, nullptr, &stagingTexture);
        if (FAILED(hr)) {
            throw MyException::Make(hr, L"Failed to create staging texture");
        }

        // copy the texture to a staging resource
        d3dContext->CopyResource(stagingTexture.Get(), Texture);

        // now, map the staging resource
        hr = d3dContext->Map(
            stagingTexture.Get(),
            0,
            D3D11_MAP_READ,
            0,
            &mapInfo);
        if (FAILED(hr)) {
            throw MyException::Make(hr, L"Failed to map staging texture");
        }

        mappedTexture = std::move(stagingTexture);
    } else {
        throw MyException::Make(hr, L"Failed to map texture.");
    }
} else {
    mappedTexture = Texture;
}
auto unmapResource = Finally([&] {
    d3dContext->Unmap(mappedTexture.Get(), 0);
});

[...]

hr = frameEncode->WritePixels(
    desc.Height,
    mapInfo.RowPitch,
    desc.Height * mapInfo.RowPitch,
    reinterpret_cast<BYTE*>(mapInfo.pData));
if (FAILED(hr)) {
    throw MyException::Make(hr, L"frameEncode->WritePixels(...) failed.");
}
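If the goal is to transmit the raw pixels rather than encode them, one approach (a sketch under the same assumptions as the excerpt above; bytesPerPixel, the std::vector and the required <vector>/<cstring> includes are mine) is to copy each mapped row into a tightly packed buffer, because RowPitch is often larger than width * bytesPerPixel:

const BYTE* src = reinterpret_cast<const BYTE*>(mapInfo.pData);
const UINT bytesPerPixel = 4;                        // e.g. DXGI_FORMAT_B8G8R8A8_UNORM
const UINT packedRow = desc.Width * bytesPerPixel;   // tightly packed row size
std::vector<BYTE> packed(desc.Height * packedRow);   // buffer to send over the wire
for (UINT row = 0; row < desc.Height; ++row) {
    memcpy(packed.data() + row * packedRow, src + row * mapInfo.RowPitch, packedRow);
}
// packed.data() / packed.size() now hold the frame without per-row padding.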

DXGI_FORMAT_YUY2 textures return different RowPitch under Windows 8.1 and Windows 10

My build environment is as follows:
Windows 8.1, VS2012, desktop application built using the Windows 8.0 SDK and C++.
When I run my program on Windows 8.1, RowPitch prints 2560, but under Windows 10 the same program prints 5120.
What am I doing wrong here?
Here is the code. Thanks for all the replies.
#include <d3d11.h>

static bool init_directx11(ID3D11Device **pDevice, ID3D11DeviceContext **pDeviceContext)
{
    D3D_FEATURE_LEVEL featureLevels[] = {D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0, D3D_FEATURE_LEVEL_9_1};
    D3D_FEATURE_LEVEL featureLevel;
    UINT createDeviceFlags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;
    HRESULT hr = D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, createDeviceFlags, featureLevels, ARRAYSIZE(featureLevels), D3D11_SDK_VERSION, pDevice,
                                   &featureLevel, pDeviceContext);
    return SUCCEEDED(hr);
}

int _tmain(int argc, _TCHAR* argv[])
{
    ID3D11Device *pDevice = nullptr;
    ID3D11DeviceContext *pDeviceContext = nullptr;
    if (!init_directx11(&pDevice, &pDeviceContext))
    {
        return FALSE;
    }

    D3D11_TEXTURE2D_DESC desc;
    ZeroMemory(&desc, sizeof(D3D11_TEXTURE2D_DESC));
    desc.ArraySize = 1;
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
    desc.Format = DXGI_FORMAT_YUY2;
    desc.MipLevels = 1;
    desc.MiscFlags = 0;
    desc.SampleDesc.Count = 1;
    desc.SampleDesc.Quality = 0;
    desc.Usage = D3D11_USAGE_DYNAMIC;
    desc.Width = 1280;
    desc.Height = 720;

    ID3D11Texture2D* pTexture2D = nullptr;
    HRESULT hr = pDevice->CreateTexture2D(&desc, NULL, &pTexture2D);

    D3D11_MAPPED_SUBRESOURCE mappedResource;
    ZeroMemory(&mappedResource, sizeof(DXGI_MAPPED_RECT));
    hr = pDeviceContext->Map(pTexture2D, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
    printf("RowPitch = %d\n", mappedResource.RowPitch);
    pDeviceContext->Unmap(pTexture2D, 0);

    pTexture2D->Release();
    pDeviceContext->Release();
    pDevice->Release();
    getchar();
}
What am I doing wrong here?
This is not necessarily wrong. RowPitch depends on the layout the hardware and driver assign to the texture, so it can legitimately differ between drivers and OS versions. For a 1280-pixel-wide YUY2 texture the tightly packed row is 1280 * 2 = 2560 bytes, but a driver is free to report a larger pitch such as 5120. You are supposed to read the pitch back once the resource is mapped, and use it accordingly when you read or write the data.
See this thread and message for a code snippet that uses the pitch:
The texture resource will have its own pitch (the number of bytes in a row), which is probably different from the pitch of your source data. This pitch is given to you as the "RowPitch" member of D3D11_MAPPED_SUBRESOURCE. So typically you do something like this:
BYTE* mappedData = reinterpret_cast<BYTE*>(mappedResource.pData);
for (UINT i = 0; i < height; ++i)
{
    memcpy(mappedData, buffer, rowspan);
    mappedData += mappedResource.RowPitch;
    buffer += rowspan;
}

CheckMultisampleQualityLevels(...) says the card does not support MSAA (which is not true for e.g. my GeForce GTX 780)?

I use CheckMultisampleQualityLevels(...) to establish the MSAA support of my hardware. I do it in this order:
D3D11CreateDevice(...) gives me device
device->CheckMultisampleQualityLevels(...)
Pass results to DXGI_SWAP_CHAIN_DESC.SampleDesc
CreateSwapChain(...) with given DXGI_SWAP_CHAIN_DESC
The problem is, CheckMultisampleQualityLevels(...) always gives me 0 for pNumQualityLevels. And I'm sure that my graphics card supports some MSAA (I've tested the program on a GeForce GTX 780 and others with the same result).
Did I miss something? Should I call something else before CheckMultisampleQualityLevels(...)?
The code:
Create device:
UINT createDeviceFlags = 0;
#ifdef DEBUG_DIRECTX
createDeviceFlags |= D3D11_CREATE_DEVICE_DEBUG;
#endif
D3D_DRIVER_TYPE driverTypes[] = {
    D3D_DRIVER_TYPE_HARDWARE,
    D3D_DRIVER_TYPE_WARP,
    D3D_DRIVER_TYPE_REFERENCE,
};
std::string driverTypesNames[] = {
    "D3D_DRIVER_TYPE_HARDWARE",
    "D3D_DRIVER_TYPE_WARP",
    "D3D_DRIVER_TYPE_REFERENCE",
};
UINT numDriverTypes = ARRAYSIZE(driverTypes);

D3D_FEATURE_LEVEL featureLevels[] = {
    D3D_FEATURE_LEVEL_11_0,
    D3D_FEATURE_LEVEL_10_1,
    D3D_FEATURE_LEVEL_10_0,
};
std::string featureLevelsNames[] = {
    "D3D_FEATURE_LEVEL_11_0",
    "D3D_FEATURE_LEVEL_10_1",
    "D3D_FEATURE_LEVEL_10_0",
};
UINT numFeatureLevels = ARRAYSIZE(featureLevels);
D3D_FEATURE_LEVEL g_featureLevel = D3D_FEATURE_LEVEL_11_0;

for (UINT driverTypeIndex = 0; driverTypeIndex < numDriverTypes; driverTypeIndex++) {
    driverType = driverTypes[driverTypeIndex];
    result = D3D11CreateDevice(NULL, driverType, NULL, createDeviceFlags, featureLevels, numFeatureLevels, D3D11_SDK_VERSION, &device, &g_featureLevel, &context);
    if (SUCCEEDED(result)) {
        LOG(logDEBUG1, "Driver type: " << driverTypesNames[driverTypeIndex] << ".", MOD_GRAPHIC);
        break;
    }
}
ERROR_HANDLE(SUCCEEDED(result), L"Could not create device (DirectX 11).", MOD_GRAPHIC);
Check multi-sample quality levels (based on vertexwahn.de article):
sampleCountOut = 1;
maxQualityLevelOut = 0;
for (UINT sampleCount = 1; sampleCount <= D3D11_MAX_MULTISAMPLE_SAMPLE_COUNT; sampleCount++) {
    UINT maxQualityLevel = 0;
    HRESULT hr = device->CheckMultisampleQualityLevels(DXGI_FORMAT_R8G8B8A8_UNORM, sampleCount, &maxQualityLevel);
    if (maxQualityLevel > 0) {
        maxQualityLevel--;
    }
    ERROR_HANDLE(hr == S_OK, L"CheckMultisampleQualityLevels failed.", MOD_GRAPHIC);
    if (maxQualityLevel > 0) {
        LOG(logDEBUG1, "MSAA " << sampleCount << "X supported with " << maxQualityLevel << " quality levels.", MOD_GRAPHIC);
        sampleCountOut = sampleCount;
        maxQualityLevelOut = maxQualityLevel;
    }
}
Swap chain:
DXGI_SWAP_CHAIN_DESC sd;
ZeroMemory(&sd, sizeof(sd));
sd.BufferCount = 1;
sd.BufferDesc.Width = width;
sd.BufferDesc.Height = height;
sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
sd.BufferDesc.RefreshRate.Numerator = 60;
sd.BufferDesc.RefreshRate.Denominator = 1;
sd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
sd.OutputWindow = *hwnd;
sd.SampleDesc.Count = sampleCount;
sd.SampleDesc.Quality = maxQualityLevel;
sd.Windowed = false;
sd.Flags = DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH; // allow full-screen switching
//based on http://stackoverflow.com/questions/27270504/directx-creating-the-swapchain
IDXGIDevice * dxgiDevice = 0;
HRESULT hr = device->QueryInterface(__uuidof(IDXGIDevice), (void **)& dxgiDevice);
ERROR_HANDLE(SUCCEEDED(hr), L"Query for IDXGIDevice failed.", MOD_GRAPHIC);
IDXGIAdapter * dxgiAdapter = 0;
hr = dxgiDevice->GetParent(__uuidof(IDXGIAdapter), (void **)& dxgiAdapter);
ERROR_HANDLE(SUCCEEDED(hr), L"Could not get IDXGIAdapter.", MOD_GRAPHIC);
IDXGIFactory * dxgiFactory = 0;
hr = dxgiAdapter->GetParent(__uuidof(IDXGIFactory), (void **)& dxgiFactory);
ERROR_HANDLE(SUCCEEDED(hr), L"Could not get IDXGIFactory.", MOD_GRAPHIC);
// This system only has DirectX 11.0 installed (let's assume it)
result = dxgiFactory->CreateSwapChain(device, &sd, &swapChain);
LOG(logDEBUG1, "This system only has DirectX 11.0 installed. CreateSwapChain(...) used.", MOD_GRAPHIC);
ERROR_HANDLE(result == S_OK, L"Could not create swap chain.", MOD_GRAPHIC);
My ERROR_HANDLE(...) macro never triggers (the first parameter is true in all cases). The log says I use D3D_DRIVER_TYPE_HARDWARE for driver type.
The DirectX debug layer says the following (which indicates some problem, but I don't think it's the reason CheckMultisampleQualityLevels(...) gives me the wrong results):
DXGI WARNING: IDXGISwapChain::Present: Fullscreen presentation inefficiencies incurred due to application not using IDXGISwapChain::ResizeBuffers appropriately, specifying a DXGI_MODE_DESC not available in IDXGIOutput::GetDisplayModeList, or not using DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH.DXGI_SWAP_CHAIN_DESC::BufferDesc = { 1600, 900, { 60, 1 }, R8G8B8A8_UNORM, 0, 0 }; DXGI_SWAP_CHAIN_DESC::SampleDesc = { 8, 0 }; DXGI_SWAP_CHAIN_DESC::Flags = 0x2; [ MISCELLANEOUS WARNING #98: ]
Your code subtracts 1 from maxQualityLevel before checking whether it's greater than zero, so a returned count of 1, which means it is valid to create the target at quality level 0, ends up treated as unsupported.
Assuming you want this to work across vendors, you only really need to check that the returned count is > 0 and then just create the surface at Quality = 0.
Quality levels > 0 are vendor specific and can mean any number of things on different GPUs. Nvidia's CSAA and AMD's EQAA are both exposed through non-zero quality levels, but you would need to look at the vendors' own documentation to figure out what each quality level actually means, and they are functionally slightly different from traditional MSAA. "Quality" is a little misleading in that a greater number doesn't necessarily mean greater quality; it would be more appropriate to call it "Mode".
See both:
http://www.nvidia.com/object/coverage-sampled-aa.html
and
http://developer.amd.com/wordpress/media/2012/10/EQAA%2520Modes%2520for%2520AMD%2520HD%25206900%2520Series%2520Cards.pdf
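Putting that advice together, a sketch of the corrected check (reusing the question's variable names; the comments are mine) would be:

sampleCountOut = 1;
maxQualityLevelOut = 0;
for (UINT sampleCount = 1; sampleCount <= D3D11_MAX_MULTISAMPLE_SAMPLE_COUNT; sampleCount++) {
    UINT numQualityLevels = 0;
    HRESULT hr = device->CheckMultisampleQualityLevels(DXGI_FORMAT_R8G8B8A8_UNORM, sampleCount, &numQualityLevels);
    // A count of N means quality levels 0 .. N-1 are valid for this sample count.
    if (SUCCEEDED(hr) && numQualityLevels > 0) {
        sampleCountOut = sampleCount;   // keep the highest supported sample count
        maxQualityLevelOut = 0;         // portable choice: always create at Quality = 0
    }
}
// Then feed sampleCountOut / maxQualityLevelOut into DXGI_SWAP_CHAIN_DESC::SampleDesc.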

Creating a 1 pixel Texture Issue.

I am trying to make a texture of 1 pixel whose color is a variable passed to the function, and I have the following code:
unsigned char texArray[4];
texArray[0] = (unsigned char) color.x;
texArray[1] = (unsigned char) color.y;
texArray[2] = (unsigned char) color.z;
texArray[3] = (unsigned char) color.w;
ID3D11Texture2D *pTexture = nullptr;
ID3D11ShaderResourceView* pShaderResourceView;
D3D11_TEXTURE2D_DESC texDesc;
ZeroMemory(&texDesc, sizeof(D3D11_TEXTURE2D_DESC));
texDesc.ArraySize = 1;
texDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
texDesc.CPUAccessFlags = 0;
texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
texDesc.MipLevels = 1;
texDesc.MiscFlags = 0;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D11_USAGE_DEFAULT;
texDesc.Height = 1;
texDesc.Width = 1;
D3D11_SUBRESOURCE_DATA texInitData;
ZeroMemory(&texInitData, sizeof(D3D11_SUBRESOURCE_DATA));
texInitData.pSysMem = texArray;
HRESULT hr;
hr = m_pDevice->CreateTexture2D(&texDesc, &texInitData, &pTexture);
hr = m_pDevice->CreateShaderResourceView(pTexture, NULL, &pShaderResourceView);
But CreateTexture2D fails (pTexture stays nullptr) and hr contains "The parameter is incorrect".
What is wrong/missing?
Since you are creating a 2D texture (even though it is just 1x1 pixel in this case), you need to specify the SysMemPitch value in texInitData. Set it to sizeof(unsigned char) * 4 here, since the next row would begin after that many bytes if there were another row.
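A minimal sketch of that fix against the question's code (only the SysMemPitch line is new):

D3D11_SUBRESOURCE_DATA texInitData;
ZeroMemory(&texInitData, sizeof(D3D11_SUBRESOURCE_DATA));
texInitData.pSysMem = texArray;
texInitData.SysMemPitch = 4 * sizeof(unsigned char); // bytes from the start of one row to the next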
It's good practice to turn on the debug flag when creating the D3D device; you will then get more information from Direct3D in Visual Studio's output window when running in debug mode.
UINT flags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;
#if defined( DEBUG ) || defined( _DEBUG )
flags |= D3D11_CREATE_DEVICE_DEBUG;
#endif
HRESULT hr;
if (FAILED(hr = D3D11CreateDeviceAndSwapChain(NULL,
                                              D3D_DRIVER_TYPE_HARDWARE,
                                              NULL,
                                              flags,
                                              &FeatureLevelsRequested,
                                              numLevelsRequested,
                                              D3D11_SDK_VERSION,
                                              &sd,
                                              &g_pSwapChain,
                                              &g_pd3dDevice,
                                              &FeatureLevelsSupported,
                                              &g_pImmediateContext)))
{
    return hr;
}
Here is what I got from your code; you can easily see what's wrong from the message:
D3D11 ERROR: ID3D11Device::CreateTexture2D: pInitialData[0].SysMemPitch cannot be 0 [ STATE_CREATION ERROR #100: CREATETEXTURE2D_INVALIDINITIALDATA]
First-chance exception at 0x74891EE9 in Teapot.exe: Microsoft C++ exception: _com_error at memory location 0x00E4F668.
First-chance exception at 0x74891EE9 in Teapot.exe: Microsoft C++ exception: _com_error at memory location 0x00E4F668.
D3D11 ERROR: ID3D11Device::CreateTexture2D: Returning E_INVALIDARG, meaning invalid parameters were passed. [ STATE_CREATION ERROR #104: CREATETEXTURE2D_INVALIDARG_RETURN]

DirectX 11 IDXGISwapChain::GetBuffer failing with DXGI_ERROR_INVALID_CALL

I am creating a device and swap chain in DirectX11, then trying to get the texture of the back-buffer. The creation step appears to work but the GetBuffer call always fails with error DXGI_ERROR_INVALID_CALL (887a0001), regardless of what I do.
Here is the code for creating the device:
D3D_FEATURE_LEVEL featureLevels[] = { D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0, D3D_FEATURE_LEVEL_9_1 };
int numFeatureLevels = sizeof(featureLevels) / sizeof(featureLevels[0]);
DXGI_SWAP_CHAIN_DESC swapChainDesc;
ID3D11Device* pDevice;
IDXGISwapChain* pSwapChain;
D3D_FEATURE_LEVEL featureLevel;
ID3D11DeviceContext* pContext;
swapChainDesc.Windowed = TRUE;
swapChainDesc.OutputWindow = (HWND)(void*)pWindowHandle;
swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_SEQUENTIAL;
swapChainDesc.SampleDesc.Count = 1;
swapChainDesc.SampleDesc.Quality = 0;
swapChainDesc.BufferCount = 2;
swapChainDesc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
swapChainDesc.BufferDesc.Width = 0;
swapChainDesc.BufferDesc.Height = 0;
swapChainDesc.BufferDesc.RefreshRate.Numerator = 1;
swapChainDesc.BufferDesc.RefreshRate.Denominator = 60;
swapChainDesc.BufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;
swapChainDesc.BufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
swapChainDesc.BufferUsage = DXGI_USAGE_SHADER_INPUT|DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChainDesc.Flags = 0;
err = D3D11CreateDeviceAndSwapChain(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, D3D11_CREATE_DEVICE_DEBUG, featureLevels, numFeatureLevels, D3D11_SDK_VERSION,
&swapChainDesc, &pSwapChain, &pDevice, &featureLevel, &pContext);
if (!SUCCEEDED(err))
{
    printf("D3D11CreateDeviceAndSwapChain failed with error %08x\n", err);
    return false;
}
m_pDevice = pDevice;
m_pSwapChain = pSwapChain;
m_pDeviceContext = pContext;
m_featureLevel = featureLevel;
ID3D11Texture2D* pTex = NULL;
err = m_pSwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)pTex);
if (!SUCCEEDED(err))
{
    printf("GetBuffer failed with error %08x\n", err);
    return false;
}
This is Managed C++ which is compiled into a DLL and run from a C# control's OnCreateControl method, which passes Handle into the function as pWindowHandle.
The create device call succeeds, giving me FEATURE_LEVEL_11_0, but the second printf function is always printing error 887a0001. Using the reference device does not help. I'm linking against d3dx11.lib, d3d11.lib, dxgi.lib, dxguid.lib, d3dcompiler.lib, d3d10.lib and d3dx10.lib.
I tried replacing the __uuidof with IID_ID3D11Texture2D and that made no difference.
I am using Visual Studio Express 2013 for Windows Desktop, on Windows 7, and the Microsoft DirectX SDK (June 2010). All x86 and x64, Debug and Release builds suffer from the same problem. My attempts to enable verbose debug output also fail; I have tried to force it on via the DirectX Properties in Control Panel, adding my program to the list of executables, but nothing extra is printed at runtime.
You've just passed the final argument incorrectly. Instead of this:
err = m_pSwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)pTex);
You should be passing a pointer to the interface pointer, like this:
err = m_pSwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)&pTex);
Note the added & so that GetBuffer receives the address of pTex rather than its (uninitialized) value.