Convert IMFSample* to ID3D11ShaderResourceView* - c++

I am new to DirectX and I am trying to write a simple application that reads a video and displays it on a quad.
I read the video using Windows Media Foundation (IMFSourceReader), which sends me a callback when a sample is decoded (IMFSample).
I want to convert this IMFSample* to an ID3D11ShaderResourceView* in order to use it as a texture to draw my quad; however, the conversion fails.
Here is what I do (I removed irrelevant error checks):
HRESULT SourceReaderCB::OnReadSample(HRESULT hrStatus, DWORD dwStreamIndex, DWORD dwStreamFlags, LONGLONG llTimestamp, IMFSample *pSample)
{
...
DWORD NumBuffers = 0;
hr = pSample->GetBufferCount(&NumBuffers);
if (FAILED(hr) || NumBuffers < 1)
{
...
}
IMFMediaBuffer* SourceMediaPtr = nullptr;
hr = pSample->GetBufferByIndex(0, &SourceMediaPtr);
if (FAILED(hr))
{
...
}
ComPtr<IMFMediaBuffer> _pInputBuffer = SourceMediaPtr;
ComPtr<IMF2DBuffer2> _pInputBuffer2D2;
bool isVideoFrame = (_pInputBuffer.As(&_pInputBuffer2D2) == S_OK);
if (isVideoFrame)
{
IMFDXGIBuffer* pDXGIBuffer = NULL;
ID3D11Texture2D* pSurface = NULL;
hr = _pInputBuffer->QueryInterface(__uuidof(IMFDXGIBuffer), (LPVOID*)&pDXGIBuffer);
if (FAILED(hr))
{
SafeRelease(&SourceMediaPtr);
goto done;
}
hr = pDXGIBuffer->GetResource(__uuidof(ID3D11Texture2D), (LPVOID*)&pSurface);
if (FAILED(hr))
{
...
}
ID3D11ShaderResourceView* resourceView;
if (pSurface)
{
D3D11_TEXTURE2D_DESC textureDesc;
pSurface->GetDesc(&textureDesc);
D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;
shaderResourceViewDesc.Format = DXGI_FORMAT_R8_UNORM;
shaderResourceViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
shaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
shaderResourceViewDesc.Texture2D.MipLevels = 1;
hr = d3d11device->CreateShaderResourceView(pSurface, &shaderResourceViewDesc, &resourceView);
if (FAILED(hr))
{
... // CODE FAILS HERE
}
...
}
}
}
My first issue is that I set shaderResourceViewDesc.Format to DXGI_FORMAT_R8_UNORM, which will probably just give me a red image (I will have to investigate this later).
The second and blocking issue I am facing is that the conversion of the ID3D11Texture2D to an ID3D11ShaderResourceView fails with the following error message:
ID3D11Device::CreateShaderResourceView: A ShaderResourceView cannot be created of a Resource that did not specify the D3D11_BIND_SHADER_RESOURCE BindFlag. [ STATE_CREATION ERROR #129: CREATESHADERRESOURCEVIEW_INVALIDRESOURCE]
I understand that there is a flag missing at the creation of the texture that prevents me from doing what I want, but as the data buffer is created by WMF, I am not sure what I am supposed to do to fix this issue.
Thanks for your help

I have looked at your code, and I can say that your approach is wrong - no offense. The video decoder creates a simple texture - in your situation a DirectX 11 texture - and it is a regular texture: it is not a shader resource, so it cannot be used in shader code. In my view, there are two ways to resolve your task:
1. Research "Walkthrough: Using MF to render video in a Direct3D app" - this link presents the approach from "Walkthrough: Using Microsoft Media Foundation for Windows Phone 8". From your code I see that you are trying to write a solution for Windows Store / UWP, and the Windows Phone code is workable there. It needs MediaEnginePlayer - a helper class that wraps the MF APIs;
2. Find Windows-classic-samples on GitHub and look for DX11VideoRenderer in it - this is the full code of a Media Foundation renderer with DirectX 11. It includes a very good example of using the DirectX 11 Video Processor, which blits the regular video texture from the decoder onto the rendering texture of the swap chain:
2.1. Get rendering texture from Swap Chain:
// Get Backbuffer
hr = m_pSwapChain1->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)&pDXGIBackBuffer);
if (FAILED(hr))
{
break;
}
2.2. Create from rendering texture output view of video processor:
//
// Create Output View of Output Surfaces.
//
D3D11_VIDEO_PROCESSOR_OUTPUT_VIEW_DESC OutputViewDesc;
ZeroMemory( &OutputViewDesc, sizeof( OutputViewDesc ) );
if (m_b3DVideo && m_bStereoEnabled)
{
OutputViewDesc.ViewDimension = D3D11_VPOV_DIMENSION_TEXTURE2DARRAY;
}
else
{
OutputViewDesc.ViewDimension = D3D11_VPOV_DIMENSION_TEXTURE2D;
}
OutputViewDesc.Texture2D.MipSlice = 0;
OutputViewDesc.Texture2DArray.MipSlice = 0;
OutputViewDesc.Texture2DArray.FirstArraySlice = 0;
if (m_b3DVideo && 0 != m_vp3DOutput)
{
OutputViewDesc.Texture2DArray.ArraySize = 2; // STEREO
}
QueryPerformanceCounter(&lpcStart);
hr = m_pDX11VideoDevice->CreateVideoProcessorOutputView(pDXGIBackBuffer, m_pVideoProcessorEnum, &OutputViewDesc, &pOutputView);
2.3. Create from regular decoder video texture input view for video processor:
D3D11_VIDEO_PROCESSOR_INPUT_VIEW_DESC InputLeftViewDesc;
ZeroMemory( &InputLeftViewDesc, sizeof( InputLeftViewDesc ) );
InputLeftViewDesc.FourCC = 0;
InputLeftViewDesc.ViewDimension = D3D11_VPIV_DIMENSION_TEXTURE2D;
InputLeftViewDesc.Texture2D.MipSlice = 0;
InputLeftViewDesc.Texture2D.ArraySlice = dwLeftViewIndex;
hr = m_pDX11VideoDevice->CreateVideoProcessorInputView(pLeftTexture2D, m_pVideoProcessorEnum, &InputLeftViewDesc, &pLeftInputView);
if (FAILED(hr))
{
break;
}
2.4. Do blitting of regular decoder video texture on rendering texture from Swap Chain:
D3D11_VIDEO_PROCESSOR_STREAM StreamData;
ZeroMemory( &StreamData, sizeof( StreamData ) );
StreamData.Enable = TRUE;
StreamData.OutputIndex = 0;
StreamData.InputFrameOrField = 0;
StreamData.PastFrames = 0;
StreamData.FutureFrames = 0;
StreamData.ppPastSurfaces = NULL;
StreamData.ppFutureSurfaces = NULL;
StreamData.pInputSurface = pLeftInputView;
StreamData.ppPastSurfacesRight = NULL;
StreamData.ppFutureSurfacesRight = NULL;
if (m_b3DVideo && MFVideo3DSampleFormat_MultiView == m_vp3DOutput && pRightTexture2D)
{
StreamData.pInputSurfaceRight = pRightInputView;
}
hr = pVideoContext->VideoProcessorBlt(m_pVideoProcessor, pOutputView, 0, 1, &StreamData );
if (FAILED(hr))
{
break;
}
Yes, these are sections of complex code, and you will need to study the whole DX11VideoRenderer project to understand them - that will take a considerable amount of time.
Regards,
Evgeny Pereguda

The debug output suggests that the texture is not compatible: it was created without the D3D11_BIND_SHADER_RESOURCE flag (specified in the BindFlags field of the D3D11_TEXTURE2D_DESC structure).
You are reading a texture already created by a Media Foundation primitive. In some cases you can alter the creation flags, but in the general case you need to create a compatible texture of your own, copy the data between the textures, and then call CreateShaderResourceView with your texture as the argument rather than the original one.
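A minimal sketch of that general approach, reusing d3d11device and pSurface from the question's code; the immediate-context name d3d11context is an assumption, not part of the original:

```cpp
// Describe a new texture matching the decoder output, but shader-visible.
D3D11_TEXTURE2D_DESC desc;
pSurface->GetDesc(&desc);
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;  // the flag the MF texture lacks
desc.CPUAccessFlags = 0;
desc.MiscFlags = 0;

ID3D11Texture2D* pShaderTexture = nullptr;
HRESULT hr = d3d11device->CreateTexture2D(&desc, nullptr, &pShaderTexture);
if (SUCCEEDED(hr))
{
    // GPU-side copy; both textures have identical dimensions and format.
    d3d11context->CopyResource(pShaderTexture, pSurface);

    ID3D11ShaderResourceView* resourceView = nullptr;
    hr = d3d11device->CreateShaderResourceView(pShaderTexture, nullptr, &resourceView);
}
```

Note that decoded frames are often NV12, in which case a view of the whole texture cannot be created with a NULL description; you would instead create one SRV per plane (DXGI_FORMAT_R8_UNORM for luma, DXGI_FORMAT_R8G8_UNORM for chroma) and combine them in the pixel shader.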

Related

DesktopDuplication API produces black frames while certain applications are in fullscreen mode

I'm building an application that is used for taking and sharing screenshots in real time between multiple clients over network.
I'm using the MS Desktop Duplication API to get the image data and it's working smoothly except in some edge cases.
I have been using four games as test applications in order to test how the screen capture behaves in fullscreen: Heroes of the Storm, Rainbow Six Siege, Counter-Strike, and PlayerUnknown's Battlegrounds.
On my own machine, which has a GeForce GTX 1070 graphics card, everything works fine both in and out of fullscreen for all test applications. On two other machines that run a GeForce GTX 980, however, all test applications except PUBG work. When PUBG is running in fullscreen, my desktop duplication instead produces an all-black image and I can't figure out why, as the Desktop Duplication sample works fine on all test machines with all test applications.
What I'm doing is basically the same as the sample except I'm extracting the pixel data and creating my own SDL(OpenGL) texture from that data instead of using the acquired ID3D11Texture2D directly.
Why is PUBG in fullscreen on GTX 980 the only test case that fails?
Is there something wrong with the way I'm getting the frame, handling the "DXGI_ERROR_ACCESS_LOST" error or how I'm copying the data from the GPU?
Declarations:
IDXGIOutputDuplication* m_OutputDup = nullptr;
Microsoft::WRL::ComPtr<ID3D11Device> m_Device = nullptr;
ID3D11DeviceContext* m_DeviceContext = nullptr;
D3D11_TEXTURE2D_DESC m_TextureDesc;
Initialization:
bool InitializeScreenCapture()
{
HRESULT result = E_FAIL;
if (!m_Device)
{
D3D_FEATURE_LEVEL featureLevels = D3D_FEATURE_LEVEL_11_0;
D3D_FEATURE_LEVEL featureLevel;
result = D3D11CreateDevice(
nullptr,
D3D_DRIVER_TYPE_HARDWARE,
nullptr,
0,
&featureLevels,
1,
D3D11_SDK_VERSION,
&m_Device,
&featureLevel,
&m_DeviceContext);
if (FAILED(result) || !m_Device)
{
Log("Failed to create D3DDevice");
return false;
}
}
// Get DXGI device
ComPtr<IDXGIDevice> DxgiDevice;
result = m_Device.As(&DxgiDevice);
if (FAILED(result))
{
Log("Failed to get DXGI device");
return false;
}
// Get DXGI adapter
ComPtr<IDXGIAdapter> DxgiAdapter;
result = DxgiDevice->GetParent(__uuidof(IDXGIAdapter), &DxgiAdapter);
if (FAILED(result))
{
Log("Failed to get DXGI adapter");
return false;
}
DxgiDevice.Reset();
// Get output
UINT Output = 0;
ComPtr<IDXGIOutput> DxgiOutput;
result = DxgiAdapter->EnumOutputs(Output, &DxgiOutput);
if (FAILED(result))
{
Log("Failed to get DXGI output");
return false;
}
DxgiAdapter.Reset();
ComPtr<IDXGIOutput1> DxgiOutput1;
result = DxgiOutput.As(&DxgiOutput1);
if (FAILED(result))
{
Log("Failed to get DXGI output1");
return false;
}
DxgiOutput.Reset();
// Create desktop duplication
result = DxgiOutput1->DuplicateOutput(m_Device.Get(), &m_OutputDup);
if (FAILED(result))
{
Log("Failed to create output duplication");
return false;
}
DxgiOutput1.Reset();
DXGI_OUTDUPL_DESC outputDupDesc;
m_OutputDup->GetDesc(&outputDupDesc);
// Create CPU access texture description
m_TextureDesc.Width = outputDupDesc.ModeDesc.Width;
m_TextureDesc.Height = outputDupDesc.ModeDesc.Height;
m_TextureDesc.Format = outputDupDesc.ModeDesc.Format;
m_TextureDesc.ArraySize = 1;
m_TextureDesc.BindFlags = 0;
m_TextureDesc.MiscFlags = 0;
m_TextureDesc.SampleDesc.Count = 1;
m_TextureDesc.SampleDesc.Quality = 0;
m_TextureDesc.MipLevels = 1;
m_TextureDesc.CPUAccessFlags = D3D11_CPU_ACCESS_FLAG::D3D11_CPU_ACCESS_READ;
m_TextureDesc.Usage = D3D11_USAGE::D3D11_USAGE_STAGING;
return true;
}
Screen capture:
bool TeamSystem::CaptureScreen()
{
if (!m_ScreenCaptureInitialized)
{
Log("Attempted to capture screen without ScreenCapture being initialized");
return false;
}
HRESULT result = E_FAIL;
DXGI_OUTDUPL_FRAME_INFO frameInfo;
ComPtr<IDXGIResource> desktopResource = nullptr;
ID3D11Texture2D* copyTexture = nullptr;
ComPtr<ID3D11Resource> image;
int32_t attemptCounter = 0;
DWORD startTicks = GetTickCount();
do // Loop until we get a non empty frame
{
m_OutputDup->ReleaseFrame();
result = m_OutputDup->AcquireNextFrame(1000, &frameInfo, &desktopResource);
if (FAILED(result))
{
if (result == DXGI_ERROR_ACCESS_LOST) // Access may be lost when switching from/to fullscreen mode (any application); when this happens we need to reacquire the output duplication
{
m_OutputDup->ReleaseFrame();
m_OutputDup->Release();
m_OutputDup = nullptr;
m_ScreenCaptureInitialized = InitializeScreenCapture();
if (m_ScreenCaptureInitialized)
{
result = m_OutputDup->AcquireNextFrame(1000, &frameInfo, &desktopResource);
}
else
{
Log("Failed to reinitialize screen capture after access was lost");
return false;
}
}
if (FAILED(result))
{
Log("Failed to acquire next frame");
return false;
}
}
attemptCounter++;
if (GetTickCount() - startTicks > 3000)
{
Log("Screencapture timed out after " << attemptCounter << " attempts");
return false;
}
} while(frameInfo.TotalMetadataBufferSize <= 0 || frameInfo.LastPresentTime.QuadPart <= 0); // This is how you wait for an image containing image data according to SO (https://stackoverflow.com/questions/49481467/acquirenextframe-not-working-desktop-duplication-api-d3d11)
Log("ScreenCapture succeeded after " << attemptCounter << " attempt(s)");
// Query the ID3D11Texture2D interface from the desktop resource
result = desktopResource->QueryInterface(__uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&copyTexture));
desktopResource->Release();
desktopResource = nullptr;
if (FAILED(result))
{
Log("Failed to acquire texture from resource");
m_OutputDup->ReleaseFrame();
return false;
}
// Copy image into a CPU access texture
ID3D11Texture2D* stagingTexture = nullptr;
result = m_Device->CreateTexture2D(&m_TextureDesc, nullptr, &stagingTexture);
if (FAILED(result) || stagingTexture == nullptr)
{
Log("Failed to copy image data to access texture");
m_OutputDup->ReleaseFrame();
return false;
}
D3D11_MAPPED_SUBRESOURCE mappedResource;
m_DeviceContext->CopyResource(stagingTexture, copyTexture);
m_DeviceContext->Map(stagingTexture, 0, D3D11_MAP_READ, 0, &mappedResource);
void* copy = malloc(m_TextureDesc.Width * m_TextureDesc.Height * 4);
memcpy(copy, mappedResource.pData, m_TextureDesc.Width * m_TextureDesc.Height * 4);
m_DeviceContext->Unmap(stagingTexture, 0);
stagingTexture->Release();
m_OutputDup->ReleaseFrame();
// Create a new SDL texture from the data in the copy variable
free(copy);
return true;
}
Some notes:
I have modified my original code to make it more readable so some cleanup and logging in the error handling is missing.
None of the error or timeout cases (except DXGI_ERROR_ACCESS_LOST) trigger in any testing scenario.
The "attemptCounter" never goes above 2 in any testing scenario.
The test cases are limited since I don't have access to a computer which produces the black image case.
The culprit was CopyResource() and how I created the CPU access texture.
CopyResource() returns void, and that is why I didn't look into it before; I didn't think it could fail in any significant way, since I expected it to return bool or HRESULT if that were the case.
The documentation of CopyResource() does, however, disclose a couple of failure cases:
This method is unusual in that it causes the GPU to perform the copy operation (similar to a memcpy by the CPU). As a result, it has a few restrictions designed for improving performance. For instance, the source and destination resources:
Must be different resources.
Must be the same type.
Must have identical dimensions (including width, height, depth, and size as appropriate).
Must have compatible DXGI formats, which means the formats must be identical or at least from the same type group.
Can't be currently mapped.
Since the initialization code runs before the test application enters fullscreen, the CPU access texture description is set up using the desktop resolution, format, etc. This caused CopyResource() to fail silently and simply not write anything to stagingTexture in the test cases where a non-native resolution was used for the test application.
In conclusion, I just moved the m_TextureDesc setup to CaptureScreen() and used the description of copyTexture for the fields I didn't want to differ between the textures.
// Create CPU access texture
D3D11_TEXTURE2D_DESC copyTextureDesc;
copyTexture->GetDesc(&copyTextureDesc);
D3D11_TEXTURE2D_DESC textureDesc;
textureDesc.Width = copyTextureDesc.Width;
textureDesc.Height = copyTextureDesc.Height;
textureDesc.Format = copyTextureDesc.Format;
textureDesc.ArraySize = copyTextureDesc.ArraySize;
textureDesc.BindFlags = 0;
textureDesc.MiscFlags = 0;
textureDesc.SampleDesc = copyTextureDesc.SampleDesc;
textureDesc.MipLevels = copyTextureDesc.MipLevels;
textureDesc.CPUAccessFlags = D3D11_CPU_ACCESS_FLAG::D3D11_CPU_ACCESS_READ;
textureDesc.Usage = D3D11_USAGE::D3D11_USAGE_STAGING;
ID3D11Texture2D* stagingTexture = nullptr;
result = m_Device->CreateTexture2D(&textureDesc, nullptr, &stagingTexture);
While this solved the issues I was having, I still don't know why the reinitialization in the handling of DXGI_ERROR_ACCESS_LOST didn't resolve the issue anyway. Does the desktop duplication description not use the same dimensions and format as the copyTexture?
I also don't know why this didn't fail in the same way on computers with newer graphics cards. I did, however, notice that those machines were able to capture fullscreen applications using a simple BitBlt() of the desktop surface.

directX 11 scale texture2D

I would like to scale a texture I created from AcquireNextFrame.
Here is my code :
if (gRealTexture == nullptr) {
D3D11_TEXTURE2D_DESC description;
texture2D->GetDesc(&description);
description.BindFlags = 0;
description.CPUAccessFlags = D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE;
description.Usage = D3D11_USAGE_STAGING;
description.MiscFlags = 0;
hr = gDevice->CreateTexture2D(&description, NULL, &gRealTexture);
if (FAILED(hr)) {
if (gRealTexture) {
gRealTexture->Release();
gRealTexture = nullptr;
}
return NULL;
}
}
gImmediateContext->CopyResource(gRealTexture, texture2D);
D3D11_MAPPED_SUBRESOURCE mapped;
hr = gImmediateContext->Map(gRealTexture, 0, D3D11_MAP_READ_WRITE, 0, &mapped);
if (FAILED(hr)) {
gRealTexture->Release();
gRealTexture = NULL;
return NULL;
}
unsigned char *source = static_cast<unsigned char *>(mapped.pData); //Here I get the pixel buffer data
The problem is that it is in Full HD resolution (1920x1080). I would like to decrease the resolution (1280x720, for example) because I need to send source over the network, and I don't really need a Full HD image.
Is it possible to do this with DirectX easily, before I get the pixel buffer?
Thanks
Create a smaller-resolution texture, then render texture2D to the smaller texture using a fullscreen quad (shrinking it to the new size). Make sure bilinear filtering is on.
Then copy and map the smaller texture. CopyResource needs the same dimensions on both resources.
Some resources:
DX11 Helper Library
Render To Texture
DX11 Post Processing blur
The blur setup is the same as what I'm talking about, only instead of using a blur shader you use a simple texture shader.
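A sketch of that approach, reusing gDevice and gImmediateContext from the question; the shader resource view srcSRV for the captured frame and the fullscreen-quad shader pair are assumptions, not part of the original code:

```cpp
// Create a 1280x720 render target to downscale into.
D3D11_TEXTURE2D_DESC smallDesc = {};
smallDesc.Width = 1280;
smallDesc.Height = 720;
smallDesc.MipLevels = 1;
smallDesc.ArraySize = 1;
smallDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;  // match the captured format
smallDesc.SampleDesc.Count = 1;
smallDesc.Usage = D3D11_USAGE_DEFAULT;
smallDesc.BindFlags = D3D11_BIND_RENDER_TARGET;

ID3D11Texture2D* smallTex = nullptr;
gDevice->CreateTexture2D(&smallDesc, nullptr, &smallTex);

ID3D11RenderTargetView* smallRTV = nullptr;
gDevice->CreateRenderTargetView(smallTex, nullptr, &smallRTV);

// Draw the captured frame into the smaller target with a fullscreen quad;
// a linear sampler in the pixel shader performs the bilinear downscale.
D3D11_VIEWPORT vp = { 0.0f, 0.0f, 1280.0f, 720.0f, 0.0f, 1.0f };
gImmediateContext->RSSetViewports(1, &vp);
gImmediateContext->OMSetRenderTargets(1, &smallRTV, nullptr);
gImmediateContext->PSSetShaderResources(0, 1, &srcSRV);
// ... bind the quad's vertex/pixel shaders and sampler, then Draw() ...

// Afterwards, CopyResource smallTex into a 1280x720 staging texture and Map that.
```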

Unable to read depth buffer from Compute shader

I am unable to read depth buffer from compute shader.
I am using this in my hlsl code.
Texture2D<float4> gDepthTextures : register(t3);
// tried this.
//Texture2D<float> gDepthTextures : register(t3);
// and this.
//Texture2D<uint> gDepthTextures : register(t3);
// and this.
//Texture2D<uint4> gDepthTextures : register(t3);
And doing this to read the buffer.
outputTexture[dispatchThreadId.xy]=gDepthTextures.Load(int3(dispatchThreadId.xy,0));
And I am detaching depth buffer from render target.
ID3D11RenderTargetView *nullView[3]={NULL,NULL,NULL};
g_pImmediateContext->OMSetRenderTargets( 3, nullView, NULL );
Still I am getting this error in output.
*D3D11 ERROR: ID3D11DeviceContext::Dispatch: The Shader Resource View dimension declared in the shader code (TEXTURE2D) does not match the view type bound to slot 3 of the Compute Shader unit (BUFFER). This mismatch is invalid if the shader actually uses the view (e.g. it is not skipped due to shader code branching). [ EXECUTION ERROR #354: DEVICE_DRAW_VIEW_DIMENSION_MISMATCH]*
This is how I am creating shader resource view.
// Create depth stencil texture
D3D11_TEXTURE2D_DESC descDepth;
ZeroMemory( &descDepth, sizeof(descDepth) );
descDepth.Width = width;
descDepth.Height = height;
descDepth.MipLevels = 1;
descDepth.ArraySize = 1;
descDepth.Format = DXGI_FORMAT_R32_TYPELESS;
descDepth.SampleDesc.Count = 1;
descDepth.SampleDesc.Quality = 0;
descDepth.Usage = D3D11_USAGE_DEFAULT;
descDepth.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
descDepth.CPUAccessFlags = 0;
descDepth.MiscFlags = 0;
hr = g_pd3dDevice->CreateTexture2D( &descDepth, NULL, &g_pDepthStencil );
if( FAILED( hr ) )
return hr;
// Create the depth stencil view
D3D11_DEPTH_STENCIL_VIEW_DESC descDSV;
ZeroMemory( &descDSV, sizeof(descDSV) );
descDSV.Format = DXGI_FORMAT_D32_FLOAT;
descDSV.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
descDSV.Texture2D.MipSlice = 0;
hr = g_pd3dDevice->CreateDepthStencilView( g_pDepthStencil, &descDSV, &g_pDepthStencilView );
if( FAILED( hr ) )
return hr;
// Create depth shader resource view.
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
ZeroMemory(&srvDesc,sizeof(D3D11_SHADER_RESOURCE_VIEW_DESC));
srvDesc.Format=DXGI_FORMAT_R32_UINT;
srvDesc.ViewDimension=D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MostDetailedMip=0;
srvDesc.Texture2D.MipLevels=1;
hr=g_pd3dDevice->CreateShaderResourceView(g_pDepthStencil,&srvDesc,&g_pDepthSRV);
if(FAILED(hr))
return hr;
I have tried all the formats mentioned here in combination with the HLSL texture formats float, float4, uint, and uint4, with no success. Any ideas?
Replace DXGI_FORMAT_R32_UINT with DXGI_FORMAT_R32_FLOAT for your shader resource view; since you use R32_TYPELESS for the resource, you have a floating-point buffer.
Texture2D<float> gDepthTextures is then the declaration you need to load or sample the depth later.
Also, it looks like your texture is not bound properly to your compute shader (since the runtime tells you that a buffer is bound in that slot).
Make sure you have:
immediateContext->CSSetShaderResources(3, 1, &g_pDepthSRV);
called before your dispatch.
As a side note, to debug this type of issue you can also call CSGetShaderResources (and the other equivalents) in order to check what is bound to your pipeline before the call.
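Putting both fixes together, the corrected view creation and binding might look like this (variable names taken from the question's code):

```cpp
// R32_TYPELESS resource + D32_FLOAT DSV + R32_FLOAT SRV is the usual
// combination for reading a depth buffer from a shader.
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
ZeroMemory(&srvDesc, sizeof(srvDesc));
srvDesc.Format = DXGI_FORMAT_R32_FLOAT;  // was DXGI_FORMAT_R32_UINT
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MostDetailedMip = 0;
srvDesc.Texture2D.MipLevels = 1;
hr = g_pd3dDevice->CreateShaderResourceView(g_pDepthStencil, &srvDesc, &g_pDepthSRV);

// Bind to slot t3 before Dispatch (note the address-of operator):
g_pImmediateContext->CSSetShaderResources(3, 1, &g_pDepthSRV);
```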

C++ Direct2D ID2D1HwndRenderTarget's parent IUnknown class' _vfptr variable becomes null

I have a Direct2D app that I am making, and I am also writing a Direct2D library that makes using Direct2D easier for me. I'll post the exact problematic code if I need to, but my main issue is this: I have an ID2D1HwndRenderTarget in the definition of one class, I extend that class with another class, and the child class has a method that calls a method of the parent class which initializes the render target and then in turn calls the load method of the child class. However, as soon as the program reaches the load content method of the child class, the __vfptr variable (I don't have a clue what that is) in the IUnknown portion of the ID2D1HwndRenderTarget is null. The only reason I figured this out is that in some other code I was getting an access violation when using the render target to create an ID2D1Bitmap from an IWICBitmapSource. I don't understand how this happens, because the __vfptr variable becomes null as soon as the method with the initialization code returns. Can anyone explain why this may be happening? My relevant code is below.
This code is called once to create the hwnd render target and the offscreen render target. This is in a dll project.
GameBase.cpp
HRESULT GameBase::Initialize(HINSTANCE hInst, HWND winHandle, struct DX2DInitOptions options)
{
this->mainRenderTarget = NULL;
this->offscreenRendTarget = NULL;
this->factory = NULL;
HRESULT result;
D2D1_FACTORY_OPTIONS factOptions;
D2D1_FACTORY_TYPE factType;
if(options.enableDebugging)
factOptions.debugLevel = D2D1_DEBUG_LEVEL::D2D1_DEBUG_LEVEL_ERROR;
else
factOptions.debugLevel = D2D1_DEBUG_LEVEL::D2D1_DEBUG_LEVEL_NONE;
if(options.singleThreadedApp)
factType = D2D1_FACTORY_TYPE_SINGLE_THREADED;
else
factType = D2D1_FACTORY_TYPE_MULTI_THREADED;
result = D2D1CreateFactory(factType, factOptions, &this->factory);
if(FAILED(result))
{
OutputDebugString(L"Failed to create a Direct 2D Factory!");
return result;
}
this->instance = hInst;
this->hwnd = winHandle;
D2D1_SIZE_U size = D2D1::SizeU(options.winWidth, options.winHeight);
this->width = options.winWidth;
this->height = options.winHeight;
result = factory->CreateHwndRenderTarget(D2D1::RenderTargetProperties(), D2D1::HwndRenderTargetProperties(winHandle, size), &this->mainRenderTarget);
if(FAILED(result))
{
OutputDebugString(L"Failed to create a render target to draw to the window with!");
return result;
}
result = this->mainRenderTarget->CreateCompatibleRenderTarget(&this->offscreenRendTarget);
if(FAILED(result))
{
OutputDebugString(L"Failed to create an offscreen render target from the main render target.");
return result;
}
return LoadContent();
}
After the call to LoadContent, at no point in time do I change the value of the mainRenderTarget.
DX2DImage.cpp
HRESULT DX2DImageLoader::LoadFromResource(LPCWSTR resourceName, LPCWSTR resourceType, HMODULE progModule, DX2DImage* image)
{
if(!this->isInit)
{
OutputDebugStringA("You must call InitializeImageLoader before using this image loader!");
return E_FAIL;
}
IWICBitmapDecoder *decoder = NULL;
IWICBitmapFrameDecode *source = NULL;
IWICStream *stream = NULL;
IWICFormatConverter *converter = NULL;
HRSRC imageResHandle = NULL;
HGLOBAL imageResDataHandle = NULL;
void *imageFile = NULL;
DWORD imageFileSize = 0;
HRESULT result;
//Find the image.
imageResHandle = FindResource(progModule, resourceName, resourceType);
if(!imageResHandle)
{
OutputDebugStringA("Failed to get a handle to the resource!");
return E_FAIL;
}
//Load the data handle of the image.
imageResDataHandle = LoadResource(progModule, imageResHandle);
if(!imageResDataHandle)
{
OutputDebugStringA("Failed to load the image from the module!");
return E_FAIL;
}
//Lock and retrieve the image.
imageFile = LockResource(imageResDataHandle);
if(!imageFile)
{
OutputDebugStringA("Failed to lock the image in the module!");
return E_FAIL;
}
//Get the size of the image.
imageFileSize = SizeofResource(progModule, imageResHandle);
if(!imageFileSize)
{
OutputDebugStringA("Failed to retrieve the size of the image in the module!");
return E_FAIL;
}
//Create a stream that will read the image data.
result = this->factory->CreateStream(&stream);
if(FAILED(result))
{
OutputDebugStringA("Failed to create an IWICStream!");
return result;
}
//Open a stream to the image.
result = stream->InitializeFromMemory(reinterpret_cast<BYTE*>(imageFile), imageFileSize);
if(FAILED(result))
{
OutputDebugStringA("Failed to initialize the stream!");
return result;
}
//Create a decoder from the stream
result = this->factory->CreateDecoderFromStream(stream, NULL, WICDecodeMetadataCacheOnDemand, &decoder);
if(FAILED(result))
{
OutputDebugStringA("Failed to create a decoder from the stream!");
return result;
}
//Get the first frame from the image.
result = decoder->GetFrame(0, &source);
if(FAILED(result))
{
OutputDebugStringA("Failed to get the first frame from the decoder!");
return result;
}
//Create a format converter to convert image to 32bppPBGRA
result = this->factory->CreateFormatConverter(&converter);
if(FAILED(result))
{
OutputDebugStringA("Failed to create a format converter!");
return result;
}
//Convert the image to the new format.
result = converter->Initialize(source, GUID_WICPixelFormat32bppPBGRA, WICBitmapDitherTypeNone, NULL, 0.0f, WICBitmapPaletteTypeMedianCut);
if(FAILED(result))
{
OutputDebugStringA("Failed to convert the image to the correct format!");
return result;
}
//Create the Direct2D Bitmap from the Wic Bitmap.
result = this->renderTarget->CreateBitmapFromWicBitmap(converter, NULL, &image->bitmap);
if(FAILED(result))
{
OutputDebugStringA("Failed to create a Direct 2D Bitmap from a WIC Bitmap!");
return result;
}
image->width = static_cast<UINT>(image->bitmap->GetSize().width);
image->height = static_cast<UINT>(image->bitmap->GetSize().height);
SafeRelease(&source);
SafeRelease(&converter);
SafeRelease(&decoder);
SafeRelease(&stream);
return S_OK;
}
The Access Violation exception occurs on the line
result = this->renderTarget->CreateBitmapFromWicBitmap(converter, NULL, &image->bitmap);
where image->bitmap is currently a NULL ID2D1Bitmap (like it's supposed to be).
Here, the renderTarget variable is the same mainRenderTarget variable from GameBase.cpp above. When I debug the line, all the parents of the render target are non-null; however, once I get down to the IUnknown interface underneath it all, the __vfptr is null. This is not the case with the converter variable, the this pointer, or the image variable.
I don't have enough code to debug this, but from what I see I suspect the call to converter->Initialize(...) is invalid, since MSDN says:
If you do not have a predefined palette, you must first create one. Use
InitializeFromBitmap to create the palette object, then pass it in along
with your other parameters.
dither, pIPalette, alphaThresholdPercent, and paletteTranslate are used to
mitigate color loss when converting to a reduced bit-depth format. For
conversions that do not need these settings, the following parameters values
should be used: dither set to WICBitmapDitherTypeNone, pIPalette set to NULL,
alphaThresholdPercent set to 0.0f, and paletteTranslate set to
WICBitmapPaletteTypeCustom.
And in your code you do not provide a valid palette (you used NULL), and your paletteTranslate is not WICBitmapPaletteTypeCustom.
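Following that guidance, the conversion call for a 32bppPBGRA target - which needs no palette - might look like this (same variable names as the question's code):

```cpp
// Per MSDN, for conversions that do not need dithering or a palette:
result = converter->Initialize(
    source,                         // IWICBitmapFrameDecode from the decoder
    GUID_WICPixelFormat32bppPBGRA,  // Direct2D-friendly premultiplied BGRA
    WICBitmapDitherTypeNone,        // no dithering
    NULL,                           // no palette
    0.0f,                           // alpha threshold
    WICBitmapPaletteTypeCustom);    // required value when no palette is used
```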

DirectDraw question - running the application as a regular Windows application

I am developing an application for video recording and I want to overlay the video preview with a logo and recording timer.
I tried to run the full-screen application and everything worked fine. Then I tried to run the application as a regular Windows application and it returned an error.
Could anyone take a look at the code below and see if there's a way to modify it so the application runs as a regular Windows app?
HRESULT CViewfinderRenderer::OnStartStreaming()
{
HRESULT hr = S_OK;
DDSURFACEDESC ddsd;
m_pDD = NULL;
//full screen settings
hr = DirectDrawCreate(NULL, &m_pDD, NULL);
hr = m_pDD->SetCooperativeLevel(m_hWnd, DDSCL_FULLSCREEN);
ddsd.dwSize = sizeof(ddsd);
ddsd.dwFlags = DDSD_CAPS | DDSD_BACKBUFFERCOUNT;
ddsd.ddsCaps.dwCaps = DDSCAPS_FLIP | DDSCAPS_PRIMARYSURFACE;
ddsd.dwBackBufferCount = 1;
//end full screen settings
//normal settings
/*hr = DirectDrawCreate(NULL, &m_pDD, NULL);
hr = m_pDD->SetCooperativeLevel(m_hWnd, DDSCL_NORMAL);
ddsd.dwSize = sizeof(ddsd);
ddsd.dwFlags = DDSD_BACKBUFFERCOUNT;
ddsd.dwBackBufferCount = 1;*/
//end normal settings
hr = m_pDD->CreateSurface(&ddsd, &m_pSurface, NULL);
if (hr != DD_OK) {
return hr;
}
// Get backsurface
hr = m_pSurface->EnumAttachedSurfaces(&m_pBackSurface, EnumFunction);
return S_OK;
}
Even when running windowed, you need to create a primary surface; it is just not a flippable surface.
//full screen settings
hr = DirectDrawCreate(NULL, &m_pDD, NULL);
hr = m_pDD->SetCooperativeLevel(m_hWnd, DDSCL_NORMAL);
ddsd.dwSize = sizeof(ddsd);
ddsd.dwFlags = DDSD_CAPS;
ddsd.ddsCaps.dwCaps = DDSCAPS_PRIMARYSURFACE;
Besides creating a surface, you will most likely want to create a clipper for the window. For a complete sample, see the paragraph "Running windowed" in this GameDev article.
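A minimal sketch of that clipper setup, reusing m_pDD, m_hWnd, and m_pSurface from the question; m_pClipper is a hypothetical extra member:

```cpp
// Attach a clipper so blits to the primary surface stay inside the window.
LPDIRECTDRAWCLIPPER m_pClipper = NULL;
hr = m_pDD->CreateClipper(0, &m_pClipper, NULL);
if (hr == DD_OK)
{
    hr = m_pClipper->SetHWnd(0, m_hWnd);          // clip to our window
    if (hr == DD_OK)
        hr = m_pSurface->SetClipper(m_pClipper);  // attach to the primary surface
}
```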
What error did it return?
Also try this instead:
ddsd.dwFlags = DDSD_CAPS;
ddsd.ddsCaps.dwCaps = DDSCAPS_PRIMARYSURFACE;