Convert from Direct3D11CaptureFrame to OpenCV's cv::Mat - C++

I'm trying to convert from a Direct3D11CaptureFrame, obtained as in the example from Microsoft's Screen Capture documentation (and verified to be correct by saving it to disk), to a cv::Mat. I am converting the frame's surface into an ID3D11Texture2D in the following way, and it seems to work:
void processFrame(const winrt::Windows::Graphics::Capture::Direct3D11CaptureFrame &frame) {
    winrt::com_ptr<Windows::Graphics::DirectX::Direct3D11::IDirect3DDxgiInterfaceAccess> access =
        frame.Surface().as<Windows::Graphics::DirectX::Direct3D11::IDirect3DDxgiInterfaceAccess>();
    winrt::com_ptr<ID3D11Texture2D> frameSurface;
    winrt::check_hresult(access->GetInterface(winrt::guid_of<ID3D11Texture2D>(), frameSurface.put_void()));
    (...)
}
From this point, looking at OpenCV's D3D interop samples, I see I have two options. The first consists of using cv::directx::convertFromD3D11Texture2D, like so:
void processFrame(const winrt::Windows::Graphics::Capture::Direct3D11CaptureFrame &frame) {
    winrt::com_ptr<Windows::Graphics::DirectX::Direct3D11::IDirect3DDxgiInterfaceAccess> access =
        frame.Surface().as<Windows::Graphics::DirectX::Direct3D11::IDirect3DDxgiInterfaceAccess>();
    winrt::com_ptr<ID3D11Texture2D> frameSurface;
    winrt::check_hresult(access->GetInterface(winrt::guid_of<ID3D11Texture2D>(), frameSurface.put_void()));
    cv::Mat img;
    // throws an exception on the following line
    cv::directx::convertFromD3D11Texture2D(frameSurface.get(), img);
}
But it throws an exception when the destructor of the input array (the temporary wrapping frameSurface.get()) runs. This method is used for GPU work in the code samples, but I haven't seen it stated as GPU-only in the documentation.
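(For reference, the OpenCV interop samples initialize OpenCV's OpenCL interop context from the D3D11 device before any conversion call, which may be what is missing here. A minimal sketch of that setup, assuming the device passed in is the same one that produced the capture frames:)
#include <opencv2/core/directx.hpp>

// Call once at startup, before any cv::directx::convertFromD3D11Texture2D
// call; it binds an OpenCL context to the given D3D11 device. Throws a
// cv::Exception if no OpenCL platform supports D3D11 sharing.
void initOpenCVInterop(ID3D11Device *device) {
    cv::directx::ocl::initializeContextFromD3D11Device(device);
}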
The second option goes through a couple more hoops:
void processFrame(const winrt::Windows::Graphics::Capture::Direct3D11CaptureFrame &frame, int height, int width) {
    auto access =
        frame.Surface().as<Windows::Graphics::DirectX::Direct3D11::IDirect3DDxgiInterfaceAccess>();
    winrt::com_ptr<ID3D11Texture2D> frameSurface;
    winrt::check_hresult(
        access->GetInterface(winrt::guid_of<ID3D11Texture2D>(), frameSurface.put_void()));
    ID3D11Device *deviceD3D;
    frameSurface->GetDevice(&deviceD3D);
    ID3D11DeviceContext *m_pD3D11Ctx;
    deviceD3D->GetImmediateContext(&m_pD3D11Ctx);
    auto subResource = ::D3D11CalcSubresource(0, 0, 1);
    D3D11_MAPPED_SUBRESOURCE mappedTex;
    // FAILS here
    auto r = m_pD3D11Ctx->Map(frameSurface.get(), subResource, D3D11_MAP_WRITE_DISCARD, 0, &mappedTex);
    if (FAILED(r)) {
        throw std::runtime_error("surface mapping failed!");
    }
    cv::Mat m(height, width, CV_8UC4, mappedTex.pData, mappedTex.RowPitch);
}
... but it fails when mapping the subresource.
I have been tweaking everything I could think of, given my inexperience with DirectX, trying to see what I am missing, but I can't get either option to work. Does anyone see my mistakes, or have a better solution to my problem?

1. You need a new D3D11_TEXTURE2D_DESC with CPUAccessFlags = D3D11_CPU_ACCESS_READ (and D3D11_USAGE_STAGING).
2. Use that D3D11_TEXTURE2D_DESC to create a new texture with CreateTexture2D.
3. Copy frameSurface into your new staging texture.
void SimpleCapture::OnFrameArrived(
    Direct3D11CaptureFramePool const& sender,
    winrt::Windows::Foundation::IInspectable const&) {
    auto newSize = false;
    auto frame = sender.TryGetNextFrame();
    auto frameContentSize = frame.ContentSize();

    // Describe a CPU-readable staging texture matching the frame.
    D3D11_TEXTURE2D_DESC desc;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc = { 1, 0 };
    desc.Usage = D3D11_USAGE_STAGING;
    desc.BindFlags = 0;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    desc.MiscFlags = 0;

    if (frameContentSize.Width != m_lastSize.Width ||
        frameContentSize.Height != m_lastSize.Height) {
        newSize = true;
        m_lastSize = frameContentSize;
    }
    desc.Width = m_lastSize.Width;
    desc.Height = m_lastSize.Height;

    ID3D11Texture2D* stagingTexture;
    d3dDevice.get()->CreateTexture2D(&desc, nullptr, &stagingTexture);

    // Copy the GPU frame into the staging texture, then map it for reading.
    auto frameSurface = GetDXGIInterfaceFromObject<ID3D11Texture2D>(frame.Surface());
    m_d3dContext->CopyResource(stagingTexture, frameSurface.get());
    D3D11_MAPPED_SUBRESOURCE mappedTex;
    m_d3dContext->Map(stagingTexture, 0, D3D11_MAP_READ, 0, &mappedTex);

    // The Mat aliases the mapped memory, so use it (or clone it) before Unmap.
    cv::Mat c = cv::Mat(desc.Height, desc.Width, CV_8UC4, mappedTex.pData, mappedTex.RowPitch);
    cv::imshow("show", c);

    m_d3dContext->Unmap(stagingTexture, 0);
    stagingTexture->Release();

    if (newSize)
    {
        m_framePool.Recreate(
            m_device,
            DirectXPixelFormat::B8G8R8A8UIntNormalized,
            2,
            m_lastSize);
    }
}
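Note that the cv::Mat above aliases the mapped staging memory, so it is only valid between Map and Unmap. A hedged sketch (assuming the same names as above) that deep-copies the pixels so the Mat outlives the unmap:
cv::Mat CopyStagingToMat(ID3D11DeviceContext* ctx, ID3D11Texture2D* staging,
                         UINT width, UINT height)
{
    D3D11_MAPPED_SUBRESOURCE mapped;
    winrt::check_hresult(ctx->Map(staging, 0, D3D11_MAP_READ, 0, &mapped));
    // Wrap the mapped memory without copying, honoring the driver's row pitch...
    cv::Mat view(height, width, CV_8UC4, mapped.pData, mapped.RowPitch);
    cv::Mat copy = view.clone(); // ...then clone before unmapping.
    ctx->Unmap(staging, 0);
    return copy;
}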

Related

Rendering frames from a webcam to a DirectX 11 texture

I am having difficulty updating a DirectX 11 texture with image data from a webcam frame buffer in memory. I've managed to create a texture from a single frame in the buffer, but as the buffer is overwritten with the next frame, the texture doesn't update. So I'm left with a snapshot image rather than the live stream I'm after.
I am trying to use the Map/Unmap methods to update an ID3D11Texture2D resource, because that is supposedly more efficient than the UpdateSubresource method. I haven't managed to get either to work. I'm new to DirectX and I just can't find a good explanation anywhere of how to accomplish this.
Create texture here:
bool CreateCamTexture(ID3D11ShaderResourceView** out_srv, RGBQUAD* ptrimg, int* image_width, int* image_height)
{
    // desc, subResource, srvDesc and pTexture are declared at file scope in the original code
    ZeroMemory(&desc, sizeof(desc));
    desc.Width = *image_width;
    desc.Height = *image_height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DYNAMIC;
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
    std::cout << ptrimg << std::endl;
    subResource.pSysMem = ptrimg;
    subResource.SysMemPitch = desc.Width * 4;
    subResource.SysMemSlicePitch = 0;
    g_pd3dDevice->CreateTexture2D(&desc, &subResource, &pTexture);
    ZeroMemory(&srvDesc, sizeof(srvDesc));
    srvDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
    srvDesc.Texture2D.MipLevels = desc.MipLevels;
    srvDesc.Texture2D.MostDetailedMip = 0;
    g_pd3dDevice->CreateShaderResourceView(pTexture, &srvDesc, out_srv);
    if (pTexture != NULL) {
        pTexture->Release();
    }
    else
    {
        std::cout << "pTexture is NULL ShaderResourceView not created" << std::endl;
    }
    return true;
}
bool CreateDeviceD3D(HWND hWnd)
{
    // Setup swap chain
    DXGI_SWAP_CHAIN_DESC sd;
    ZeroMemory(&sd, sizeof(sd));
    sd.BufferCount = 2;
    sd.BufferDesc.Width = 0;
    sd.BufferDesc.Height = 0;
    sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    sd.BufferDesc.RefreshRate.Numerator = 60;
    sd.BufferDesc.RefreshRate.Denominator = 1;
    sd.Flags = DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH;
    sd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    sd.OutputWindow = hWnd;
    sd.SampleDesc.Count = 1;
    sd.SampleDesc.Quality = 0;
    sd.Windowed = TRUE;
    sd.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;
    UINT createDeviceFlags = 0;
    createDeviceFlags |= D3D11_CREATE_DEVICE_DEBUG;
    D3D_FEATURE_LEVEL featureLevel;
    const D3D_FEATURE_LEVEL featureLevelArray[2] = { D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_0, };
Attempting to Map/Unmap texture:
void UpdateCamTexture() {
    D3D11_MAPPED_SUBRESOURCE mappedResource;
    ZeroMemory(&mappedResource, sizeof(D3D11_MAPPED_SUBRESOURCE));
    g_pd3dDeviceContext->Map(
        pTexture,
        0,
        D3D11_MAP_WRITE_DISCARD,
        0,
        &mappedResource);
    memcpy(mappedResource.pData, listener_instance.pImgData, sizeof(listener_instance.pImgData));
    // Re-enable GPU access to the texture data.
    g_pd3dDeviceContext->Unmap(pTexture, 0);
    std::cout << "texture updated" << std::endl;
}
I don't get an error, the image is just black. I don't have the debug layer enabled, though.
Calling sizeof on a pointer (listener_instance.pImgData) is not what you want, since it returns the size of the pointer type (8 on x64 architectures) and not the size of the array the pointer points to. Calling memcpy with the image data size in bytes is also not a completely correct solution. See here for more details.
I will copy the answer from there just in case it's deleted.
Maximus Minimus's answer:
Check the returned pitch from your Map call - you're assuming it's width * 4 (for 32-bit RGBA), but it may not be (particularly if your texture is not a power of 2 or its width is not a multiple of 4).
You can only memcpy the entire block in one operation if the pitch is equal to width * the number of bytes in the format. Otherwise you must memcpy one row at a time.
Sample code, excuse C-isms:
// assumes that src and dst are 32-bit RGBA data
unsigned *src;  // this comes from whatever your input is
unsigned *dst = (unsigned *) msr.pData;  // msr is a D3D11_MAPPED_SUBRESOURCE derived from ID3D11DeviceContext::Map
// width and height come from ID3D11Texture2D::GetDesc
for (int i = 0; i < height; i++)
{
    memcpy (dst, src, width * 4); // copy one row at a time because msr.RowPitch may be != (width * 4)
    dst += msr.RowPitch >> 2;     // msr.RowPitch is in bytes, so for 32-bit data we divide by 4 (or downshift by 2, same thing)
    src += width;                 // assumes pitch of source data is equal to width * 4
}
You can, of course, also include a test for if (msr.RowPitch == width * 4) and do a single memcpy of the entire thing if it's true.
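Applied to the UpdateCamTexture above, a hedged sketch (assuming the same globals, a width x height 32-bit RGBA frame, and a tightly packed source buffer):
void UpdateCamTexture(int width, int height)
{
    D3D11_MAPPED_SUBRESOURCE ms = {};
    if (FAILED(g_pd3dDeviceContext->Map(pTexture, 0, D3D11_MAP_WRITE_DISCARD, 0, &ms)))
        return;
    const uint8_t* src = reinterpret_cast<const uint8_t*>(listener_instance.pImgData);
    uint8_t* dst = reinterpret_cast<uint8_t*>(ms.pData);
    for (int y = 0; y < height; ++y)
    {
        memcpy(dst, src, width * 4); // one row at a time: ms.RowPitch may exceed width * 4
        dst += ms.RowPitch;
        src += width * 4;            // source is assumed tightly packed
    }
    g_pd3dDeviceContext->Unmap(pTexture, 0);
}
This also assumes pTexture is kept alive; the creation code above releases it right after making the SRV, so you would need to keep a reference to map it later.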

Transparent glitches while drawing GIF

I wrote a solution to draw GIFs, based on the Direct2D GIF sample:
// somewhere in render loop
IWICBitmapFrameDecode* frame = nullptr;
IWICFormatConverter* converter = nullptr;
gifDecoder->GetFrame(current_frame, &frame);
if (frame)
{
    d2dWICFactory->CreateFormatConverter(&converter);
    if (converter)
    {
        ID2D1Bitmap* temp = nullptr;
        converter->Initialize(frame, GUID_WICPixelFormat32bppPBGRA,
            WICBitmapDitherTypeNone, NULL, 0.f, WICBitmapPaletteType::WICBitmapPaletteTypeCustom);
        tar->CreateBitmapFromWicBitmap(converter, NULL, &temp);
        scale->SetValue(D2D1_SCALE_PROP_SCALE, D2D1::Vector2F(1, 1));
        scale->SetInput(0, temp);
        target->DrawImage(scale);
        SafeRelease(&temp);
    }
}
SafeRelease(&frame);
SafeRelease(&converter);
Where gifDecoder is obtained this way:
d2dWICFactory->CreateDecoderFromFilename(pathToGif, NULL, GENERIC_READ, WICDecodeMetadataCacheOnLoad, &gifDecoder);
But in my program, with almost all GIFs, every frame except the first has horizontal transparent lines for some reason. Obviously, when I view the same GIF in a browser or in another program, there are no such holes.
I tried to change the pixel format and the palette type, but the glitches are still there.
What do I do wrong?
Basic steps:
On image load, in the case of a GIF, for caching purposes store not all frames as ready-to-draw ID2D1Bitmaps, but only the image's decoder. Decode the frame, get its bitmap, and draw it each time. At minimum, frame bitmaps and metadata (again, per frame) can be cached during the first animation loop.
For each GIF, create a compatible render target with the size of the GIF screen (obtained from metadata) for composing.
Compose the animation on render: get the metadata for the current frame -> if the current frame has disposal = true, clear the compose target -> draw the current frame to the compose target (even if disposal is true) -> draw the compose target to the render target.
Note that frames may have different sizes and even offsets.
Approximate code:
ID2D1DeviceContext *renderTarget;
class GIF
{
    ID2D1BitmapRenderTarget *compose;
    IWICBitmapDecoder *decoder; // it seems like precaching all frames costs too much time; it is faster to obtain each frame on demand (maybe cache during the first loop)
    float naturalWidth;  // gif screen width
    float naturalHeight; // gif screen height
    int framesCount;
    int currentFrame;
    unsigned int lastFrameEpoch;
    int x;
    int y;
    int width;  // image width needed to be drawn
    int height; // image height
    float fw;   // scale factor, width
    float fh;   // scale factor, height

    GIF(const wchar_t *path, int x, int y, int width, int height)
    {
        // Init: using WIC, obtain here the image's decoder, naturalWidth, naturalHeight,
        // framesCount and other optional stuff like loop information,
        // using an IWICMetadataQueryReader obtained from the whole image,
        // not from the zero frame
        compose = nullptr;
        currentFrame = 0;
        lastFrameEpoch = epoch(); // implement your own millisecond function
        Resize(width, height);
    }
    void Resize(int width, int height)
    {
        this->width = width;
        this->height = height;
        // calculate scale factors
        fw = width / naturalWidth;
        fh = height / naturalHeight;
        if (compose == nullptr)
        {
            renderTarget->CreateCompatibleRenderTarget(D2D1::SizeF(naturalWidth, naturalHeight), &compose);
            compose->Clear(D2D1::ColorF(0, 0, 0, 0));
        }
    }
    void Render()
    {
        IWICBitmapFrameDecode* frame = nullptr;
        decoder->GetFrame(currentFrame, &frame);
        IWICFormatConverter* converter = nullptr;
        d2dWICFactory->CreateFormatConverter(&converter);
        converter->Initialize(frame, GUID_WICPixelFormat32bppPBGRA,
            WICBitmapDitherTypeNone, NULL, 0.f, WICBitmapPaletteType::WICBitmapPaletteTypeCustom);
        IWICMetadataQueryReader* reader = nullptr;
        frame->GetMetadataQueryReader(&reader);
        PROPVARIANT propValue;
        PropVariantInit(&propValue);
        char disposal = 0;
        float l = 0, t = 0; // frame offsets (if any)
        HRESULT hr = S_OK;
        reader->GetMetadataByName(L"/grctlext/Delay", &propValue);
        if (SUCCEEDED((propValue.vt == VT_UI2 ? S_OK : E_FAIL)))
        {
            UINT frameDelay = 0;
            UIntMult(propValue.uiVal, 10, &frameDelay); // GIF delays are in 10 ms units
            if (frameDelay < 16)
            {
                frameDelay = 16;
            }
            if (epoch() - lastFrameEpoch < frameDelay)
            {
                SafeRelease(&reader);
                SafeRelease(&frame);
                SafeRelease(&converter);
                return;
            }
        }
        if (SUCCEEDED(reader->GetMetadataByName(
            L"/grctlext/Disposal",
            &propValue)))
        {
            hr = (propValue.vt == VT_UI1) ? S_OK : E_FAIL;
            if (SUCCEEDED(hr))
            {
                disposal = propValue.bVal;
            }
        }
        PropVariantClear(&propValue);
        {
            hr = reader->GetMetadataByName(L"/imgdesc/Left", &propValue);
            if (SUCCEEDED(hr))
            {
                hr = (propValue.vt == VT_UI2 ? S_OK : E_FAIL);
                if (SUCCEEDED(hr))
                {
                    l = static_cast<FLOAT>(propValue.uiVal);
                }
                PropVariantClear(&propValue);
            }
        }
        {
            hr = reader->GetMetadataByName(L"/imgdesc/Top", &propValue);
            if (SUCCEEDED(hr))
            {
                hr = (propValue.vt == VT_UI2 ? S_OK : E_FAIL);
                if (SUCCEEDED(hr))
                {
                    t = static_cast<FLOAT>(propValue.uiVal);
                }
                PropVariantClear(&propValue);
            }
        }
        ID2D1Bitmap* temp = nullptr;
        renderTarget->CreateBitmapFromWicBitmap(converter, NULL, &temp);
        compose->BeginDraw();
        if (disposal == 2)
            compose->Clear(D2D1::ColorF(0, 0, 0, 0));
        auto ss = temp->GetSize();
        compose->DrawBitmap(temp, D2D1::RectF(l, t, l + ss.width, t + ss.height));
        compose->EndDraw();
        ID2D1Bitmap* composition = nullptr;
        compose->GetBitmap(&composition);
        // You need to create a scale effect to have antialiased resizing
        scale->SetValue(D2D1_SCALE_PROP_SCALE, D2D1::Vector2F(fw, fh));
        scale->SetInput(0, composition);
        auto p = D2D1::Point2F(x, y);
        renderTarget->DrawImage(scale, p);
        SafeRelease(&temp);
        SafeRelease(&composition);
        SafeRelease(&reader);
        SafeRelease(&frame);
        SafeRelease(&converter);
    }
} *gif[100] = { nullptr };
inline void Render()
{
    renderTarget->BeginDraw();
    for (int i = 0; i < 100 && gif[i] != nullptr; i++)
        gif[i]->Render();
    renderTarget->EndDraw();
}

Trying to copy pixel data from CPU to GPU using Map - everything runs fine but the screen comes up empty (DirectX)

In my TextureHendler class, I create an empty texture using this:
D3D11_TEXTURE2D_DESC textureDesc = { 0 };
textureDesc.Width = textureWidth;
textureDesc.Height = textureHeight;
textureDesc.Format = dxgiFormat;
textureDesc.Usage = D3D11_USAGE_DYNAMIC;
textureDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
textureDesc.MiscFlags = 0;
textureDesc.MipLevels = 1;
textureDesc.ArraySize = 1;
textureDesc.SampleDesc.Count = 1;
textureDesc.SampleDesc.Quality = 0;
textureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
hr = m_deviceResource->GetD3DDevice()->CreateTexture2D(
    &textureDesc,
    nullptr,
    &texture);
and at runtime, I want to load data from CPU memory using the code below:
Platform::Array<unsigned char>^ datapointer = GetPixelBytes();
static constexpr int BYTES_IN_RGBA_PIXEL = 4;
auto rowspan = BYTES_IN_RGBA_PIXEL * width;
D3D11_MAPPED_SUBRESOURCE ms;
auto device_context = m_deviceResource->GetD3DDeviceContext();
device_context->Map(m_texture->GetTexture2d(), 0, D3D11_MAP_WRITE_DISCARD, 0, &ms);
uint8_t* mappedData = reinterpret_cast<uint8_t*>(ms.pData);
for (int i = 0; i < datapointer->Length; i++)
{
    mappedData[i] = datapointer[i];
}
device_context->Unmap(m_texture->GetTexture2d(), 0);
Everything runs fine, but the output screen comes up black.
Update:
std::shared_ptr<TextureHendler> m_texture;
TextureHendler holds
public:
ID3D11Texture2D* GetTexture2d() { return texture.Get(); }
private:
Microsoft::WRL::ComPtr<ID3D11Texture2D> texture;
and a load function whose content is shown above.
Here is the sample code: https://github.com/AbhishekSharma-SEG/Demo_DXPlayer. Thanks for the help.
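(The row-pitch caveat from the previous answer likely applies here too: rowspan is computed but never used, and the copy loop assumes the mapped rows are tightly packed. A sketch using the variables from the snippet above, where height is assumed to be the frame height:)
unsigned char* src = datapointer->Data;
uint8_t* dst = reinterpret_cast<uint8_t*>(ms.pData);
for (int y = 0; y < height; ++y)
{
    memcpy(dst, src, rowspan); // rowspan == width * 4
    dst += ms.RowPitch;        // mapped rows may be padded beyond rowspan
    src += rowspan;
}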

Fill texture3D slice-wise

I am trying to fill a texture3D slice-wise with 2D images.
My result only gives me back the first of the six images, as you can see in this picture:
To be sure that it is not a render problem, like wrong UVW coordinates, I also include the picture of the UVW coordinates:
Here is the code for the creation of the texture3D:
if (ETextureType::Texture3D == TextureType)
{
    ID3D11Texture3D* pTexture3D = nullptr;
    D3D11_TEXTURE3D_DESC TextureDesc;
    ZeroMemory(&TextureDesc, sizeof(TextureDesc));
    TextureDesc.Width = nWidth;
    TextureDesc.Height = nHeight;
    TextureDesc.MipLevels = nMipMaps;
    TextureDesc.Depth = nDepth;
    TextureDesc.Usage = D3D11_USAGE_DEFAULT;
    TextureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    switch (TextureFormat)
    {
    case ETextureFormat::R8G8B8A8:
        {
            TextureDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        }
        break;
    case ETextureFormat::R32FG32FB32FA32F:
        {
            TextureDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
        }
        break;
    default:
        DebugAssertOnce(UNKNOWN_TEXTURE_FORMAT);
    }
    HRESULT hr = m_pD3D11Device->CreateTexture3D(&TextureDesc, nullptr, &pTexture3D);
    if (FAILED(hr))
    {
        DebugAssertOnce(UNABLE_TO_CREATE_TEXTURE);
        return false;
    }
    if (bCreateShaderResourceView)
    {
        D3D11_SHADER_RESOURCE_VIEW_DESC SRVDesc;
        ZeroMemory(&SRVDesc, sizeof(SRVDesc));
        SRVDesc.Format = TextureDesc.Format;
        SRVDesc.Texture3D.MipLevels = TextureDesc.MipLevels;
        SRVDesc.Texture3D.MostDetailedMip = 0;
        SRVDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE3D;
        hr = m_pD3D11Device->CreateShaderResourceView(pTexture3D, &SRVDesc, &pShaderResourceView);
        if (FAILED(hr))
        {
            pTexture3D->Release();
            DebugAssertOnce(UNABLE_TO_CREATE_SHADER_RESOURCE_VIEW);
            return false;
        }
    }
    else if (bCreateRenderTargetView)
    {
        ID3D11RenderTargetView* pRenderTargetView = nullptr;
        hr = m_pD3D11Device->CreateRenderTargetView(pTexture3D, nullptr, &pRenderTargetView);
        if (FAILED(hr))
        {
            pShaderResourceView->Release();
            pTexture3D->Release();
            DebugAssertOnce(UNABLE_TO_CREATE_RENDERTARGET_VIEW);
            return false;
        }
        pView = pRenderTargetView;
    }
    *ppTexture = new CTextureDX11(TextureType, pTexture3D, pShaderResourceView, pView);
    return true;
}
and also the filling part:
bool CGraphicsDriverDX11::CreateTexture3DFromImageBuffers(CTexture** ppTexture, const std::vector<CImageBuffer*>* pvecImageBuffers)
{
    uint32_t nWidth = pvecImageBuffers->front()->GetWidth();
    uint32_t nHeight = pvecImageBuffers->front()->GetHeight();
    uint32_t nMipMapLevels = 1;
    bool bRet = CreateTexture(ppTexture, nWidth, nHeight, ETextureType::Texture3D, ETextureFormat::R8G8B8A8, nMipMapLevels, false, true, false, static_cast<UINT>(pvecImageBuffers->size()));
    if (bRet)
    {
        ID3D11Texture3D* pD3DTexture = static_cast<ID3D11Texture3D*>((*ppTexture)->GetTexture());
        for (size_t nImageBuffer = 0; nImageBuffer < pvecImageBuffers->size(); ++nImageBuffer)
        {
            uint32_t nIndex = D3D11CalcSubresource(static_cast<UINT>(nImageBuffer), 0, 1);
            m_pD3D11DeviceContext->UpdateSubresource(pD3DTexture, nIndex, nullptr, pvecImageBuffers->at(nImageBuffer)->GetData(), nWidth * 4, 0);
        }
    }
    return bRet;
}
I tried a lot; for example, I changed this code to use a texture2DArray and it worked fine, so the mistake is not in my CImageBuffer class. Also, the nDepth variable has the correct value. I think I have to use another command instead of UpdateSubresource, or at least change its parameters somehow.
I also couldn't find any examples on the internet.
Thank you in advance.
All the depth textures in a slice are side-by-side in a single subresource. You also need to compute how many depth images are present in a slice for a given miplevel.
This gives you the subresource index which contains the entire slice:
D3D11CalcSubresource(level, 0, mipLevels);
This gives you the number of images in a slice for a given miplevel:
std::max(depth >> level, 1);
Each image in the slice has a pitch of D3D11_MAPPED_SUBRESOURCE.RowPitch, and the images are laid out one after another in the subresource, with the total size in bytes of each image given by D3D11_MAPPED_SUBRESOURCE.DepthPitch.
For example, here is some code from DirectXTex trimmed down a bit to make it easier to read. It is reading the data out of a captured 3D volume texture, but the logic is the same when filling out textures.
if (metadata.IsVolumemap())
{
    assert(metadata.arraySize == 1);
    size_t height = metadata.height;
    size_t depth = metadata.depth;
    for (size_t level = 0; level < metadata.mipLevels; ++level)
    {
        UINT dindex = D3D11CalcSubresource(level, 0, metadata.mipLevels);
        D3D11_MAPPED_SUBRESOURCE mapped;
        HRESULT hr = pContext->Map(pSource, dindex, D3D11_MAP_READ, 0, &mapped);
        if (FAILED(hr))
            // error
        auto pslice = reinterpret_cast<const uint8_t*>(mapped.pData);
        size_t lines = ComputeScanlines(metadata.format, height);
        // For uncompressed images, lines == height
        for (size_t slice = 0; slice < depth; ++slice)
        {
            const Image* img = result.GetImage(level, 0, slice);
            const uint8_t* sptr = pslice;
            uint8_t* dptr = img->pixels;
            for (size_t h = 0; h < lines; ++h)
            {
                size_t msize = std::min<size_t>(img->rowPitch, mapped.RowPitch);
                memcpy_s(dptr, img->rowPitch, sptr, msize);
                sptr += mapped.RowPitch;
                dptr += img->rowPitch;
            }
            pslice += mapped.DepthPitch;
        }
        pContext->Unmap(pSource, dindex);
        if (height > 1)
            height >>= 1;
        if (depth > 1)
            depth >>= 1;
    }
}
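Going the other direction (the filling part in the question), a hedged sketch: since every depth slice of mip 0 lives in subresource 0, each 2D image can be written into its own slice with a D3D11_BOX, using the question's variables:
for (size_t i = 0; i < pvecImageBuffers->size(); ++i)
{
    D3D11_BOX box = {};
    box.left = 0;  box.right  = nWidth;
    box.top = 0;   box.bottom = nHeight;
    box.front = static_cast<UINT>(i);     // select one depth slice...
    box.back  = static_cast<UINT>(i) + 1; // ...[i, i + 1)
    m_pD3D11DeviceContext->UpdateSubresource(
        pD3DTexture,
        0,                      // mip 0: the only subresource when nMipMapLevels == 1
        &box,
        pvecImageBuffers->at(i)->GetData(),
        nWidth * 4,             // row pitch of the source image
        nWidth * 4 * nHeight);  // depth pitch: one full image per slice
}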

ID2D1Bitmap1::Map issue

I have an ID2D1BitmapBrush. What I'm trying to achieve is to get its image data. This is what I've tried:
// imageBrush - ID2D1BitmapBrush
// topLeft is a d2d1_point2u (0,0)
// uBounds is image bounding rectangle
Microsoft::WRL::ComPtr<ID2D1Bitmap> tmpBitmap;
Microsoft::WRL::ComPtr<ID2D1Bitmap1> tmpBitmap1;
D2D1_MAPPED_RECT bitmapData;
imageBrush->GetBitmap(tmpBitmap.ReleaseAndGetAddressOf());
///Creating a new bitmap
D2D1_SIZE_U dimensions;
dimensions.height = uBounds.bottom;
dimensions.width = uBounds.right;
D2D1_BITMAP_PROPERTIES1 d2dbp;
D2D1_PIXEL_FORMAT d2dpf;
FLOAT dpiX = 0;
FLOAT dpiY = 0;
d2dpf.format = DXGI_FORMAT_R8G8B8A8_UNORM;
d2dpf.alphaMode = D2D1_ALPHA_MODE_PREMULTIPLIED;
d2dFactory->GetDesktopDpi(&dpiX, &dpiY);
d2dbp.pixelFormat = d2dpf;
d2dbp.dpiX = dpiX;
d2dbp.dpiY = dpiY;
d2dbp.bitmapOptions = D2D1_BITMAP_OPTIONS_CPU_READ | D2D1_BITMAP_OPTIONS_CANNOT_DRAW;
d2dbp.colorContext = nullptr;
HRESULT hr = d2dCtx->CreateBitmap(dimensions, nullptr, 0, d2dbp, tmpBitmap1.GetAddressOf());
/// Getting image data
tmpBitmap1.Get()->CopyFromBitmap(&topLeft, tmpBitmap.Get(), &uBounds);
tmpBitmap1.Get()->Map(D2D1_MAP_OPTIONS_READ, &bitmapData);
The problem is that, at the end, bitmapData.bits is empty.
Where did I go wrong?
After starting with your code above and providing a proper bitmap brush, only a couple of modifications were required (I kept your variable names, etc.).
ID2D1Factory1* d2dFactory = /* [ get current factory ] */;
ASSERT(d2dFactory);
ID2D1DeviceContext* d2dCtx = /* [ get current target ] */;
ASSERT(d2dCtx);
Microsoft::WRL::ComPtr<ID2D1Bitmap> tmpBitmap { };
Microsoft::WRL::ComPtr<ID2D1Bitmap1> tmpBitmap1 { };
D2D1_MAPPED_RECT bitmapData { };
imageBrush->GetBitmap(tmpBitmap.ReleaseAndGetAddressOf());
ASSERT(tmpBitmap);
D2D1_SIZE_F const bitmapSize = tmpBitmap->GetSize();
ASSERT(bitmapSize.width > 0.0f);
ASSERT(bitmapSize.height > 0.0f);
D2D1_RECT_U const uBounds
{
    0u,
    0u,
    static_cast<UINT32>(bitmapSize.width),
    static_cast<UINT32>(bitmapSize.height) // bottom comes from the height, not the width
};
D2D1_SIZE_U const dimensions
{
    uBounds.right,  // width
    uBounds.bottom  // height
};
D2D1_BITMAP_PROPERTIES1 d2dbp { };
D2D1_PIXEL_FORMAT d2dpf { };
FLOAT dpiX = 0.0f;
FLOAT dpiY = 0.0f;
// This modification to the color format was required
// to correlate with my swap chain format. Otherwise
// you will receive an error [E_INVALIDARG] when
// calling CopyFromBitmap (as you had described).
d2dpf.format = DXGI_FORMAT_B8G8R8A8_UNORM; // <-- previously DXGI_FORMAT_R8G8B8A8_UNORM
d2dpf.alphaMode = D2D1_ALPHA_MODE_PREMULTIPLIED;
d2dbp.pixelFormat = d2dpf;
d2dFactory->GetDesktopDpi(&dpiX, &dpiY);
d2dbp.dpiX = dpiX;
d2dbp.dpiY = dpiY;
d2dbp.bitmapOptions = D2D1_BITMAP_OPTIONS_CPU_READ | D2D1_BITMAP_OPTIONS_CANNOT_DRAW;
d2dbp.colorContext = nullptr;
HRESULT hr = S_OK;
// [ omitting the interrogation of hr ]
D2D1_POINT_2U const topLeft { uBounds.left, uBounds.top };
hr = d2dCtx->CreateBitmap(dimensions, nullptr, 0u, d2dbp, tmpBitmap1.GetAddressOf());
hr = tmpBitmap1.Get()->CopyFromBitmap(&topLeft, tmpBitmap.Get(), &uBounds);
hr = tmpBitmap1.Get()->Map(D2D1_MAP_OPTIONS_READ, &bitmapData);
ASSERT(bitmapData.pitch == /* [call function to determine pitch] */);
ASSERT(bitmapData.bits != nullptr);
// [ ... code to further utilize bits ]
The primary change was to ensure the pixel format correlated with the format set in the swap chain, to avoid the error you described (E_INVALIDARG: one or more arguments are invalid) when calling CopyFromBitmap.
Additional code was used to validate the resulting D2D1_MAPPED_RECT (pitch and bits). Hope this helps.
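As one example of further utilizing the bits (an illustrative sketch, not the code elided above): D2D1_MAPPED_RECT only guarantees pitch-sized rows, so copy row by row into a tightly packed buffer and unmap when done:
UINT32 const w = uBounds.right;
UINT32 const h = uBounds.bottom;
std::vector<uint8_t> pixels(static_cast<size_t>(w) * h * 4);
for (UINT32 y = 0; y < h; ++y)
{
    memcpy(pixels.data() + static_cast<size_t>(y) * w * 4,
           bitmapData.bits + static_cast<size_t>(y) * bitmapData.pitch,
           static_cast<size_t>(w) * 4);
}
hr = tmpBitmap1.Get()->Unmap(); // release CPU access when finished with the mapping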