ID2D1Bitmap1::Map, when can you use it? - c++

So I recently went through and converted a simple test app I wrote to use the new version of Direct2D, which basically meant copying the relevant parts of the Direct2D Quickstart for Windows 8. That worked, in that my application behaved as before (it just draws a bunch of pixels).
Previously, to update the bitmap, I was doing the following:
for (int i = 0; i < 1000; ++i)
{
    int x = rand() % 600;
    int y = rand() % 600;
    int index = 4 * (x + (y * 600));
    imageData[index] = rand() % 256;
    imageData[index + 2] = 0;
}
D2D1_RECT_U rect2 = RectU(0, 0, 600, 600);
pBitmap->CopyFromMemory(&rect2, imageData, 600 * 4);
where imageData is just:
imageData = new byte[600*600*4];
That still worked, but I thought that since I've got this nifty Map method on my shiny new ID2D1Bitmap1 interface that I could get rid of that CPU-side array and do something like:
D2D1_MAPPED_RECT* mapped = NULL;
ThrowIfFailed(pBitmap->Map(D2D1_MAP_OPTIONS_WRITE, mapped));
for (int i = 0; i < 1000; ++i)
{
    int x = rand() % 600;
    int y = rand() % 600;
    int index = 4 * (x + (y * 600));
    mapped->bits[index] = rand() % 256;
    mapped->bits[index + 2] = 0;
}
ThrowIfFailed(pBitmap->Unmap());
This failed at the call to Map with E_INVALIDARG, every time, using various combinations of D2D1_BITMAP_OPTIONS in the D2D1_BITMAP_PROPERTIES1 passed to CreateBitmap and D2D1_MAP_OPTIONS in the call to Map.
Looking at the description of the D2D1_MAP_OPTIONS enumeration it appears that none of the 3 options (READ, WRITE, DISCARD) can actually be used on bitmaps I create with the Direct2D context...
How do I get, in Direct2D, a bitmap which I can map, write to, unmap and draw?

Your problem is that mapped shouldn't be a null pointer: Map fills in a D2D1_MAPPED_RECT structure that you supply, so you must pass the address of one. I suggest changing your code as follows:
D2D1_MAPPED_RECT mapped;
ThrowIfFailed( pBitmap->Map( D2D1_MAP_OPTIONS_WRITE, &mapped ) );

I recently dug into this and faced the same problem. As far as I understand, a D2D bitmap cannot be mapped for writing by the CPU. More than that, you can't create a bitmap that is both writable by D2D and readable by the CPU.
I wanted to read and write a byte array with both the CPU and the D2D API, but unfortunately I had to use two bitmaps created with different bitmapOptions. The first is suitable for the D2D API and can be a render target for the context:
props.bitmapOptions = D2D1_BITMAP_OPTIONS_CANNOT_DRAW | D2D1_BITMAP_OPTIONS_TARGET;
The second can be read by the CPU:
props.bitmapOptions = D2D1_BITMAP_OPTIONS_CANNOT_DRAW | D2D1_BITMAP_OPTIONS_CPU_READ;
The usage scenario is: render primitives into the first bitmap, copy it into the second with ID2D1Bitmap1::CopyFromBitmap, and then get the data out of the second via Map/Unmap. A minimal sketch of that flow follows.
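Here is a rough sketch of the two-bitmap approach, assuming the context (an ID2D1DeviceContext), ComPtr and ThrowIfFailed scaffolding from the quickstart mentioned above; the pixel format and sizes are illustrative and error handling is minimal:
// Bitmap A: a render target D2D can draw into (cannot be mapped).
D2D1_BITMAP_PROPERTIES1 targetProps = D2D1::BitmapProperties1(
    D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW,
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));
// Bitmap B: a CPU-readable staging bitmap (cannot be drawn or targeted).
D2D1_BITMAP_PROPERTIES1 stagingProps = D2D1::BitmapProperties1(
    D2D1_BITMAP_OPTIONS_CANNOT_DRAW | D2D1_BITMAP_OPTIONS_CPU_READ,
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));
ComPtr<ID2D1Bitmap1> target, staging;
ThrowIfFailed(context->CreateBitmap(D2D1::SizeU(600, 600), nullptr, 0, &targetProps, &target));
ThrowIfFailed(context->CreateBitmap(D2D1::SizeU(600, 600), nullptr, 0, &stagingProps, &staging));
// Render into 'target' via the context, then copy into the staging bitmap
// and map the staging bitmap for reading.
ThrowIfFailed(staging->CopyFromBitmap(nullptr, target.Get(), nullptr));
D2D1_MAPPED_RECT mapped;
ThrowIfFailed(staging->Map(D2D1_MAP_OPTIONS_READ, &mapped));
// ... read mapped.bits; successive rows are mapped.pitch bytes apart ...
ThrowIfFailed(staging->Unmap());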

Related

NVencs Output Bitstream is not readable

I have a question about Nvidia's NVenc API. I want to use the API to encode some OpenGL graphics. My problem is that the API reports no error throughout the whole program; everything seems to be fine. But the generated output is not readable by, e.g., VLC. If I try to play the generated file, VLC flashes a black screen for about 0.5 s and then ends playback.
The video has a length of 0, and the file seems rather small, too.
The resolution is 1280*720 and the size of a 5-second recording is only 700 kB. Is this realistic?
The flow of the application is as follows (a sketch of steps 2-4 appears after the list):
1). Render to a secondary framebuffer.
2). Download the framebuffer to one of two PBOs (glReadPixels()).
3). Map the PBO of the previous frame to get a pointer understandable by CUDA.
4). Call a simple CUDA kernel converting OpenGL's RGBA to ARGB, which should be understandable by NVenc according to this (p. 18). The kernel reads the content of the PBO and writes the converted content into a CUDA array (created with cudaMalloc) which is registered as an InputResource with NVenc.
5). The content of the converted array gets encoded. A completion event plus the corresponding output bitstream buffer get queued.
6). A secondary thread listens on the queued output events; if one event is signaled, the output bitstream gets mapped and written to HDD.
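For reference, a hedged sketch of steps 2-3 (the function and variable names are illustrative, not from the program above; assumes <cuda_gl_interop.h> and a GL_PIXEL_PACK_BUFFER 'pbo' that was registered once via cudaGraphicsGLRegisterBuffer as 'cudaPbo'; error checking elided):
void DownloadAndMapFrame(GLuint pbo, cudaGraphicsResource_t cudaPbo, int width, int height)
{
    // Step 2: asynchronous download of the framebuffer into the PBO.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

    // Step 3: map the PBO so CUDA can read it.
    cudaGraphicsMapResources(1, &cudaPbo, 0);
    void*  devPtr = nullptr;
    size_t size   = 0;
    cudaGraphicsResourceGetMappedPointer(&devPtr, &size, cudaPbo);

    // Step 4 would launch the RGBA -> ARGB conversion kernel on devPtr here.

    cudaGraphicsUnmapResources(1, &cudaPbo, 0);
}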
The initialization of the NVenc encoder:
InitParams* ip = new InitParams();
m_initParams = ip;
memset(ip, 0, sizeof(InitParams));
ip->version = NV_ENC_INITIALIZE_PARAMS_VER;
ip->encodeGUID = m_encoderGuid; //Used Codec
ip->encodeWidth = width; // Frame Width
ip->encodeHeight = height; // Frame Height
ip->maxEncodeWidth = 0; // Zero means no dynamic res changes
ip->maxEncodeHeight = 0;
ip->darWidth = width; // Aspect Ratio
ip->darHeight = height;
ip->frameRateNum = 60; // 60 fps
ip->frameRateDen = 1;
ip->reportSliceOffsets = 0; // According to programming guide
ip->enableSubFrameWrite = 0;
ip->presetGUID = m_presetGuid; // Used Preset for Encoder Config
NV_ENC_PRESET_CONFIG presetCfg; // Load the Preset Config
memset(&presetCfg, 0, sizeof(NV_ENC_PRESET_CONFIG));
presetCfg.version = NV_ENC_PRESET_CONFIG_VER;
presetCfg.presetCfg.version = NV_ENC_CONFIG_VER;
CheckApiError(m_apiFunctions.nvEncGetEncodePresetConfig(m_Encoder,
m_encoderGuid, m_presetGuid, &presetCfg));
memcpy(&m_encodingConfig, &presetCfg.presetCfg, sizeof(NV_ENC_CONFIG));
// And add information about Bitrate etc
m_encodingConfig.rcParams.averageBitRate = 500000;
m_encodingConfig.rcParams.maxBitRate = 600000;
m_encodingConfig.rcParams.rateControlMode = NV_ENC_PARAMS_RC_MODE::NV_ENC_PARAMS_RC_CBR;
ip->encodeConfig = &m_encodingConfig;
ip->enableEncodeAsync = 1; // Async Encoding
ip->enablePTD = 1; // Encoder handles picture ordering
Registration of CudaResource
m_cuContext->SetCurrent(); // Make the clients cuCtx current
NV_ENC_REGISTER_RESOURCE res;
memset(&res, 0, sizeof(NV_ENC_REGISTER_RESOURCE));
NV_ENC_REGISTERED_PTR resPtr; // handle to the cuda resource for future use
res.bufferFormat = m_inputFormat; // Format is ARGB
res.height = m_height;
res.width = m_width;
// NOTE: I've set the pitch to the width of the frame, because the resource is a non-pitched
//cudaArray. Is this correct? Pitch = 0 would produce no output.
res.pitch = pitch;
res.resourceToRegister = (void*) (uintptr_t) resourceToRegister; //CUdevptr to resource
res.resourceType =
NV_ENC_INPUT_RESOURCE_TYPE::NV_ENC_INPUT_RESOURCE_TYPE_CUDADEVICEPTR;
res.version = NV_ENC_REGISTER_RESOURCE_VER;
CheckApiError(m_apiFunctions.nvEncRegisterResource(m_Encoder, &res));
m_registeredInputResources.push_back(res.registeredResource);
Encoding
m_cuContext->SetCurrent(); // Make Clients context current
MapInputResource(id); //Map the CudaInputResource
NV_ENC_PIC_PARAMS temp;
memset(&temp, 0, sizeof(NV_ENC_PIC_PARAMS));
temp.version = NV_ENC_PIC_PARAMS_VER;
unsigned int currentBufferAndEvent = m_counter % m_registeredEvents.size(); //Counter is inc'ed in every Frame
temp.bufferFmt = m_currentlyMappedInputBuffer.mappedBufferFmt;
temp.inputBuffer = m_currentlyMappedInputBuffer.mappedResource; //got set by MapInputResource
temp.completionEvent = m_registeredEvents[currentBufferAndEvent];
temp.outputBitstream = m_registeredOutputBuffers[currentBufferAndEvent];
temp.inputWidth = m_width;
temp.inputHeight = m_height;
temp.inputPitch = m_width;
temp.inputTimeStamp = m_counter;
temp.pictureStruct = NV_ENC_PIC_STRUCT_FRAME; // According to samples
temp.qpDeltaMap = NULL;
temp.qpDeltaMapSize = 0;
EventWithId latestEvent(currentBufferAndEvent,
m_registeredEvents[currentBufferAndEvent]);
PushBackEncodeEvent(latestEvent); // Store the Event with its ID in a Queue
CheckApiError(m_apiFunctions.nvEncEncodePicture(m_Encoder, &temp));
m_counter++;
UnmapInputResource(id); // Unmap
Any hint about where to look is very much appreciated; I'm running out of ideas about what might be wrong.
Thanks a lot!
With the help of hall822 from the nvidia forums I managed to solve the issue.
The primary error was the pitch I used when registering my CUDA resource. I'm using a framebuffer renderbuffer to draw my content into; its data is a plain, unpitched array. My first thought, giving a pitch equal to zero, failed: the encoder did nothing. The next idea was to set it to the width of the frame; with that, a quarter of the image was encoded.
// NOTE: I've set the pitch to the width of the frame, because the resource is a non-pitched
//cudaArray. Is this correct? Pitch = 0 would produce no output.
res.pitch = pitch;
To answer this question: yes, it is correct. But the pitch is measured in bytes. So, because I'm encoding RGBA frames, the correct pitch has to be FRAME_WIDTH * 4.
The second error was that my color channels were not right (see point 4 in my opening post). The NVidia enum says that the encoder expects the channels in ARGB format, but what is actually meant is BGRA byte order, so the alpha channel, which is always 255, polluted the blue channel.
Edit: This may be due to the fact that NVidia is using little endian internally. I'm writing my pixel data to a byte array; choosing another type like int32 may allow one to pass actual ARGB data.
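To make both fixes concrete, here is a minimal sketch; the helper function and its name are illustrative, not part of the original program, and m_width follows the naming above:
#include <cstddef>
#include <cstdint>

// Fix 1: pitch is measured in bytes, so for tightly packed 32-bit pixels
// the registration above becomes:
//     res.pitch = m_width * 4;

// Fix 2: the "ARGB" NVenc expects is the little-endian DWORD layout,
// i.e. bytes B, G, R, A in memory. A CPU-side swizzle from RGBA:
inline void RgbaToNvencArgb(const uint8_t* src, uint8_t* dst, std::size_t pixelCount)
{
    for (std::size_t i = 0; i < pixelCount; ++i)
    {
        dst[4 * i + 0] = src[4 * i + 2]; // B
        dst[4 * i + 1] = src[4 * i + 1]; // G
        dst[4 * i + 2] = src[4 * i + 0]; // R
        dst[4 * i + 3] = src[4 * i + 3]; // A
    }
}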

Problems to understand DXGI DirectX 11 Desktop Duplication to get a Buffer or Array

I want to understand DXGI Desktop Duplication. I have read a lot, and this is code I copied from parts of the DesktopDuplication sample on the Microsoft website. My plan is to get the buffer or array of the desktop image, because I want to make a new texture for another program. I hope somebody can explain to me what I can do to get it.
void DesktopDublication::GetFrame(_Out_ FRAME_DATA* Data, _Out_ bool* Timeout)
{
    IDXGIResource* DesktopResource = nullptr;
    DXGI_OUTDUPL_FRAME_INFO FrameInfo;

    // Get new frame
    HRESULT hr = m_DeskDupl->AcquireNextFrame(500, &FrameInfo, &DesktopResource);
    if (hr == DXGI_ERROR_WAIT_TIMEOUT)
    {
        *Timeout = true;
        return; // nothing to process this time around
    }
    *Timeout = false;
    if (FAILED(hr))
    {
        return;
    }

    // If still holding old frame, destroy it
    if (m_AcquiredDesktopImage)
    {
        m_AcquiredDesktopImage->Release();
        m_AcquiredDesktopImage = nullptr;
    }

    // QI for ID3D11Texture2D
    hr = DesktopResource->QueryInterface(__uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&m_AcquiredDesktopImage));
    DesktopResource->Release();
    DesktopResource = nullptr;
    if (FAILED(hr))
    {
        return;
    }

    // Get metadata
    if (FrameInfo.TotalMetadataBufferSize)
    {
        // Old buffer too small
        if (FrameInfo.TotalMetadataBufferSize > m_MetaDataSize)
        {
            if (m_MetaDataBuffer)
            {
                delete[] m_MetaDataBuffer;
                m_MetaDataBuffer = nullptr;
            }
            m_MetaDataBuffer = new (std::nothrow) BYTE[FrameInfo.TotalMetadataBufferSize];
            if (!m_MetaDataBuffer)
            {
                m_MetaDataSize = 0;
                Data->MoveCount = 0;
                Data->DirtyCount = 0;
                return;
            }
            m_MetaDataSize = FrameInfo.TotalMetadataBufferSize;
        }

        UINT BufSize = FrameInfo.TotalMetadataBufferSize;

        // Get move rectangles
        hr = m_DeskDupl->GetFrameMoveRects(BufSize, reinterpret_cast<DXGI_OUTDUPL_MOVE_RECT*>(m_MetaDataBuffer), &BufSize);
        if (FAILED(hr))
        {
            Data->MoveCount = 0;
            Data->DirtyCount = 0;
            return;
        }
        Data->MoveCount = BufSize / sizeof(DXGI_OUTDUPL_MOVE_RECT);

        BYTE* DirtyRects = m_MetaDataBuffer + BufSize;
        BufSize = FrameInfo.TotalMetadataBufferSize - BufSize;

        // Get dirty rectangles
        hr = m_DeskDupl->GetFrameDirtyRects(BufSize, reinterpret_cast<RECT*>(DirtyRects), &BufSize);
        if (FAILED(hr))
        {
            Data->MoveCount = 0;
            Data->DirtyCount = 0;
            return;
        }
        Data->DirtyCount = BufSize / sizeof(RECT);
        Data->MetaData = m_MetaDataBuffer;
    }

    Data->Frame = m_AcquiredDesktopImage;
    Data->FrameInfo = FrameInfo;
}
If I'm understanding you correctly, you want to get the current desktop image, duplicate it into a private texture, and then render that private texture onto your window. I would start by reading up on Direct3D 11 and learning how to render a scene, as you will need D3D to do anything with the texture object you get from DXGI. This, this, and this can get you started on D3D11. I would also spend some time reading through the source of the sample you copied your code from, as it completely explains how to do this. Here is the link to the full source code for that sample.
To actually get the texture data and render it out, you need to do the following:
1). Create a D3D11 Device object and a Device Context.
2). Write and compile a Vertex and Pixel shader for the graphics card, then load them into your application.
3). Create an Input Layout object and set it to the device.
4). Initialize the required Blend, Depth-Stencil, and Rasterizer states for the device.
5). Create a Texture object and a Shader Resource View object.
6). Acquire the Desktop Duplication texture using the above code.
7). Use CopyResource to copy the data into your texture.
8). Render that texture to the screen.
This will capture all data displayed on one of the desktops to your texture. It does not do processing on the dirty rects of the desktop. It does not do processing on moved regions. This is bare bones 'capture the desktop and display it elsewhere' code.
If you want to get more in depth, read the linked resources and study the sample code, as the sample basically does what you're asking for.
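As a rough illustration of steps 5-7, the copy into a private, shader-visible texture could look like this (this assumes an ID3D11Device* m_Device and ID3D11DeviceContext* m_Context exist alongside the question's m_AcquiredDesktopImage; names are hypothetical and error handling is elided):
D3D11_TEXTURE2D_DESC desc = {};
m_AcquiredDesktopImage->GetDesc(&desc);
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags = 0;
desc.MiscFlags = 0;

ID3D11Texture2D* privateTex = nullptr;
m_Device->CreateTexture2D(&desc, nullptr, &privateTex);

// Step 7: copy the duplicated desktop image into our private texture.
m_Context->CopyResource(privateTex, m_AcquiredDesktopImage);

// A view so shaders can sample the texture during step 8.
ID3D11ShaderResourceView* srv = nullptr;
m_Device->CreateShaderResourceView(privateTex, nullptr, &srv);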
Since tacking this onto my last answer didn't feel quite right, I decided to create a second.
If you want to read the desktop data to a file, you need a D3D11 Device object, a texture object with the D3D11_USAGE_STAGING flag set, and a method of converting the RGBA pixel data of the desktop texture to whatever it is you want. The basic procedure is a simplified version of the one in my original answer:
1). Create a D3D11 Device object and a Device Context.
2). Create a Staging Texture with the same format as the Desktop Texture.
3). Use CopyResource to copy the Desktop Texture into your Staging Texture.
4). Use ID3D11DeviceContext::Map() to get a pointer to the data contained in the Staging Texture.
Make sure you know how Map works and make sure you can write out image files from a single binary stream. There may also be padding in the image buffer, so be aware you may also need to filter that out. Additionally, make sure you Unmap the buffer instead of calling free, as the buffer given to you almost certainly does not belong to the CRT.
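A minimal sketch of that read-back path (this assumes a 'device', 'context' and an acquired desktop texture 'desktopTex'; the names are hypothetical and error handling is elided):
D3D11_TEXTURE2D_DESC desc = {};
desktopTex->GetDesc(&desc);
desc.Usage = D3D11_USAGE_STAGING;
desc.BindFlags = 0;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.MiscFlags = 0;

ID3D11Texture2D* staging = nullptr;
device->CreateTexture2D(&desc, nullptr, &staging);
context->CopyResource(staging, desktopTex);

D3D11_MAPPED_SUBRESOURCE mapped = {};
context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);
// RowPitch may exceed Width * 4 (padding), so walk the image row by row.
for (UINT y = 0; y < desc.Height; ++y)
{
    const BYTE* row = static_cast<const BYTE*>(mapped.pData) + y * mapped.RowPitch;
    // ... consume desc.Width * 4 bytes of BGRA data from 'row' ...
}
context->Unmap(staging, 0);
staging->Release();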

wxWidgets (C++) fails to perform many wxDC.DrawPoint() operations

I would be very grateful for any kind of help. I'm developing an application using wxWidgets via wxDevC++ on Windows 7. I have a function that does some calculations and is supposed to produce a 2D colour plot of the data acquired. It looks like this:
void draw2DColourPlot(wxDC* dc)
{
    wxBitmap bmp;
    bmp.Create(800, 400);
    wxMemoryDC* memDC = new wxMemoryDC(bmp);
    std::ofstream dbgStr("interpolating.txt", std::ofstream::out);
    int progress = 0;
    for (int i = 0; i < 800; ++i)
    {
        for (int j = 0; j < 400; ++j)
        {
            unsigned char r, g, b;
            // calculate values of r, g, b
            memDC->SetPen(wxPen(wxColor(r, g, b), 1));
            memDC->DrawPoint(i, j);
            dbgStr << "Point ( " << i << ", " << j << " ) calculated" << '\n';
            ++progress;
            updateProgressBar(progress);
        }
    }
    dc->SetPen(wxPen(wxColor(255, 255, 255), 1));
    dc->Clear();
    dc->Blit(0, 0, 800, 400, memDC, 0, 0);
    return;
}
The problem is that sometimes it does not work: the progress bar reaches some value (between 10 and 90 percent, as I've observed), then everything freezes for a couple of seconds, and then the DC goes blank (any previous content disappears). After a few tries the proper result may be drawn, but not reliably. In the "interpolating.txt" file the last line is "Point ( 799, 399 ) calculated".
Previously I didn't use wxMemoryDC; I called dc->DrawPoint() directly and observed the same behaviour (points were drawn as expected, but at some point everything disappeared).
It happens more often when executing on my laptop (also Windows 7), but sometimes on the PC, too.
Do you have any solution to this? Is there a chance that I'm using wxWidgets incorrectly and it should be done a different way?
Thanks for any help.
I think your problem is due to memory/resource leaks: you allocate the wxMemoryDC on the heap but never delete it, and leaking DCs is particularly bad because they are a limited resource under Windows, so if you leak too many of them, you won't be able to create any more. To fix this, just allocate it on the stack instead.
Secondly, while what you do is not wrong, it's horribly inefficient. For a simple improvement, set your pixels in a wxImage, then convert it to a wxBitmap in one go. For a yet more efficient approach, use wxPixelData to set pixels directly in the bitmap. Either will work much faster than what you do now.
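For illustration, the wxImage variant could look roughly like this (the r/g/b values are placeholders for your calculation):
void draw2DColourPlot(wxDC* dc)
{
    wxImage img(800, 400);
    for (int i = 0; i < 800; ++i)
    {
        for (int j = 0; j < 400; ++j)
        {
            unsigned char r = 0, g = 0, b = 0;
            // ... calculate values of r, g, b ...
            img.SetRGB(i, j, r, g, b);
        }
    }
    wxBitmap bmp(img);     // one conversion at the end
    wxMemoryDC memDC(bmp); // stack-allocated, so the DC is not leaked
    dc->Clear();
    dc->Blit(0, 0, 800, 400, &memDC, 0, 0);
}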

How do I grab frames from a video stream on Windows 8 modern apps?

I am trying to extract images out of an mp4 video stream. After looking things up, it seems like the proper way of doing that is to use Media Foundation in C++ and read the frames out of the stream.
There's very little by way of documentation and samples, but after some digging, it seems like some people have had success by rendering frames into a texture and copying the content of that texture to a CPU-readable texture (I am not even sure if I am using the correct terms here). Trying what I found, though, gives me errors and I am probably doing a bunch of stuff wrong.
Here's a short piece of code from where I try to do that (project itself attached at the bottom).
ComPtr<ID3D11Texture2D> spTextureDst;
MEDIA::ThrowIfFailed(
m_spDX11SwapChain->GetBuffer(0, IID_PPV_ARGS(&spTextureDst))
);
auto rcNormalized = MFVideoNormalizedRect();
rcNormalized.left = 0;
rcNormalized.right = 1;
rcNormalized.top = 0;
rcNormalized.bottom = 1;
MEDIA::ThrowIfFailed(
m_spMediaEngine->TransferVideoFrame(m_spRenderTexture.Get(), &rcNormalized, &m_rcTarget, &m_bkgColor)
);
//copy the render target texture to the readable texture.
m_spDX11DeviceContext->CopySubresourceRegion(m_spCopyTexture.Get(),0,0,0,0,m_spRenderTexture.Get(),0,NULL);
m_spDX11DeviceContext->Flush();
//Map the readable texture;
D3D11_MAPPED_SUBRESOURCE mapped = {0};
m_spDX11DeviceContext->Map(m_spCopyTexture.Get(),0,D3D11_MAP_READ,0,&mapped);
void* buffer = ::CoTaskMemAlloc(600 * 400 * 3);
memcpy(buffer, mapped.pData,600 * 400 * 3);
//unmap so we can copy during next update.
m_spDX11DeviceContext->Unmap(m_spCopyTexture.Get(),0);
// and the present it to the screen
MEDIA::ThrowIfFailed(
m_spDX11SwapChain->Present(1, 0)
);
}
The error I get is:
First-chance exception at 0x76814B32 in App1.exe: Microsoft C++ exception: Platform::InvalidArgumentException ^ at memory location 0x07AFF60C. HRESULT:0x80070057
I am not really sure how to pursue it further it since, like I said, there's very little docs about it.
Here's the modified sample I am working off of. This question is specific to WinRT (Windows 8 apps).
UPDATE success!! see edit at bottom
Some partial success, but maybe enough to answer your question. Please read on.
On my system, debugging the exception showed that the OnTimer() function failed when attempting to call TransferVideoFrame(). The error it gave was InvalidArgumentException.
So, a bit of Googling led to my first discovery - there is apparently a bug in NVIDIA drivers - which means video playback seems to fail with the 11 and 10 feature levels.
So my first change was in function CreateDX11Device() as follows:
static const D3D_FEATURE_LEVEL levels[] = {
/*
D3D_FEATURE_LEVEL_11_1,
D3D_FEATURE_LEVEL_11_0,
D3D_FEATURE_LEVEL_10_1,
D3D_FEATURE_LEVEL_10_0,
*/
D3D_FEATURE_LEVEL_9_3,
D3D_FEATURE_LEVEL_9_2,
D3D_FEATURE_LEVEL_9_1
};
Now TransferVideoFrame() still fails, but gives E_FAIL (as an HRESULT) instead of an invalid argument.
More Googling led to my second discovery - an example showing use of TransferVideoFrame() without using CreateTexture2D() to pre-create the texture. I see you already had some code in OnTimer() similar to this but which was not used, so I guess you'd found the same link.
Anyway, I now used this code to get the video frame:
ComPtr <ID3D11Texture2D> spTextureDst;
m_spDX11SwapChain->GetBuffer (0, IID_PPV_ARGS (&spTextureDst));
m_spMediaEngine->TransferVideoFrame (spTextureDst.Get (), nullptr, &m_rcTarget, &m_bkgColor);
After doing this, I see that TransferVideoFrame() succeeds (good!) but calling Map() on your copied texture - m_spCopyTexture - fails because that texture wasn't created with CPU read access.
So, I just used your read/write m_spRenderTexture as the target of the copy instead because that has the correct flags and, due to the previous change, I was no longer using it.
//copy the render target texture to the readable texture.
m_spDX11DeviceContext->CopySubresourceRegion(m_spRenderTexture.Get(),0,0,0,0,spTextureDst.Get(),0,NULL);
m_spDX11DeviceContext->Flush();
//Map the readable texture;
D3D11_MAPPED_SUBRESOURCE mapped = {0};
HRESULT hr = m_spDX11DeviceContext->Map(m_spRenderTexture.Get(),0,D3D11_MAP_READ,0,&mapped);
void* buffer = ::CoTaskMemAlloc(176 * 144 * 3);
memcpy(buffer, mapped.pData,176 * 144 * 3);
//unmap so we can copy during next update.
m_spDX11DeviceContext->Unmap(m_spRenderTexture.Get(),0);
Now, on my system, the OnTimer() function does not fail. Video frames are rendered to the texture and the pixel data is copied out successfully to the memory buffer.
Before looking to see if there are further problems, maybe this is a good time to see if you can make the same progress as I have so far. If you comment on this answer with more info, I will edit the answer to add any more help if possible.
EDIT
Changes made to texture description in FramePlayer::CreateBackBuffers()
//make first texture cpu readable
D3D11_TEXTURE2D_DESC texDesc = {0};
texDesc.Width = 176;
texDesc.Height = 144;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D11_USAGE_STAGING;
texDesc.BindFlags = 0;
texDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE;
texDesc.MiscFlags = 0;
MEDIA::ThrowIfFailed(m_spDX11Device->CreateTexture2D(&texDesc,NULL,&m_spRenderTexture));
Note also that there's a memory leak that needs to be cleared up sometime (I'm sure you're aware) - the memory allocated in the following line is never freed:
void* buffer = ::CoTaskMemAlloc(176 * 144 * 3); // sizes changed for my test
SUCCESS
I have now succeeded in saving an individual frame, but now without the use of the copy texture.
First, I downloaded the latest version of the DirectXTex Library, which provides DX11 texture helper functions, for example to extract an image from a texture and to save to file. The instructions for adding the DirectXTex library to your solution as an existing project need to be followed carefully, taking note of the changes needed for Windows 8 Store Apps.
Once the above library is included, referenced and built, add the following #includes to FramePlayer.cpp:
#include "..\DirectXTex\DirectXTex.h" // nb - use the relative path you copied to
#include <wincodec.h>
Finally, the central section of code in FramePlayer::OnTimer() needs to be similar to the following. You will see I just save to the same filename each time, so this will need amending to add e.g. a frame number to the name:
// new frame available at the media engine so get it
ComPtr<ID3D11Texture2D> spTextureDst;
MEDIA::ThrowIfFailed(m_spDX11SwapChain->GetBuffer(0, IID_PPV_ARGS(&spTextureDst)));
auto rcNormalized = MFVideoNormalizedRect();
rcNormalized.left = 0;
rcNormalized.right = 1;
rcNormalized.top = 0;
rcNormalized.bottom = 1;
MEDIA::ThrowIfFailed(m_spMediaEngine->TransferVideoFrame(spTextureDst.Get(), &rcNormalized, &m_rcTarget, &m_bkgColor));
// capture an image from the DX11 texture
DirectX::ScratchImage pImage;
HRESULT hr = DirectX::CaptureTexture(m_spDX11Device.Get(), m_spDX11DeviceContext.Get(), spTextureDst.Get(), pImage);
if (SUCCEEDED(hr))
{
    // get the image object from the wrapper
    const DirectX::Image *pRealImage = pImage.GetImage(0, 0, 0);
    // set some place to save the image frame
    StorageFolder ^dataFolder = ApplicationData::Current->LocalFolder;
    Platform::String ^szPath = dataFolder->Path + "\\frame.png";
    // save the image to file
    hr = DirectX::SaveToWICFile(*pRealImage, DirectX::WIC_FLAGS_NONE, GUID_ContainerFormatPng, szPath->Data());
}
// and the present it to the screen
MEDIA::ThrowIfFailed(m_spDX11SwapChain->Present(1, 0));
I don't have time right now to take this any further but I'm very pleased with what I have achieved so far :-))
Can you take a fresh look and update your results in comments?
Look at the Video Thumbnail Sample and the Source Reader documentation.
You can find sample code under SDK Root\Samples\multimedia\mediafoundation\VideoThumbnail
I think OpenCV may help you.
OpenCV offers api to capture frames from camera or video files.
You can download it here http://opencv.org/downloads.html.
The following is a demo I wrote with OpenCV 2.3.1.
#include "opencv.hpp"
using namespace cv;
int main()
{
VideoCapture cap("demo.avi"); // open a video to capture
if (!cap.isOpened()) // check if succeeded
return -1;
Mat frame;
namedWindow("Demo", CV_WINDOW_NORMAL);
// Loop to capture frame and show on the window
while (1) {
cap >> frame;
if (frame.empty())
break;
imshow("Demo", frame);
if (waitKey(33) >= 0) // pause 33ms every frame
break;
}
return 0;
}

Display Different images per monitor directX 10

I am fairly new to DirectX 10 programming, and I have been trying to do the following with my limited skills (though I have a strong background in OpenGL).
I am trying to display two different textured quads, one per monitor. To do so, I understand that I need a single D3D10 device, multiple (2) swap chains, and a single vertex buffer.
While I think I'm able to create all of those, I'm still pretty unsure how to handle all of them. Do I need multiple ID3D10RenderTargetView(s)? How and where should I use OMSetRenderTargets(...)?
Other than MSDN, documentation or explanation of these concepts is rather limited, so any help would be very welcome. Here is some code I have:
Here's the rendering code
for (int i = 0; i < screenNumber; i++) {
    // clear scene
    pD3DDevice->ClearRenderTargetView(pRenderTargetView, D3DXCOLOR(0, 1, 0, 0));

    // fill vertex buffer with vertices
    UINT numVertices = 4;
    vertex* v = NULL;

    // lock vertex buffer for CPU use
    pVertexBuffer->Map(D3D10_MAP_WRITE_DISCARD, 0, (void**)&v);
    v[0] = vertex(D3DXVECTOR3(-1, -1, 0), D3DXVECTOR4(1, 0, 0, 1), D3DXVECTOR2(0.0f, 1.0f));
    v[1] = vertex(D3DXVECTOR3(-1, 1, 0), D3DXVECTOR4(0, 1, 0, 1), D3DXVECTOR2(0.0f, 0.0f));
    v[2] = vertex(D3DXVECTOR3(1, -1, 0), D3DXVECTOR4(0, 0, 1, 1), D3DXVECTOR2(1.0f, 1.0f));
    v[3] = vertex(D3DXVECTOR3(1, 1, 0), D3DXVECTOR4(1, 1, 0, 1), D3DXVECTOR2(1.0f, 0.0f));
    pVertexBuffer->Unmap();

    // Set primitive topology
    pD3DDevice->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);

    // set texture
    pTextureSR->SetResource(textureSRV[textureIndex]);

    // get technique desc
    D3D10_TECHNIQUE_DESC techDesc;
    pBasicTechnique->GetDesc(&techDesc);

    // This is where you actually use the shader code
    for (UINT p = 0; p < techDesc.Passes; ++p)
    {
        // apply technique
        pBasicTechnique->GetPassByIndex(p)->Apply(0);
        // draw
        pD3DDevice->Draw(numVertices, 0);
    }

    // flip buffers
    pSwapChain[i]->Present(0, 0);
}
And here's the code for creating the render targets, which I am not sure is good:
for (int i = 0; i < screenNumber; ++i) {
    // try to get the back buffer
    ID3D10Texture2D* pBackBuffer;
    if (FAILED(pSwapChain[i]->GetBuffer(0, __uuidof(ID3D10Texture2D), (LPVOID*)&pBackBuffer)))
        return fatalError("Could not get back buffer");

    // try to create render target view
    if (FAILED(pD3DDevice->CreateRenderTargetView(pBackBuffer, NULL, &pRenderTargetView)))
        return fatalError("Could not create render target view");

    pBackBuffer->Release();
    pD3DDevice->OMSetRenderTargets(1, &pRenderTargetView, NULL);
}
return true;
}
I hope I got the gist of what you wish to do - render different content on two different monitors while using a single graphics card (graphics adapter) which maps its output to those monitors. For that, you're going to need one device (for the single graphics card/adapter) and enumerate just how many outputs there are at the user's machine.
So, in total - that means one device, two outputs, two windows and therefore - two swap chains.
A little introduction
With DirectX 10+, this falls under DXGI (DirectX Graphics Infrastructure), which manages the common low-level logistics involved in DirectX 10+ development. As you probably know, DX10+ dumped the old requirement of enumerating feature sets and the like, requiring every DX10+ capable card to share in all of the features defined by the API; the only thing that varies is the extent and capability of the card (in other words, lousy performance is preferable to the app crashing and burning). This was all within DirectX 9 in the past, but people at Microsoft decided to pull it out and call it DXGI. Now, we can use DXGI functionality to set up our multi-monitor environment.
Do I need multiple ID3D10RenderTargetView(s) ?
Yes, you do need multiple render target views; the count depends (like the swap chains and windows) on the number of monitors you have. But, to save on word count, let's write it out as simply as possible, with additional information where it's needed:
Enumerate all adapters available on the system.
For each adapter, enumerate all outputs available (and active) and create a device to accompany it.
With the enumerated data stored in a suitable structure (think arrays which can quickly relinquish size information), use it to create n windows, swap chains, render target views, depth/stencil textures and their respective views where n is equal to the number of outputs.
With everything created, for each window you are rendering into, you can define special routines which will use the available geometry (and other) data to output your results - which resolves to what each monitor gets in fullscreen (don't forget to adjust the viewport for every window accordingly).
Present your data by iterating over every swap chain which is linked to its respective window and swap buffers with Present()
Now, while this is rich in word count, some code is worth a lot more. This is designed to give you a coarse idea of what goes into implementing a simple multi-monitor application. So, the assumptions are that there is only one adapter (a rather bold statement nowadays) and multiple outputs - and no failsafes. I'll leave the fun part to you. The answer to the second question is downstairs...
Do note there's no memory management involved. We assume everything magically gets cleaned up when it is not needed for illustration purposes. Be a good memory citizen.
Getting the adapter
IDXGIAdapter* adapter = NULL;

void GetAdapter() // applicable for multiple ones with little effort
{
    // remember, we assume there's only one adapter (example purposes)
    for (int i = 0; DXGI_ERROR_NOT_FOUND != factory->EnumAdapters(i, &adapter); ++i)
    {
        // get the description of the adapter, assuming no failure
        DXGI_ADAPTER_DESC adapterDesc;
        HRESULT hr = adapter->GetDesc(&adapterDesc);
        // Getting the outputs active on our adapter
        EnumOutputsOnAdapter();
    }
}
Acquiring the outputs on our adapter
std::vector<IDXGIOutput*> outputArray; // contains outputs per adapter

void EnumOutputsOnAdapter()
{
    IDXGIOutput* output = NULL;
    for (int i = 0; DXGI_ERROR_NOT_FOUND != adapter->EnumOutputs(i, &output); ++i)
    {
        // get the description
        DXGI_OUTPUT_DESC outputDesc;
        HRESULT hr = output->GetDesc(&outputDesc);
        outputArray.push_back(output);
    }
}
Now, I must assume that you're at least aware of the Win32 API considerations, creating window classes, registering with the system, creating windows, etc... Therefore, I will not qualify its creation, only elaborate how it pertains to multiple windows. Also, I will only consider the fullscreen case here, but creating it in windowed mode is more than possible and rather trivial.
Creating the actual windows for our outputs
Since we assume existence of just one adapter, we only consider the enumerated outputs linked to that particular adapter. It would be preferable to organize all window data in neat little structures, but for the purposes of this answer, we'll just shove them into a simple struct and then into yet another std::vector object, and by them I mean handles to respective windows (HWND) and their size (although for our case it's constant).
But still, we have to address the fact that we have one swap chain, one render target view, one depth/stencil view per window. So, why not feed all of that in that little struct which describes each of our windows? Makes sense, right?
struct WindowDataContainer
{
    // Direct3D 10 stuff per window data
    IDXGISwapChain* swapChain;
    ID3D10RenderTargetView* renderTargetView;
    ID3D10DepthStencilView* depthStencilView;
    // window goodies
    HWND hWnd;
    int width;
    int height;
};
Nice. Well, not really. But still... Moving on! Now to create the windows for outputs:
std::vector<WindowDataContainer*> windowsArray;

void CreateWindowsForOutputs()
{
    for (int i = 0; i < outputArray.size(); ++i)
    {
        IDXGIOutput* output = outputArray.at(i);
        DXGI_OUTPUT_DESC outputDesc;
        output->GetDesc(&outputDesc);
        int x = outputDesc.DesktopCoordinates.left;
        int y = outputDesc.DesktopCoordinates.top;
        int width = outputDesc.DesktopCoordinates.right - x;
        int height = outputDesc.DesktopCoordinates.bottom - y;
        // Don't forget to clean this up. And all D3D COM objects.
        WindowDataContainer* window = new WindowDataContainer;
        window->hWnd = CreateWindow(windowClassName,
                                    windowName,
                                    WS_POPUP,
                                    x,
                                    y,
                                    width,
                                    height,
                                    NULL,
                                    0,
                                    instance,
                                    NULL);
        // show the window
        ShowWindow(window->hWnd, SW_SHOWDEFAULT);
        // set width and height
        window->width = width;
        window->height = height;
        // shove it in the std::vector
        windowsArray.push_back(window);
        // if this is the first window, associate it with DXGI so it can jump in
        // when there is something of interest in the message queue
        // (think fullscreen mode switches etc. - MSDN for more info).
        if (i == 0)
            factory->MakeWindowAssociation(window->hWnd, 0);
    }
}
Cute, now that's done. Since we only have one adapter and therefore only one device to accompany it, create it as usual. In my case, it's simply a global interface pointer which can be accessed all over the place. We are not going for code of the year here, so why the hell not, eh?
Creating the swap chains, views and the depth/stencil 2D texture
Now, our friendly swap chains... You might be used to actually creating them by invoking the "naked" function D3D10CreateDeviceAndSwapChain(...), but as you know, we've already made our device. We only want one. And multiple swap chains. Well, that's a pickle. Luckily, our DXGIFactory interface has swap chains on its production line which we can receive for free with complementary kegs of rum. Onto the swap chains then, create for every window one:
void CreateSwapChainsAndViews()
{
    for (int i = 0; i < windowsArray.size(); i++)
    {
        WindowDataContainer* window = windowsArray.at(i);
        // get the DXGI device
        IDXGIDevice* DXGIDevice = NULL;
        device->QueryInterface(IID_IDXGIDevice, (void**)&DXGIDevice); // COM stuff, hopefully you are familiar
        // create a swap chain
        DXGI_SWAP_CHAIN_DESC swapChainDesc;
        // fill it in
        HRESULT hr = factory->CreateSwapChain(DXGIDevice, &swapChainDesc, &window->swapChain);
        DXGIDevice->Release();
        DXGIDevice = NULL;
        // get the backbuffer
        ID3D10Texture2D* backBuffer = NULL;
        hr = window->swapChain->GetBuffer(0, IID_ID3D10Texture2D, (void**)&backBuffer);
        // get the backbuffer desc
        D3D10_TEXTURE2D_DESC backBufferDesc;
        backBuffer->GetDesc(&backBufferDesc);
        // create the render target view
        D3D10_RENDER_TARGET_VIEW_DESC RTVDesc;
        // fill it in
        device->CreateRenderTargetView(backBuffer, &RTVDesc, &window->renderTargetView);
        backBuffer->Release();
        backBuffer = NULL;
        // Create depth stencil texture
        ID3D10Texture2D* depthStencil = NULL;
        D3D10_TEXTURE2D_DESC descDepth;
        // fill it in
        device->CreateTexture2D(&descDepth, NULL, &depthStencil);
        // Create the depth stencil view
        D3D10_DEPTH_STENCIL_VIEW_DESC descDSV;
        // fill it in
        device->CreateDepthStencilView(depthStencil, &descDSV, &window->depthStencilView);
    }
}
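Since the descriptions above are left as "fill it in", here is one plausible way to fill in the swap chain description for this fullscreen-per-output setup (the values are illustrative, not the only valid ones):
DXGI_SWAP_CHAIN_DESC swapChainDesc = {};
swapChainDesc.BufferCount = 1;
swapChainDesc.BufferDesc.Width = window->width;
swapChainDesc.BufferDesc.Height = window->height;
swapChainDesc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
swapChainDesc.BufferDesc.RefreshRate.Numerator = 60;
swapChainDesc.BufferDesc.RefreshRate.Denominator = 1;
swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChainDesc.OutputWindow = window->hWnd;
swapChainDesc.SampleDesc.Count = 1;
swapChainDesc.SampleDesc.Quality = 0;
swapChainDesc.Windowed = FALSE; // fullscreen on its output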
We now have everything we need. All that you need to do is define a function which iterates over all windows and draws different stuff appropriately.
How and where should I Use OMSetRenderTargets(...) ?
In the just mentioned function which iterates over all windows and uses the appropriate render target (courtesy of our per-window data container):
void MultiRender()
{
    // Clear them all
    for (int i = 0; i < windowsArray.size(); i++)
    {
        WindowDataContainer* window = windowsArray.at(i);
        // There is the answer to your second question:
        device->OMSetRenderTargets(1, &window->renderTargetView, window->depthStencilView);
        // Don't forget to adjust the viewport; in fullscreen it's not important...
        D3D10_VIEWPORT Viewport;
        Viewport.TopLeftX = 0;
        Viewport.TopLeftY = 0;
        Viewport.Width = window->width;
        Viewport.Height = window->height;
        Viewport.MinDepth = 0.0f;
        Viewport.MaxDepth = 1.0f;
        device->RSSetViewports(1, &Viewport);
        // TO DO: AMAZING STUFF PER WINDOW
    }
}
Of course, don't forget to run through all the swap chains and swap buffers on a per-window basis, as sketched below. The code here is just for the purposes of this answer; it requires a bit more work, error checking (failsafes) and contemplation to get it working just the way you like it. In other words, it should give you a simplified overview, not a production solution.
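A minimal sketch of that per-window present pass (naming follows the snippets above):
void PresentAll()
{
    for (int i = 0; i < windowsArray.size(); i++)
    {
        WindowDataContainer* window = windowsArray.at(i);
        window->swapChain->Present(1, 0); // 1 = wait for vsync
    }
}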
Good luck and happy coding! Sheesh, this is huge.