DXGI screen capture distorted image - C++

Not only does the FPS drop from 60 to 20-21, but the image also looks distorted, as shown in the first image below. The second image is what it should look like.
[Image: what it looks like]
[Image: what it should look like]
if (captureVideo == 1) {
    pNewTexture = NULL;
    // Use the IDXGISwapChain::GetBuffer API to retrieve a swap chain surface (use the uuid of ID3D11Texture2D for the result type).
    pSwapChain->GetBuffer( 0, __uuidof( ID3D11Texture2D ), reinterpret_cast< void** >( &pSurface ) );
    /* The swap chain buffers are not mappable, so I need to copy them to a staging resource. */
    pSurface->GetDesc( &description ); // Use ID3D11Texture2D::GetDesc to retrieve the surface description
    // Patch it with a D3D11_USAGE_STAGING usage and a CPU access flag of D3D11_CPU_ACCESS_READ
    description.BindFlags = 0;
    description.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    description.Usage = D3D11_USAGE_STAGING;
    // Create a temporary surface with ID3D11Device::CreateTexture2D
    HRESULT hr = pDevice->CreateTexture2D( &description, NULL, &pNewTexture );
    if( pNewTexture )
    {
        // Copy to the staging surface with ID3D11DeviceContext::CopyResource
        pContext->CopyResource( pNewTexture, pSurface );
        // Now I have an ID3D11Texture2D with the content of the swap chain buffer that allows me to use the ID3D11DeviceContext::Map API to read it on the CPU
        D3D11_MAPPED_SUBRESOURCE resource;
        pContext->Map( pNewTexture, D3D11CalcSubresource( 0, 0, 0 ), D3D11_MAP_READ, 0, &resource );
        const int pitch = w << 2;
        const unsigned char* source = static_cast< const unsigned char* >( resource.pData );
        unsigned char* dest = static_cast< unsigned char* >( m_lpBits );
        for( int i = 0; i < h; ++i )
        {
            memcpy( dest, source, w * 4 );
            source += pitch;
            dest += pitch;
        }
        AppendNewFrame( w, h, m_lpBits, 24 );
        pContext->Unmap( pNewTexture, 0 );
        pNewTexture->Release();
    }
}

Even though the code snippet is incomplete, it shows several potential problems:
The 24 in the AppendNewFrame call suggests that you are treating the data as 24-bit RGB, while your data is actually 32-bit RGB (four bytes per pixel). Such mistreatment matches the artifacts exhibited in the attached images.
The pitch/stride is assumed to be the default (w * 4), while the effectively used pitch is returned in the D3D11_MAPPED_SUBRESOURCE structure (RowPitch), and that is the value you should be using.
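For illustration, here is a minimal sketch of the corrected copy loop. It assumes m_lpBits points to a tightly packed buffer of w * 4 bytes per row and that the last argument of AppendNewFrame is a bits-per-pixel value; both are assumptions about the rest of your code:
D3D11_MAPPED_SUBRESOURCE resource;
pContext->Map( pNewTexture, 0, D3D11_MAP_READ, 0, &resource );

const unsigned char* source = static_cast< const unsigned char* >( resource.pData );
unsigned char* dest = static_cast< unsigned char* >( m_lpBits );
const int destPitch = w * 4; // destination rows are tightly packed (assumption)

for( int i = 0; i < h; ++i )
{
    memcpy( dest, source, w * 4 );   // copy exactly one row of 32-bit pixels
    source += resource.RowPitch;     // advance by the pitch D3D11 actually used
    dest += destPitch;
}

AppendNewFrame( w, h, m_lpBits, 32 ); // 32, not 24: the data is four bytes per pixel
pContext->Unmap( pNewTexture, 0 );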

Related

D3D11 SwapChain Buffer is NULL

Whenever I run this code, the data pointed to by the pData member of the _TextureData struct is all zeros (around 300 bytes of just 0). The HRESULT it returns is always S_OK, and the row and depth pitches are accurate. I am sure that something is being rendered to the buffer, because things are being displayed in the window that I am rendering to. I have tried getting the buffer's data both before and after presenting, and either way the data is still all zeros.
D3D11_TEXTURE2D_DESC desc { };
ID3D11Texture2D * pCopy = nullptr;
ID3D11Texture2D * pBackBufferTexture = nullptr;
desc.Width = 800;
desc.Height = 800;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_STAGING;
desc.BindFlags = 0;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.MiscFlags = 0;
assert( SUCCEEDED( pSwapChain->Present( 0, 0 ) ) );
pDevice->CreateTexture2D( &desc, nullptr, &pCopy );
pSwapChain->GetBuffer( 0,
__uuidof( ID3D11Texture2D ),
reinterpret_cast< void ** >( &pBackBufferTexture ) );
pContext->CopyResource( pCopy, pBackBufferTexture );
D3D11_MAPPED_SUBRESOURCE _TextureData { };
auto result = pContext->Map( pCopy, 0, D3D11_MAP_READ, 0, &_TextureData );
pContext->Unmap( pCopy, 0 );
pCopy->Release( );
The code for the swap chain holds the answer: the swap chain was created with 4x MSAA, but the staging texture is single-sample.
You can't CopyResource in this case. Instead you must resolve the MSAA:
pContext->ResolveSubresource(pCopy, 0, pBackBufferTexture, 0, DXGI_FORMAT_R8G8B8A8_UNORM);
See the DirectX Tool Kit ScreenGrab source which handles this case more generally.
The code also shows that you are not using the Debug device (D3D11_CREATE_DEVICE_DEBUG) which would have told you about this problem. See this blog post for details.
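As a sketch of how a capture path can handle both the MSAA and non-MSAA cases (along the lines of what ScreenGrab does), assuming pCopy is the single-sample staging texture from the question and pDevice/pContext/pSwapChain already exist:
// Grab the back buffer and inspect its description.
ID3D11Texture2D* pBackBuffer = nullptr;
pSwapChain->GetBuffer( 0, __uuidof( ID3D11Texture2D ),
                       reinterpret_cast< void** >( &pBackBuffer ) );

D3D11_TEXTURE2D_DESC bbDesc { };
pBackBuffer->GetDesc( &bbDesc );

if ( bbDesc.SampleDesc.Count > 1 )
{
    // MSAA back buffer: resolve into a single-sample, default-usage texture first.
    D3D11_TEXTURE2D_DESC resolvedDesc = bbDesc;
    resolvedDesc.SampleDesc.Count = 1;
    resolvedDesc.SampleDesc.Quality = 0;
    resolvedDesc.BindFlags = 0;

    ID3D11Texture2D* pResolved = nullptr;
    pDevice->CreateTexture2D( &resolvedDesc, nullptr, &pResolved );
    pContext->ResolveSubresource( pResolved, 0, pBackBuffer, 0, bbDesc.Format );

    // ...then copy the resolved texture into the staging texture and Map it as before.
    pContext->CopyResource( pCopy, pResolved );
    pResolved->Release();
}
else
{
    // Single-sample back buffer: a plain CopyResource into the staging texture works.
    pContext->CopyResource( pCopy, pBackBuffer );
}
pBackBuffer->Release();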

DirectX Partial Screen Capture

I am trying to create a program that will capture a full screen directx application, look for a specific set of pixels on the screen and if it finds it then draw an image on the screen.
I have been able to set up the application to capture the screen with the DirectX libraries, using the code from the answer to this question: Capture screen using DirectX
In this example the code saves to the harddrive using the IWIC libraries. I would rather manipulate the pixels instead of saving it.
After I have captured the screen and have an LPBYTE of the entire screen's pixels, I am unsure how to crop it to the region I want and how to manipulate the pixel array. Is it just a multi-dimensional byte array?
The way I think I should do it is:
1. Capture the screen to an IWIC bitmap (done).
2. Convert the IWIC bitmap to an ID2D1 bitmap using ID2D1RenderTarget::CreateBitmapFromWicBitmap.
3. Create a new ID2D1 bitmap to store the partial image.
4. Copy the region of the ID2D1 bitmap to the new bitmap using ID2D1Bitmap::CopyFromBitmap.
5. Render back onto the screen using ID2D1.
Any help on any of this would be so much appreciated.
Here is a modified version of the original code that captures only a portion of the screen into a buffer, and also returns the stride. It then walks all the pixels and dumps their colors, as a sample usage of the returned buffer.
In this sample, the buffer is allocated by the function, so you must free it once you are done with it:
// pixel access helpers for the 32bppPBGRA buffer
#define GetBGRAPixelBlue(p)  (LOBYTE(p))
#define GetBGRAPixelGreen(p) (HIBYTE(p))
#define GetBGRAPixelRed(p)   (LOBYTE(HIWORD(p)))
#define GetBGRAPixelAlpha(p) (HIBYTE(HIWORD(p)))
#define GetBGRAPixel(b,s,x,y) (((LPDWORD)(((LPBYTE)(b)) + (y) * (s)))[(x)])

// sample usage
int main()
{
    LONG left = 10;
    LONG top = 10;
    LONG width = 100;
    LONG height = 100;
    LPBYTE buffer;
    UINT stride;
    RECT rc = { left, top, left + width, top + height };
    Direct3D9TakeScreenshot(D3DADAPTER_DEFAULT, &buffer, &stride, &rc);

    // In 32bppPBGRA format, each pixel is represented by 4 bytes,
    // with one byte each for blue, green, red, and the alpha channel, in that order.
    // But don't forget this is all modulo endianness...
    // So, on Intel architecture, if we read a pixel from memory
    // as a DWORD, it's reversed (ARGB). The macros above handle that.

    // browse every pixel by line
    for (int h = 0; h < height; h++)
    {
        LPDWORD pixels = (LPDWORD)(buffer + h * stride);
        for (int w = 0; w < width; w++)
        {
            DWORD pixel = pixels[w];
            wprintf(L"#%02X#%02X#%02X#%02X\n", GetBGRAPixelAlpha(pixel), GetBGRAPixelRed(pixel), GetBGRAPixelGreen(pixel), GetBGRAPixelBlue(pixel));
        }
    }

    // get pixel at 50, 50 in the buffer, as #ARGB
    DWORD pixel = GetBGRAPixel(buffer, stride, 50, 50);
    wprintf(L"#%02X#%02X#%02X#%02X\n", GetBGRAPixelAlpha(pixel), GetBGRAPixelRed(pixel), GetBGRAPixelGreen(pixel), GetBGRAPixelBlue(pixel));

    SavePixelsToFile32bppPBGRA(width, height, stride, buffer, L"test.png", GUID_ContainerFormatPng);
    LocalFree(buffer);
    return 0;
}
HRESULT Direct3D9TakeScreenshot(UINT adapter, LPBYTE *pBuffer, UINT *pStride, const RECT *pInputRc = nullptr)
{
if (!pBuffer || !pStride) return E_INVALIDARG;
HRESULT hr = S_OK;
IDirect3D9 *d3d = nullptr;
IDirect3DDevice9 *device = nullptr;
IDirect3DSurface9 *surface = nullptr;
D3DPRESENT_PARAMETERS parameters = { 0 };
D3DDISPLAYMODE mode;
D3DLOCKED_RECT rc;
*pBuffer = NULL;
*pStride = 0;
// init D3D and get screen size
d3d = Direct3DCreate9(D3D_SDK_VERSION);
HRCHECK(d3d->GetAdapterDisplayMode(adapter, &mode));
LONG width = pInputRc ? (pInputRc->right - pInputRc->left) : mode.Width;
LONG height = pInputRc ? (pInputRc->bottom - pInputRc->top) : mode.Height;
parameters.Windowed = TRUE;
parameters.BackBufferCount = 1;
parameters.BackBufferHeight = height;
parameters.BackBufferWidth = width;
parameters.SwapEffect = D3DSWAPEFFECT_DISCARD;
parameters.hDeviceWindow = NULL;
// create device & capture surface (note it needs desktop size, not our capture size)
HRCHECK(d3d->CreateDevice(adapter, D3DDEVTYPE_HAL, NULL, D3DCREATE_SOFTWARE_VERTEXPROCESSING, &parameters, &device));
HRCHECK(device->CreateOffscreenPlainSurface(mode.Width, mode.Height, D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &surface, nullptr));
// get pitch/stride to compute the required buffer size
HRCHECK(surface->LockRect(&rc, pInputRc, 0));
*pStride = rc.Pitch;
HRCHECK(surface->UnlockRect());
// allocate buffer
*pBuffer = (LPBYTE)LocalAlloc(0, *pStride * height);
if (!*pBuffer)
{
hr = E_OUTOFMEMORY;
goto cleanup;
}
// get the data
HRCHECK(device->GetFrontBufferData(0, surface));
// copy it into our buffer
HRCHECK(surface->LockRect(&rc, pInputRc, 0));
CopyMemory(*pBuffer, rc.pBits, rc.Pitch * height);
HRCHECK(surface->UnlockRect());
cleanup:
if (FAILED(hr))
{
if (*pBuffer)
{
LocalFree(*pBuffer);
*pBuffer = NULL;
}
*pStride = 0;
}
RELEASE(surface);
RELEASE(device);
RELEASE(d3d);
return hr;
}
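The HRCHECK and RELEASE macros used above are error-handling helpers from the original answer and are not shown here. A minimal sketch of what they could look like (assumed definitions, not necessarily the author's exact ones):
// jump to the cleanup label on failure, and release COM pointers safely
#define HRCHECK(expr) { hr = (expr); if (FAILED(hr)) goto cleanup; }
#define RELEASE(p)    { if (p) { (p)->Release(); (p) = nullptr; } }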

How to access pixels data from ID3D11Texture2D?

I'm using Windows Desktop Duplication API to make my own mirroring protocol.
I have this piece of code:
// Get new frame
HRESULT hr = m_DeskDupl->AcquireNextFrame(500, &FrameInfo, &DesktopResource);
if (hr == DXGI_ERROR_WAIT_TIMEOUT)
{
*Timeout = true;
return DUPL_RETURN_SUCCESS;
}
Here is the FrameInfo structure:
typedef struct _FRAME_DATA {
    ID3D11Texture2D* Frame;
    DXGI_OUTDUPL_FRAME_INFO FrameInfo;
    _Field_size_bytes_((MoveCount * sizeof(DXGI_OUTDUPL_MOVE_RECT)) + (DirtyCount * sizeof(RECT))) BYTE* MetaData;
    UINT DirtyCount;
    UINT MoveCount;
} FRAME_DATA;
I would like to extract the pixel buffer from ID3D11Texture2D* Frame.
How can I extract it into a BYTE* or unsigned char* and get an RGB sequence?
Thank you!
You need to create a second texture of the same size with CPU read access using ID3D11Device::CreateTexture2D, then copy the whole frame, or just the updated parts, to this texture on the GPU using ID3D11DeviceContext::CopyResource or ID3D11DeviceContext::CopySubresourceRegion (you can find out which parts were updated using IDXGIOutputDuplication::GetFrameDirtyRects and IDXGIOutputDuplication::GetFrameMoveRects). Finally, map the second texture to make it accessible to the CPU using ID3D11DeviceContext::Map, which gives you a D3D11_MAPPED_SUBRESOURCE struct containing a pointer to the buffer with the frame data and its pitch; that is what you are looking for.
Microsoft provides a rather detailed Desktop Duplication API usage sample implementing all the steps mentioned above.
There is also a straightforward sample demonstrating how to save ID3D11Texture2D data to a file.
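For illustration, a minimal sketch of those steps (pDevice, pContext and the acquired pAcquiredDesktopImage texture are assumed to exist; error handling is omitted):
// Describe a CPU-readable staging copy of the acquired frame.
D3D11_TEXTURE2D_DESC desc { };
pAcquiredDesktopImage->GetDesc(&desc);
desc.Usage = D3D11_USAGE_STAGING;
desc.BindFlags = 0;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.MiscFlags = 0;

ID3D11Texture2D* pStaging = nullptr;
pDevice->CreateTexture2D(&desc, nullptr, &pStaging);

// Copy the frame on the GPU, then map the staging texture for CPU access.
pContext->CopyResource(pStaging, pAcquiredDesktopImage);

D3D11_MAPPED_SUBRESOURCE mapped { };
pContext->Map(pStaging, 0, D3D11_MAP_READ, 0, &mapped);

// mapped.pData now points to the pixels (BGRA for desktop duplication);
// each row is mapped.RowPitch bytes long.
// ... read the pixels here ...

pContext->Unmap(pStaging, 0);
pStaging->Release();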
Here is some code that should help with your requirement. The output will be in the UCHAR buffer g_iMageBuffer.
// Variable declarations
IDXGIOutputDuplication* lDeskDupl;
IDXGIResource* lDesktopResource = nullptr;
DXGI_OUTDUPL_FRAME_INFO lFrameInfo;
ID3D11Texture2D* lAcquiredDesktopImage;
ID3D11Texture2D* lDestImage;          // created beforehand as a D3D11_USAGE_STAGING texture with D3D11_CPU_ACCESS_READ
ID3D11DeviceContext* lImmediateContext;
UCHAR* g_iMageBuffer = nullptr;

// Screen capture starts here
hr = lDeskDupl->AcquireNextFrame(20, &lFrameInfo, &lDesktopResource);

// QueryInterface for ID3D11Texture2D
hr = lDesktopResource->QueryInterface(IID_PPV_ARGS(&lAcquiredDesktopImage));
lDesktopResource->Release();

// Copy the acquired image into the staging texture
lImmediateContext->CopyResource(lDestImage, lAcquiredDesktopImage);
lAcquiredDesktopImage->Release();
lDeskDupl->ReleaseFrame();

// Copy the GPU resource to the CPU
D3D11_TEXTURE2D_DESC desc;
lDestImage->GetDesc(&desc);
D3D11_MAPPED_SUBRESOURCE resource;
UINT subresource = D3D11CalcSubresource(0, 0, 0);
lImmediateContext->Map(lDestImage, subresource, D3D11_MAP_READ_WRITE, 0, &resource);

// Copy the mapped rows into a temporary buffer, bottom-up
// (lOutputDuplDesc is the DXGI_OUTDUPL_DESC obtained from IDXGIOutputDuplication::GetDesc)
std::unique_ptr<BYTE[]> pBuf(new BYTE[resource.RowPitch * desc.Height]);
UINT lBmpRowPitch = lOutputDuplDesc.ModeDesc.Width * 4;
BYTE* sptr = reinterpret_cast<BYTE*>(resource.pData);
BYTE* dptr = pBuf.get() + resource.RowPitch * desc.Height - lBmpRowPitch;
UINT lRowPitch = std::min<UINT>(lBmpRowPitch, resource.RowPitch);
for (size_t h = 0; h < lOutputDuplDesc.ModeDesc.Height; ++h)
{
    memcpy_s(dptr, lBmpRowPitch, sptr, lRowPitch);
    sptr += resource.RowPitch;
    dptr -= lBmpRowPitch;
}
lImmediateContext->Unmap(lDestImage, subresource);

// Copy into the UCHAR output buffer
long g_captureSize = lRowPitch * desc.Height;
g_iMageBuffer = new UCHAR[g_captureSize];
memcpy(g_iMageBuffer, pBuf.get(), g_captureSize);

Direct2D - Emulating Color Keyed Transparent Bitmaps

I'm currently updating a Windows GDI application to use Direct2D rendering and I need to support "transparent" bitmaps via color-keying for backwards compatibility.
Right now I'm working with an HWND render target and a converted WIC bitmap source (converted to GUID_WICPixelFormat32bppPBGRA). My plan so far is to create an IWICBitmap from the converted bitmap, Lock() it, and then process each pixel, setting its alpha value to 0 if it matches the color key.
This seems a bit "brute force" - is this the best method of approaching this, or is there a better way?
Edit: In the interests of completeness here's an extract of what I went with - looks like it's working fine but I'm open to any improvements!
// pConvertedBmp contains a IWICFormatConverter* bitmap with the pixel
// format set to GUID_WICPixelFormat32bppPBGRA
IWICBitmap* pColorKeyedBmp = NULL;
HRESULT hr = S_OK;
UINT uBmpW = 0;
UINT uBmpH = 0;
pConvertedBmp->GetSize( &uBmpW, &uBmpH );
WICRect rcLock = { 0, 0, uBmpW, uBmpH };
// GetWIC() returns the WIC Factory instance in this app
hr = GetWIC()->CreateBitmapFromSource( pConvertedBmp,
WICBitmapCacheOnLoad,
&pColorKeyedBmp );
if ( FAILED( hr ) ) {
return hr;
}
IWICBitmapLock* pBitmapLock = NULL;
hr = pColorKeyedBmp->Lock( &rcLock, WICBitmapLockRead | WICBitmapLockWrite, &pBitmapLock );
if ( FAILED( hr ) ) {
SafeRelease( &pColorKeyedBmp );
return hr;
}
UINT uPixel = 0;
UINT cbBuffer = 0;
UINT cbStride = 0;
BYTE* pPixelBuffer = NULL;
hr = pBitmapLock->GetStride( &cbStride );
if ( SUCCEEDED( hr ) ) {
hr = pBitmapLock->GetDataPointer( &cbBuffer, &pPixelBuffer );
if ( SUCCEEDED( hr ) ) {
// If we haven't got a resolved color key then we need to
// grab the pixel at the specified coordinates and get
// its RGB
if ( !clrColorKey.IsValidColor() ) {
// This is an internal function to grab the color of a pixel
ResolveColorKey( pPixelBuffer, cbBuffer, cbStride, uBmpW, uBmpH );
}
// Convert the RGB to BGR
UINT uColorKey = (UINT) RGB2BGR( clrColorKey.GetRGB() );
LPBYTE pPixel = pPixelBuffer;
for ( UINT uRow = 0; uRow < uBmpH; uRow++ ) {
pPixel = pPixelBuffer + ( uRow * cbStride );
for ( UINT uCol = 0; uCol < uBmpW; uCol++ ) {
uPixel = *( (LPUINT) pPixel );
if ( ( uPixel & 0x00FFFFFF ) == uColorKey ) {
*( (LPUINT) pPixel ) = 0;
}
pPixel += sizeof( uPixel );
}
}
}
}
pBitmapLock->Release();
if ( FAILED( hr ) ) {
// We still use the original image
SafeRelease( &pColorKeyedBmp );
}
else {
// We use the new image so we release the original
SafeRelease( &pConvertedBmp );
}
return hr;
If you only need to "process" the bitmap to render it, then the fastest is always GPU. In Direct2D there are effects (ID2D1Effect) for simple bitmap processing. You can write your own [it seems comparatively complicated], or use one of the built-in effects [which is rather simple]. There is one called chroma-key (CLSID_D2D1ChromaKey).
On the other hand, if you need to do further processing on CPU, it gets more complex. You might be better off optimizing the code you have.
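For reference, a minimal sketch of using the built-in chroma-key effect. It assumes a Windows 10 SDK (d2d1effects_2.h), that pD2DContext is an ID2D1DeviceContext (e.g. obtained via QueryInterface from the render target), and that pBitmap is the ID2D1Bitmap created from the converted WIC source; the key color here is only an example:
ID2D1Effect* pChromaKey = nullptr;
pD2DContext->CreateEffect(CLSID_D2D1ChromaKey, &pChromaKey);

pChromaKey->SetInput(0, pBitmap);
pChromaKey->SetValue(D2D1_CHROMAKEY_PROP_COLOR, D2D1::Vector3F(1.0f, 0.0f, 1.0f)); // key out magenta (example)
pChromaKey->SetValue(D2D1_CHROMAKEY_PROP_TOLERANCE, 0.1f);

pD2DContext->BeginDraw();
pD2DContext->DrawImage(pChromaKey); // draws the bitmap with the keyed color made transparent
pD2DContext->EndDraw();

pChromaKey->Release();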

Display an image in an MFC/C++ application using OpenCV

I would like to display, in an MFC application, frames that I capture from an AVI file with OpenCV (the cvCaptureFromAVI function).
I'm new to MFC but feel like I'm close to making it work. However, instead of the frames being displayed in the picture box, they are displayed in a new window.
cvGetWindowName always returns a null value.
Here is my code:
CWnd* hPic = 0;
hPic = GetDlgItem(IDC_STATICPIC1);
const char* szWindName = cvGetWindowName(hPic->GetSafeHwnd());
cvShowImage(szWindName, frame_copy);
So, after a lot of research, I found something that makes it work.
The solution is to create the window and then insert it inside the picture box. I'm not sure it's good practice but I haven't found anything better for now.
cvNamedWindow("IDC_STATIC_OUTPUT", 0);
cvResizeWindow("IDC_STATIC_OUTPUT", 420, 240);
HWND hWnd = (HWND) cvGetWindowHandle("IDC_STATIC_OUTPUT");
HWND hParent = ::GetParent(hWnd);
::SetParent(hWnd, GetDlgItem(IDC_PIC1)->m_hWnd);
::ShowWindow(hParent, SW_HIDE);
cvShowImage("IDC_STATIC_OUTPUT", frame_copy);
In this case the picture box is called IDC_PIC1 and frame_copy is an OpenCV IplImage.
Hope this helps somebody.
Using the following code you can convert a Mat to a CImage and then display the CImage anywhere you want:
int Mat2CImage(Mat *mat, CImage &img){
if(!mat || mat->empty())
return -1;
int nBPP = mat->channels()*8;
img.Create(mat->cols, mat->rows, nBPP);
if(nBPP == 8)
{
static RGBQUAD pRGB[256];
for (int i = 0; i < 256; i++)
pRGB[i].rgbBlue = pRGB[i].rgbGreen = pRGB[i].rgbRed = i;
img.SetColorTable(0, 256, pRGB);
}
uchar* psrc = mat->data;
uchar* pdst = (uchar*) img.GetBits();
int imgPitch = img.GetPitch();
for(int y = 0; y < mat->rows; y++)
{
memcpy(pdst, psrc, mat->cols*mat->channels());//mat->step is incorrect for those images created by roi (sub-images!)
psrc += mat->step;
pdst += imgPitch;
}
return 0;
}
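For example, here is a short sketch of how the conversion might be used to paint into a picture control; IDC_PIC1 and m_frame are hypothetical names for the control ID and the source cv::Mat:
// Convert the OpenCV Mat and draw it into the picture control's client area.
CImage img;
if (Mat2CImage(&m_frame, img) == 0)
{
    CWnd* pCtrl = GetDlgItem(IDC_PIC1);
    CRect rc;
    pCtrl->GetClientRect(&rc);

    CDC* pDC = pCtrl->GetDC();
    img.Draw(pDC->GetSafeHdc(), rc);   // CImage::Draw stretches to the target rectangle
    pCtrl->ReleaseDC(pDC);
}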
NOTE: If you use the StretchDIBits() method with the BITMAPINFO approach, you MUST be aware that StretchDIBits() expects the raw OpenCV Mat::data pointer to have row lengths that are even multiples of 4 bytes! If not, you'll get freaky shearing when you try to copy the data to the DC via StretchDIBits() - the image is not only sheared at an angle, but the colors are all trashed as well.
Here is my completely working edition of the code, which also supports maintaining image aspect ratio in the target control's rectangle. It can probably be made a bit faster, but it works for now:
void AdjustAspectImageSize( const Size& imageSize,
const Size& destSize,
Size& newSize )
{
double destAspectRatio = float( destSize.width ) / float( destSize.height );
double imageAspectRatio = float( imageSize.width ) / float( imageSize.height );
if ( imageAspectRatio > destAspectRatio )
{
// Margins on top/bottom
newSize.width = destSize.width;
newSize.height = int( imageSize.height *
( double( destSize.width ) / double( imageSize.width ) ) );
}
else
{
// Margins on left/right
newSize.height = destSize.height;
newSize.width = int( imageSize.width *
( double( destSize.height ) / double( imageSize.height ) ) );
}
}
void DrawPicToHDC( Mat cvImg,
UINT nDlgID,
bool bMaintainAspectRatio /* =true*/ )
{
// Get the HDC handle information from the ID passed
CDC* pDC = GetDlgItem(nDlgID)->GetDC();
HDC hDC = pDC->GetSafeHdc();
CRect rect;
GetDlgItem(nDlgID)->GetClientRect(rect);
Size winSize( rect.right, rect.bottom );
// Calculate the size of the image that
// will fit in the control rectangle.
Size origImageSize( cvImg.cols, cvImg.rows );
Size imageSize;
int offsetX;
int offsetY;
if ( ! bMaintainAspectRatio )
{
// Image should be the same size as the control's rectangle
imageSize = winSize;
}
else
{
AdjustAspectImageSize( origImageSize,
winSize,
imageSize );
}
offsetX = ( winSize.width - imageSize.width ) / 2;
offsetY = ( winSize.height - imageSize.height ) / 2;
// Resize the source to the size of the destination image if necessary
Mat cvImgTmp;
resize( cvImg,
cvImgTmp,
imageSize,
0,
0,
INTER_AREA );
// To handle our Mat object of this width, the source rows must
// be even multiples of a DWORD in length to be compatible with
// SetDIBits(). Calculate what the correct byte width of the
// row should be to be compatible with SetDIBits() below.
int stride = ( ( ( ( imageSize.width * 24 ) + 31 ) & ~31 ) >> 3 );
// Allocate a buffer for our DIB bits
uchar* pcDibBits = (uchar*) malloc( imageSize.height * stride );
if ( pcDibBits != NULL )
{
// Copy the raw pixel data over to our dibBits buffer.
// NOTE: Can setup cvImgTmp to add the padding to skip this.
for ( int row = 0; row < cvImgTmp.rows; ++row )
{
// Get pointers to the beginning of the row on both buffers
uchar* pcSrcPixel = cvImgTmp.ptr<uchar>(row);
uchar* pcDstPixel = pcDibBits + ( row * stride );
// We can just use memcpy
memcpy( pcDstPixel,
pcSrcPixel,
stride );
}
// Initialize the BITMAPINFO structure
BITMAPINFO bitInfo;
bitInfo.bmiHeader.biBitCount = 24;
bitInfo.bmiHeader.biWidth = cvImgTmp.cols;
bitInfo.bmiHeader.biHeight = -cvImgTmp.rows;
bitInfo.bmiHeader.biPlanes = 1;
bitInfo.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bitInfo.bmiHeader.biCompression = BI_RGB;
bitInfo.bmiHeader.biClrImportant = 0;
bitInfo.bmiHeader.biClrUsed = 0;
bitInfo.bmiHeader.biSizeImage = 0; // or winSize.height * winSize.width * 3;
bitInfo.bmiHeader.biXPelsPerMeter = 0;
bitInfo.bmiHeader.biYPelsPerMeter = 0;
// Add header and OPENCV image's data to the HDC
StretchDIBits( hDC,
offsetX,
offsetY,
cvImgTmp.cols,
cvImgTmp.rows,
0,
0,
cvImgTmp.cols,
cvImgTmp.rows,
pcDibBits,
& bitInfo,
DIB_RGB_COLORS,
SRCCOPY );
free(pcDibBits);
}
ReleaseDC(pDC);
}
int DrawImageToHDC(IplImage* img, HDC hdc, int xDest, int yDest, UINT iUsage, DWORD rop)
{
    char m_chBmpBuf[2048];
    BITMAPINFO *m_pBmpInfo = (BITMAPINFO*)m_chBmpBuf;
    m_pBmpInfo->bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    m_pBmpInfo->bmiHeader.biWidth = img->width;
    m_pBmpInfo->bmiHeader.biHeight = -img->height;
    m_pBmpInfo->bmiHeader.biBitCount = 24;
    m_pBmpInfo->bmiHeader.biPlanes = 1;
    m_pBmpInfo->bmiHeader.biCompression = BI_RGB;
    m_pBmpInfo->bmiHeader.biSizeImage = 0;
    m_pBmpInfo->bmiHeader.biXPelsPerMeter = 0;
    m_pBmpInfo->bmiHeader.biYPelsPerMeter = 0;
    m_pBmpInfo->bmiHeader.biClrUsed = 0;
    m_pBmpInfo->bmiHeader.biClrImportant = 0;
    return StretchDIBits(hdc, xDest, yDest, img->width, img->height, 0, 0,
                         img->width, img->height, img->imageData, m_pBmpInfo, iUsage, rop);
}
Usage: DrawImageToHDC(img, pDC->m_hDC, Area.left, Area.top, DIB_RGB_COLORS, SRCCOPY);