I'm currently updating a Windows GDI application to use Direct2D rendering, and I need to support "transparent" bitmaps via color-keying for backwards compatibility.
Right now I'm working with an HWND render target and a WIC bitmap source converted to GUID_WICPixelFormat32bppPBGRA. My plan so far is to create an IWICBitmap from the converted bitmap, Lock() it, and then process each pixel, setting its alpha value to 0 if it matches the color key.
This seems a bit "brute force". Is this the best way of approaching it, or is there a better one?
Edit: In the interest of completeness, here's an extract of what I went with. It looks like it's working fine, but I'm open to any improvements!
// pConvertedBmp contains a IWICFormatConverter* bitmap with the pixel
// format set to GUID_WICPixelFormat32bppPBGRA
IWICBitmap* pColorKeyedBmp = NULL;
HRESULT hr = S_OK;
UINT uBmpW = 0;
UINT uBmpH = 0;
pConvertedBmp->GetSize( &uBmpW, &uBmpH );
WICRect rcLock = { 0, 0, (INT) uBmpW, (INT) uBmpH };
// GetWIC() returns the WIC Factory instance in this app
hr = GetWIC()->CreateBitmapFromSource( pConvertedBmp,
WICBitmapCacheOnLoad,
&pColorKeyedBmp );
if ( FAILED( hr ) ) {
return hr;
}
IWICBitmapLock* pBitmapLock = NULL;
// We modify the pixels in place, so the lock must include write access
hr = pColorKeyedBmp->Lock( &rcLock, WICBitmapLockRead | WICBitmapLockWrite, &pBitmapLock );
if ( FAILED( hr ) ) {
SafeRelease( &pColorKeyedBmp );
return hr;
}
UINT uPixel = 0;
UINT cbBuffer = 0;
UINT cbStride = 0;
BYTE* pPixelBuffer = NULL;
hr = pBitmapLock->GetStride( &cbStride );
if ( SUCCEEDED( hr ) ) {
hr = pBitmapLock->GetDataPointer( &cbBuffer, &pPixelBuffer );
if ( SUCCEEDED( hr ) ) {
// If we haven't got a resolved color key then we need to
// grab the pixel at the specified coordinates and get
// its RGB
if ( !clrColorKey.IsValidColor() ) {
// This is an internal function to grab the color of a pixel
ResolveColorKey( pPixelBuffer, cbBuffer, cbStride, uBmpW, uBmpH );
}
// Convert the RGB to BGR
UINT uColorKey = (UINT) RGB2BGR( clrColorKey.GetRGB() );
LPBYTE pPixel = pPixelBuffer;
for ( UINT uRow = 0; uRow < uBmpH; uRow++ ) {
pPixel = pPixelBuffer + ( uRow * cbStride );
for ( UINT uCol = 0; uCol < uBmpW; uCol++ ) {
uPixel = *( (LPUINT) pPixel );
if ( ( uPixel & 0x00FFFFFF ) == uColorKey ) {
*( (LPUINT) pPixel ) = 0;
}
pPixel += sizeof( uPixel );
}
}
}
}
pBitmapLock->Release();
if ( FAILED( hr ) ) {
// We still use the original image
SafeRelease( &pColorKeyedBmp );
}
else {
// We use the new image so we release the original
SafeRelease( &pConvertedBmp );
}
return hr;
If you only need to "process" the bitmap in order to render it, the fastest option is always the GPU. Direct2D has effects (ID2D1Effect) for simple bitmap processing. You can write your own [which is comparatively complicated], or use one of the built-in effects [which is rather simple]. There is one for chroma-keying (CLSID_D2D1ChromaKey).
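For illustration, here is a minimal sketch of wiring up that built-in effect. It assumes you can obtain an ID2D1DeviceContext (on Windows 8+ you can QueryInterface a render target for one); pDeviceContext and pBitmap are placeholder names, and the key color is just an example:

#include <d2d1_1.h>
#include <d2d1effects_2.h> // CLSID_D2D1ChromaKey (Windows 10 SDK)

// Inside BeginDraw()/EndDraw() on the device context:
ID2D1Effect* pChromaKey = NULL;
HRESULT hr = pDeviceContext->CreateEffect( CLSID_D2D1ChromaKey, &pChromaKey );
if ( SUCCEEDED( hr ) ) {
    pChromaKey->SetInput( 0, pBitmap );
    // Key color as normalized RGB; magenta (255, 0, 255) here
    pChromaKey->SetValue( D2D1_CHROMAKEY_PROP_COLOR,
                          D2D1::Vector3F( 1.0f, 0.0f, 1.0f ) );
    // Pixels within this distance of the key become transparent
    pChromaKey->SetValue( D2D1_CHROMAKEY_PROP_TOLERANCE, 0.1f );
    pDeviceContext->DrawImage( pChromaKey );
    pChromaKey->Release();
}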
On the other hand, if you need to do further processing on the CPU, it gets more complex. In that case you might be better off optimizing the code you have.
Whenever I run this code, the data pointed to (the pData member) within the _TextureData struct is all zeros (something like 300 bytes of just 0). The HRESULT it returns is always S_OK, and the row and depth pitches are accurate. I am sure that something is being rendered to the buffer, because things are displayed on the window I am rendering to. I have tried getting the buffer's data both before and after presenting, and either way the data is still all zeros.
D3D11_TEXTURE2D_DESC desc { };
ID3D11Texture2D * pCopy = nullptr;
ID3D11Texture2D * pBackBufferTexture = nullptr;
desc.Width = 800;
desc.Height = 800;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_STAGING;
desc.BindFlags = 0;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.MiscFlags = 0;
assert( SUCCEEDED( pSwapChain->Present( 0, 0 ) ) );
pDevice->CreateTexture2D( &desc, nullptr, &pCopy );
pSwapChain->GetBuffer( 0,
__uuidof( ID3D11Texture2D ),
reinterpret_cast< void ** >( &pBackBufferTexture ) );
pContext->CopyResource( pCopy, pBackBufferTexture );
D3D11_MAPPED_SUBRESOURCE _TextureData { };
auto result = pContext->Map( pCopy, 0, D3D11_MAP_READ, 0, &_TextureData );
pContext->Unmap( pCopy, 0 );
pCopy->Release( );
The code for the swap chain holds the answer: the swap chain was created with 4x MSAA, but the staging texture is single-sampled.
You can't CopyResource in this case. Instead you must resolve the MSAA, and because ResolveSubresource requires a single-sampled D3D11_USAGE_DEFAULT destination, you have to go through an intermediate texture (call it pResolved) rather than resolving straight into the staging copy:
pContext->ResolveSubresource( pResolved, 0, pBackBufferTexture, 0, DXGI_FORMAT_R8G8B8A8_UNORM );
pContext->CopyResource( pCopy, pResolved );
See the DirectX Tool Kit ScreenGrab source which handles this case more generally.
The code also shows that you are not using the Debug device (D3D11_CREATE_DEVICE_DEBUG) which would have told you about this problem. See this blog post for details.
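For reference, a minimal sketch of the whole readback path under that fix, reusing the question's object names (pDevice, pContext, pSwapChain); ScreenGrab handles array slices, mip levels and more formats on top of this:

#include <d3d11.h>

HRESULT ReadBackBuffer( ID3D11Device* pDevice, ID3D11DeviceContext* pContext,
                        IDXGISwapChain* pSwapChain )
{
    ID3D11Texture2D* pBackBuffer = nullptr;
    HRESULT hr = pSwapChain->GetBuffer( 0, __uuidof( ID3D11Texture2D ),
                                        reinterpret_cast< void** >( &pBackBuffer ) );
    if ( FAILED( hr ) ) return hr;

    D3D11_TEXTURE2D_DESC desc{};
    pBackBuffer->GetDesc( &desc );

    // 1) Resolve the MSAA surface into a single-sampled DEFAULT texture
    D3D11_TEXTURE2D_DESC resolveDesc = desc;
    resolveDesc.SampleDesc.Count = 1;
    resolveDesc.SampleDesc.Quality = 0;
    ID3D11Texture2D* pResolved = nullptr;
    hr = pDevice->CreateTexture2D( &resolveDesc, nullptr, &pResolved );
    if ( SUCCEEDED( hr ) )
    {
        pContext->ResolveSubresource( pResolved, 0, pBackBuffer, 0, desc.Format );

        // 2) Copy into a STAGING texture that the CPU is allowed to map
        D3D11_TEXTURE2D_DESC stagingDesc = resolveDesc;
        stagingDesc.Usage = D3D11_USAGE_STAGING;
        stagingDesc.BindFlags = 0;
        stagingDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
        ID3D11Texture2D* pStaging = nullptr;
        hr = pDevice->CreateTexture2D( &stagingDesc, nullptr, &pStaging );
        if ( SUCCEEDED( hr ) )
        {
            pContext->CopyResource( pStaging, pResolved );

            // 3) Map and read; honor RowPitch when walking the rows
            D3D11_MAPPED_SUBRESOURCE mapped{};
            hr = pContext->Map( pStaging, 0, D3D11_MAP_READ, 0, &mapped );
            if ( SUCCEEDED( hr ) )
            {
                // ...read mapped.pData here, row by row via mapped.RowPitch...
                pContext->Unmap( pStaging, 0 );
            }
            pStaging->Release();
        }
        pResolved->Release();
    }
    pBackBuffer->Release();
    return hr;
}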
Not only does the fps drop from 60 to 20-21, but the image also looks distorted, like this; the second image is what it should look like.
What it looks like
What it should look like
if (captureVideo == 1) {
pNewTexture = NULL;
// Use the IDXGISwapChain::GetBuffer API to retrieve a swap chain surface ( use the uuid ID3D11Texture2D for the result type ).
pSwapChain->GetBuffer( 0, __uuidof( ID3D11Texture2D ), reinterpret_cast< void** >( &pSurface ) );
/* The swap chain buffers are not mappable, so I need to copy to a staging resource. */
pSurface->GetDesc( &description ); //Use ID3D11Texture2D::GetDesc to retrieve the surface description
// Patch it with a D3D11_USAGE_STAGING usage and a cpu access flag of D3D11_CPU_ACCESS_READ
description.BindFlags = 0;
description.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
description.Usage = D3D11_USAGE_STAGING;
// Create a temporary surface ID3D11Device::CreateTexture2D
HRESULT hr = pDevice->CreateTexture2D( &description, NULL, &pNewTexture );
if( pNewTexture )
{
// Copy to the staging surface ID3D11DeviceContext::CopyResource
pContext->CopyResource( pNewTexture, pSurface );
// Now I have an ID3D11Texture2D with the content of the swap chain buffer that allows me to use the ID3D11DeviceContext::Map API to read it on the CPU
D3D11_MAPPED_SUBRESOURCE resource;
pContext->Map( pNewTexture, D3D11CalcSubresource( 0, 0, 0), D3D11_MAP_READ, 0, &resource );
const int pitch = w << 2;
const unsigned char* source = static_cast< const unsigned char* >( resource.pData );
unsigned char* dest = static_cast< unsigned char* >(m_lpBits);
for( int i = 0; i < h; ++i )
{
memcpy( dest, source, w * 4 );
source += pitch;
dest += pitch;
}
AppendNewFrame(w, h, m_lpBits,24);
pContext->Unmap( pNewTexture, 0);
pNewTexture->Release();
}
}
The code snippet, even though incomplete, shows several potential problems:
The 24 in the AppendNewFrame call suggests that you are treating the data as 24-bit RGB, while your data is 32-bit RGB. Such mistreatment matches the artifacts exhibited in the attached images;
The pitch/stride is taken as an assumed default (w * 4), while the effectively used one is available in the D3D11_MAPPED_SUBRESOURCE structure (RowPitch), and you should be using it, as shown in the sketch below.
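A minimal sketch of the copy loop with both issues addressed, reusing the question's variable names (whether AppendNewFrame accepts 32 bpp data is an assumption):

const unsigned char* source = static_cast< const unsigned char* >( resource.pData );
unsigned char* dest = static_cast< unsigned char* >( m_lpBits );
const int destPitch = w * 4;            // tightly packed 32-bit destination rows
for ( int i = 0; i < h; ++i )
{
    memcpy( dest, source, destPitch );  // copy one row of valid pixels
    source += resource.RowPitch;        // GPU rows may be padded beyond w * 4
    dest += destPitch;
}
AppendNewFrame( w, h, m_lpBits, 32 );   // the data is 32 bpp, not 24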
I need to capture the winlogon screen on WinXP/Win7/10.
For WinXP I'm using a mirror driver and a standard method like this:
...
extern "C" __declspec(dllexport) void SetActiveDesktop() {
if ( currentDesk != NULL )
CloseDesktop( currentDesk );
currentDesk = OpenInputDesktop( 0, FALSE, GENERIC_ALL );
BOOL ret = SetThreadDesktop( currentDesk );
int LASTeRR = GetLastError();
}
extern "C" __declspec(dllexport) HBITMAP CaptureAnImage(
int width,
int height,
int bitsPerPixel )
{
HBITMAP hbmScreen;
LPTSTR bih = NULL;
HDC hdcMemDC = NULL;
int colorDepth = GetCurrentColorDepth();
if ( bitsPerPixel > colorDepth && colorDepth > 0 )
bitsPerPixel = colorDepth;
// Checks a current HDC
if ( currHdc == NULL ) {
SetActiveDesktop();
currHdc = GetDcMirror();
}
if ( prevHdc != currHdc ) {
prevHdc = currHdc;
}
// Check an application instance handler
if ( appInstance == NULL )
appInstance = GetModuleHandle(NULL);
// Creates a compatible DC which is used in a BitBlt from the window DC
hdcMemDC = CreateCompatibleDC( currHdc );
if( hdcMemDC == NULL )
{
return NULL;
}
// Defines bitmap parameters
bi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bi.bmiHeader.biWidth = width;
bi.bmiHeader.biHeight = height;
bi.bmiHeader.biPlanes = 1;
bi.bmiHeader.biBitCount = bitsPerPixel;
// Creates a bitmap with defined parameters
hbmScreen = CreateDIBSection( hdcMemDC, &bi, DIB_RGB_COLORS, (VOID**)&lpBitmapBits, NULL, 0 );
if ( hbmScreen == NULL ) {
return NULL;
}
// Select the compatible bitmap into the compatible memory DC.
SelectObject(hdcMemDC,hbmScreen);
// Bit block transfer into our compatible memory DC.
if(!BitBlt(hdcMemDC,
0,0,
width, height,
currHdc,
0,0,
SRCCOPY))
{
SetActiveDesktop();
currHdc = GetDC(NULL);//GetDcMirror();
hdcMemDC = CreateCompatibleDC( currHdc );
// Creates a bitmap with defined parameters
hbmScreen = CreateDIBSection( hdcMemDC, &bi, DIB_RGB_COLORS, (VOID**)&lpBitmapBits, NULL, 0 );
if(!BitBlt(hdcMemDC,
0,0,
width, height,
currHdc,
0,0,
SRCCOPY ))
{
DeleteDC( hdcMemDC );
return hbmScreen;
}
}
if (DeleteDC( hdcMemDC ) == FALSE ) {
return NULL;
}
return hbmScreen;
}
And luckily it works on WinXP. But on Win7/Win10 I have a quite different situation:
After switching to winlogon, SetThreadDesktop always returns FALSE with error 5 (access denied).
I tried to change strategy:
First of all, the program creates a list of all existing window stations and their desktops.
After that, the program "polls" all of the WINSTAs and HDESKs and saves screenshots to disk.
I tried to launch this program in 3 modes:
as an Administrator
as a service with desktop interaction enabled
at the winlogon desktop using CreateProcess flags (in this case the program simply crashes)
And the result was the same.
What am I doing wrong? Should I try the Desktop Duplication API?
Thanks in advance for your responses!
As winlogon is a secure desktop, you have to run your application under the LOCAL_SYSTEM account to access it.
Example: a Windows service that runs under LOCAL_SYSTEM and launches a user application (which captures the screen) in the console session, roughly as sketched below.
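A hypothetical sketch of that service side, assuming the service itself runs as LOCAL_SYSTEM: duplicate our own SYSTEM token, retarget it at the console session, and launch the capture process on the winlogon desktop. All names here are illustrative and error handling is minimal:

#include <windows.h>

BOOL LaunchCaptureInConsoleSession( LPWSTR szCmdLine )
{
    HANDLE hToken = NULL;
    HANDLE hDup = NULL;
    // The console session is where the winlogon desktop we want lives
    DWORD dwSession = WTSGetActiveConsoleSessionId();
    if ( !OpenProcessToken( GetCurrentProcess(),
                            TOKEN_DUPLICATE | TOKEN_QUERY, &hToken ) )
        return FALSE;
    if ( !DuplicateTokenEx( hToken, MAXIMUM_ALLOWED, NULL,
                            SecurityImpersonation, TokenPrimary, &hDup ) )
    {
        CloseHandle( hToken );
        return FALSE;
    }
    // Move the duplicated SYSTEM token into the console session;
    // this requires SE_TCB_NAME, which LOCAL_SYSTEM has
    SetTokenInformation( hDup, TokenSessionId, &dwSession, sizeof( dwSession ) );
    STARTUPINFOW si = { sizeof( si ) };
    si.lpDesktop = const_cast< LPWSTR >( L"winsta0\\Winlogon" );
    PROCESS_INFORMATION pi = { 0 };
    BOOL bOk = CreateProcessAsUserW( hDup, NULL, szCmdLine, NULL, NULL, FALSE,
                                     CREATE_NEW_CONSOLE, NULL, NULL, &si, &pi );
    if ( bOk )
    {
        CloseHandle( pi.hThread );
        CloseHandle( pi.hProcess );
    }
    CloseHandle( hDup );
    CloseHandle( hToken );
    return bOk;
}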
In your code there is no check on the return value of OpenInputDesktop, which may be NULL with error code 5 (access denied).
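A minimal sketch of that missing check, using the question's own names:

extern "C" __declspec(dllexport) void SetActiveDesktop() {
    if ( currentDesk != NULL )
        CloseDesktop( currentDesk );
    currentDesk = OpenInputDesktop( 0, FALSE, GENERIC_ALL );
    if ( currentDesk == NULL ) {
        // e.g. ERROR_ACCESS_DENIED (5) when not running as LOCAL_SYSTEM
        DWORD dwErr = GetLastError();
        return; // log dwErr instead of calling SetThreadDesktop( NULL )
    }
    if ( !SetThreadDesktop( currentDesk ) ) {
        DWORD dwErr = GetLastError(); // log and handle
    }
}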
Check this answer as well for more information
I have a bizarre issue here. I'm displaying a semi-transparent splash screen (a .png file) using layered windows. It works on some machines but not others. On the machines where it doesn't work, GetLastError returns 317 (which isn't very helpful). Has anyone experienced this before? Here are my relevant functions. The incoming parameters for CreateAsPNG are WS_VISIBLE|WS_POPUP (dwStyle), 0 (dwExStyle), and the handle to a hidden tool window for the parent, so no taskbar entry is created. I've verified that I can load an embedded PNG resource into CImage and that the size of the image is correct.
Thanks in advance for any help!
BOOL MySplashWnd::CreateAsPNG( DWORD dwStyle, DWORD dwExStyle, const CString& sTitle, HWND hWndParent )
{
ATL::CImage img;
CreateStreamOnResource( m_nBitmapID, img );
m_nWndWidth = img.GetWidth();
m_nWndHeight = img.GetHeight();
int nTop = 0;
int nLeft = 0;
GetTopLeft( nTop, nLeft );
dwExStyle |= WS_EX_LAYERED;
// Create the Splash Window
BOOL bRetVal = CWnd::CreateEx( dwExStyle, AfxRegisterWndClass( CS_CLASSDC ), sTitle,
dwStyle, nLeft, nTop, m_nWndWidth, m_nWndHeight, hWndParent, NULL );
//Couldn't create the window for some unknown reason...
X_ASSERT( bRetVal != FALSE );
if ( bRetVal )
{
HDC hScreenDC = ::GetDC( m_hWnd );
HDC hDC = ::CreateCompatibleDC( hScreenDC );
HBITMAP hBmp = ::CreateCompatibleBitmap( hScreenDC, m_nWndWidth, m_nWndHeight );
HBITMAP hBmpOld = ( HBITMAP ) ::SelectObject( hDC, hBmp );
img.Draw( hDC, 0, 0, m_nWndWidth, m_nWndHeight, 0, 0, m_nWndWidth, m_nWndHeight );
BLENDFUNCTION blend = { 0 };
blend.BlendOp = AC_SRC_OVER;
blend.BlendFlags = 0;
blend.SourceConstantAlpha = 255;
blend.AlphaFormat = AC_SRC_ALPHA;
POINT ptPos = { nLeft, nTop };
SIZE sizeWnd = { m_nWndWidth, m_nWndHeight };
POINT ptSource = { 0, 0 };
if ( ::UpdateLayeredWindow( m_hWnd, hScreenDC, &ptPos, &sizeWnd, hDC, &ptSource, 0, &blend, ULW_ALPHA ) )
{
}
else
{
// The last error value is 317 on some Win7 machines.
TRACE( _T( "*** Last error: %d\n" ), ::GetLastError() );
}
::SelectObject( hDC, hBmpOld );
::DeleteObject( hBmp );
::DeleteDC( hDC );
::ReleaseDC( m_hWnd, hScreenDC ); // must match the GetDC( m_hWnd ) above
}
return bRetVal;
}
void MySplashWnd::CreateStreamOnResource( UINT nIDRes, ATL::CImage& img )
{
HINSTANCE hInstance = ::GetMUIResourceInstance();
if ( hInstance == NULL )
{
return;
}
HRSRC hResource = ::FindResource( hInstance, MAKEINTRESOURCE( nIDRes ), "PNG" );
if ( hResource == NULL )
{
return;
}
DWORD dwResourceSize = ::SizeofResource( hInstance, hResource );
if ( dwResourceSize == 0 )
{
return;
}
HGLOBAL hImage = ::LoadResource( hInstance, hResource );
if ( hImage == NULL )
{
return;
}
LPVOID pvImageResourceData = ::LockResource( hImage );
if ( pvImageResourceData == nullptr )
{
return;
}
HGLOBAL hImageData = ::GlobalAlloc( GMEM_MOVEABLE, dwResourceSize );
if ( hImageData == NULL )
{
return;
}
LPVOID pvImageBuffer = ::GlobalLock( hImageData );
if ( pvImageBuffer != nullptr )
{
    ::CopyMemory( pvImageBuffer, pvImageResourceData, dwResourceSize );
    ::GlobalUnlock( hImageData );
    IStream* pStream = nullptr;
    // With fDeleteOnRelease == TRUE the stream owns hImageData and frees
    // it when released, so we must not call GlobalFree on it afterwards
    if ( SUCCEEDED( ::CreateStreamOnHGlobal( hImageData, TRUE, &pStream ) ) )
    {
        img.Load( pStream );
        pStream->Release();
        return;
    }
}
::GlobalFree( hImageData );
} // MySplashWnd::CreateStreamOnResource
UPDATE: I've found that even on the same machine, sometimes UpdateLayeredWindow succeeds and other times it fails (but when it does fail, the code is always 317). Another piece of info: this splash screen is run on a separate UI thread. It always works on my machine, though...
I'm having this same problem and I can't find any info on it either. Using the SetLayeredWindowAttributes method instead works, but I want to use the UpdateLayeredWindow way. I'm getting to the point where I'm going to debug through the whole Windows API to find out what is happening, since MSDN gives next to no info about this error message. The only difference is that the OpenGL layered-window demo I saw which uses this method uses CreateDIBSection instead of CreateCompatibleBitmap, and that seems to work on my PC.
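For reference, a minimal sketch of that substitution inside the question's CreateAsPNG; a 32-bpp top-down DIB section is the usual surface for per-pixel-alpha UpdateLayeredWindow calls (that this is the fix for error 317 is an assumption):

// Replace CreateCompatibleBitmap( hScreenDC, ... ) with a 32-bpp DIB section:
BITMAPINFO bmi = { 0 };
bmi.bmiHeader.biSize = sizeof( BITMAPINFOHEADER );
bmi.bmiHeader.biWidth = m_nWndWidth;
bmi.bmiHeader.biHeight = -m_nWndHeight; // negative height = top-down rows
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;
void* pvBits = nullptr;
HBITMAP hBmp = ::CreateDIBSection( hScreenDC, &bmi, DIB_RGB_COLORS,
                                   &pvBits, NULL, 0 );
// ...select it into hDC, draw the PNG into it (premultiplied alpha),
// then call UpdateLayeredWindow exactly as before...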
I've had UpdateLayeredWindow() work perfectly on a Windows 7 x86_64 machine, only for it to start failing when I disabled desktop composition. It turned out that
UpdateLayeredWindow (window_handle, NULL,
&position, &size,
buffer_hdc, &buffer_offset,
0, &blend_options, ULW_ALPHA);
works, while
UpdateLayeredWindow (window_handle, NULL,
&position, NULL,
buffer_hdc, &buffer_offset,
0, &blend_options, ULW_ALPHA);
does not work and fails with error 317.
I had been trying to skip some of the arguments because I didn't need to change the window size or contents, just to move it. With desktop composition disabled, this turned out to be impossible.
Not sure how this relates to your original problem, as you're also supplying the screen DC, while I don't.
I'm trying to read all the pixels in a given area of an HDC to find out whether a color is present. Currently I came up with:
IDirect3DSurface9* pSurface = 0;
p1->CreateOffscreenPlainSurface(1280, 1024,D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &pSurface, NULL);
p1->GetFrontBufferData(0, pSurface);
//assert( pSurface );
if( pSurface && GetTickCount() > dwGAKS_Fix )
{
HDC dc;
pSurface->GetDC( &dc );
COLORREF dpurp = D3DCOLOR_ARGB( 255, 102, 0, 153 );
for( DWORD h = 610; h <= 670; h++ )
{
for( DWORD w = 480; w<=530; w++ )
{
COLORREF dwPixel = GetPixel( dc, h, w );
// CString strPixel; strPixel.Format( "Pixel col: %u at: %u X %u", dwPixel, d, i );
//if( dx_Font )
if( dwPixel == dpurp )
{
dx_Font->DrawTextA(NULL, "Shoot", strlen("Shoot"), &pos, DT_NOCLIP, D3DCOLOR_XRGB(0, 255, 0));
}
else
dx_Font->DrawTextA(NULL, "NoShoot", strlen("NoShoot"), &pos, DT_NOCLIP, D3DCOLOR_XRGB(0, 255, 0));
}
}
dwGAKS_Fix = GetTickCount() + 15;
pSurface->ReleaseDC( dc );
pSurface->Release();
}
But this solution is slow, very slow. I need something somewhat more... uh, professional.
edit:
D3DLOCKED_RECT d3dlocked;
if( D3D_OK == pSurface->LockRect( &d3dlocked, 0, D3DLOCK_READONLY ) )
{
    UINT *pixels = (UINT *)d3dlocked.pBits;
    // Mask off the alpha byte before comparing; note that dpurp must be in
    // the surface's ARGB channel order, not a COLORREF
    if( ( pixels[52 + 15 * 1024] & 0x00FFFFFF ) == ( dpurp & 0x00FFFFFF ) )
    {
    }
    pSurface->UnlockRect();
}
GetPixel is always slow. You can get direct access to the bits in the off-screen surface using IDirect3DSurface9::LockRect and then scan through the bitmap yourself, which should be much quicker.
(Edit) Any given pixel (x, y) is the 32-bit value found at:
*(DWORD*)(((BYTE*)d3dlocked.pBits) + y * d3dlocked.Pitch + x * sizeof(DWORD));
You should AND the value with 0x00ffffff to ignore the alpha channel.
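Put together, a minimal sketch of scanning the question's region this way (the purple key translated to X8R8G8B8 layout; coordinates follow the GetPixel( dc, x, y ) calls above):

D3DLOCKED_RECT d3dlocked;
if( SUCCEEDED( pSurface->LockRect( &d3dlocked, NULL, D3DLOCK_READONLY ) ) )
{
    const BYTE* pBits = (const BYTE*)d3dlocked.pBits;
    const DWORD dwKey = 0x00660099;   // R=102, G=0, B=153 in X8R8G8B8
    bool bFound = false;
    for( DWORD y = 480; y <= 530 && !bFound; y++ )
    {
        // Pitch is in bytes and may exceed width * 4
        const DWORD* pRow = (const DWORD*)( pBits + y * d3dlocked.Pitch );
        for( DWORD x = 610; x <= 670; x++ )
        {
            if( ( pRow[x] & 0x00FFFFFF ) == dwKey )
            {
                bFound = true;
                break;
            }
        }
    }
    pSurface->UnlockRect();
}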