Drawing polygons using multithreading in C++

I am drawing polygons (Polygon(dc, points, 3)) in the WM_PAINT handler using C++. Because I have a large number of polygons, I am trying to implement multithreading. I am using VS2013, so I have included <thread>. I have created a function that I want to run on a thread:
// 'index' is this thread's id; 'fs' is the number of worker threads (the stride).
void anyFunс2(HDC dc, int index)
{
    POINT points[3];
    for (unsigned i = index; i < model->vertexFaces.size(); i += fs)
    {
        // Convert the points to clip and window coordinates
        // and only then fetch the results
        if (Algorithms::FetchPolygons(&model->finalizedVertices[0], model->vertexFaces[i], points))
        {
            Polygon(dc, points, 3);
        }
    }
}
For instance, suppose I have three threads. I have designed the code so that each thread renders every third element: the first thread renders polygons 0, 3, 6, 9, the second renders 1, 4, 7, 10, and the third renders 2, 5, 8, 11.
Here's my WM_PAINT event:
case WM_PAINT:
{
    // Get the window DC
    hdc = GetDC(hWnd);
    // Create a back-buffer DC
    bdc = CreateCompatibleDC(hdc);
    // Get the client-area dimensions
    RECT client;
    GetClientRect(hWnd, &client);
    // Create the back-buffer bitmap
    HBITMAP backBuffer = CreateCompatibleBitmap(hdc, client.right - client.left, client.bottom - client.top);
    // Release the window DC, because we no longer need it
    ReleaseDC(hWnd, hdc);
    hdc = NULL;
    // Select the pen, brush and back-buffer bitmap into the back DC and clear it
    HPEN hPen = CreatePen(PS_SOLID, 1, RGB(0, 25, 205));
    HBRUSH hBrush = CreateSolidBrush(RGB(0, 0, 55));
    SelectObject(bdc, hPen);
    SelectObject(bdc, hBrush);
    SelectObject(bdc, backBuffer);
    Rectangle(bdc, client.left, client.top, client.right, client.bottom);
    // Start the threads that draw the polygons into our back DC
    for (unsigned i = 0; i < func_threads.size(); i++)
    {
        func_threads.at(i) = thread(anyFunс2, bdc, i);
    }
    // Wait until all threads have finished their job
    for (unsigned i = 0; i < func_threads.size(); i++)
    {
        if (func_threads[i].joinable()) func_threads[i].join();
    }
    // Copy the back buffer to the window
    hdc = BeginPaint(hWnd, &ps);
    BitBlt(hdc, client.left, client.top, client.right, client.bottom, bdc, 0, 0, SRCCOPY);
    EndPaint(hWnd, &ps);
    // Delete all created objects
    DeleteObject(backBuffer);
    DeleteObject(hBrush);
    DeleteObject(hPen);
    DeleteDC(bdc);
    break;
}
As you can see, I start these threads in a loop and then, in a second loop, call join() on each of them. These threads all draw polygons into the same HDC. After the main thread has finished waiting for them, it copies everything from the back buffer to the window. However, the object is not fully drawn: not all polygons appear (see the attached image). Why is this happening?

The short answer is that GDI simply isn't designed to support drawing from multiple threads into the same DC simultaneously.
That leaves you with a few choices. The most direct would be to use PolyPolygon to draw all your polygons (or at least large numbers of them) in a single call. This seems particularly relevant in your case--from the looks of things, you're drawing lots of triangles, so a lot of the time taken is probably just the overhead to call the Polygon function, and not really time for Polygon to execute.
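For illustration, here is a minimal sketch of how the triangles could be batched into a single PolyPolygon call. It reuses the model, vertexFaces and Algorithms::FetchPolygons names from the question, so treat it as a sketch to adapt rather than drop-in code:
// needs <vector>
std::vector<POINT> allPoints;
std::vector<INT> vertexCounts;
allPoints.reserve(model->vertexFaces.size() * 3);
vertexCounts.reserve(model->vertexFaces.size());

POINT tri[3];
for (size_t i = 0; i < model->vertexFaces.size(); ++i)
{
    if (Algorithms::FetchPolygons(&model->finalizedVertices[0], model->vertexFaces[i], tri))
    {
        allPoints.insert(allPoints.end(), tri, tri + 3); // append the triangle's 3 vertices
        vertexCounts.push_back(3);                       // every polygon here is a triangle
    }
}

// One GDI call instead of thousands of Polygon() calls.
if (!vertexCounts.empty())
    PolyPolygon(bdc, allPoints.data(), vertexCounts.data(), static_cast<int>(vertexCounts.size()));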
Another possibility would be to create a separate back buffer for each thread to draw into, then use BitBlt to combine them with (for example) an OR raster operation, so you get the same overall effect as the original drawing.
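A rough sketch of that approach, assuming the hdc, bdc and client-area width/height from the question's WM_PAINT code, the worker shown above (written as anyFunc2 here; substitute your own function), and SRCPAINT as the OR raster operation:
// needs <vector> and <thread>
const int threadCount = 3;
std::vector<HDC> threadDCs(threadCount);
std::vector<HBITMAP> threadBmps(threadCount);
std::vector<std::thread> workers;

for (int t = 0; t < threadCount; ++t)
{
    threadDCs[t] = CreateCompatibleDC(hdc);
    threadBmps[t] = CreateCompatibleBitmap(hdc, width, height);
    SelectObject(threadDCs[t], threadBmps[t]);
    // Select your pen/brush into each thread DC here as well.
    PatBlt(threadDCs[t], 0, 0, width, height, BLACKNESS);  // start black so OR-ing works
    workers.emplace_back(anyFunc2, threadDCs[t], t);       // each thread gets its own DC
}
for (auto& w : workers) w.join();

// Merge the per-thread buffers into the main back buffer on the main thread.
for (int t = 0; t < threadCount; ++t)
{
    BitBlt(bdc, 0, 0, width, height, threadDCs[t], 0, 0, SRCPAINT); // SRCPAINT = source OR destination
    DeleteDC(threadDCs[t]);       // the bitmap is no longer selected after this
    DeleteObject(threadBmps[t]);
}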
The third (and probably best) way to support drawing a large number of polygons would be to switch from using GDI to using something like DirectX or OpenGL that's designed from the ground up to support exactly that.

You can use CreateDIBSection() to draw shapes with multiple threads.
Have each thread write pixels directly into the DIB's memory. Then, when all the threads are done, you can BitBlt the DIB to the screen.
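A minimal sketch of that idea, assuming the window DC hdc and the client-area width/height from the first snippet, and a hypothetical rasterizeRows function that holds your own scanline/triangle code; only the final BitBlt touches GDI, and it stays on the main thread:
// needs <vector> and <thread>
BITMAPINFO bmi = {};
bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
bmi.bmiHeader.biWidth = width;
bmi.bmiHeader.biHeight = -height;               // negative height = top-down rows
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;

void* bits = nullptr;
HBITMAP dib = CreateDIBSection(nullptr, &bmi, DIB_RGB_COLORS, &bits, nullptr, 0);
DWORD* pixels = static_cast<DWORD*>(bits);      // width * height pixels, 0x00RRGGBB

// Each thread fills its own band of rows directly in memory, no GDI involved.
const int threadCount = 3;
std::vector<std::thread> workers;
for (int t = 0; t < threadCount; ++t)
    workers.emplace_back(rasterizeRows, pixels, width, height, t, threadCount);
for (auto& w : workers) w.join();

// Present the finished DIB with a single blit on the main thread.
HDC memDC = CreateCompatibleDC(hdc);
HGDIOBJ oldBmp = SelectObject(memDC, dib);
BitBlt(hdc, 0, 0, width, height, memDC, 0, 0, SRCCOPY);
SelectObject(memDC, oldBmp);
DeleteDC(memDC);
DeleteObject(dib);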

Related

Using Coordinate Spaces and Transformations to scroll and scale Enhanced Windows Metafile

INTRODUCTION:
I am building a small app that displays an .emf (Enhanced Windows Metafile) in a window.
If the image cannot fit inside the window, scroll bars are used to show the parts that are not visible.
Since I am dealing with metafiles, I am trying to add zooming as well.
RELEVANT INFORMATION:
I am playing an .emf file (from disk) into a memory device context (created with the CreateCompatibleDC API). Then I use the BitBlt API to transfer that image into the main window's client area. I am doing all this to avoid flickering.
Reading through MSDN I found the documentation on Using Coordinate Spaces and Transformations and immediately realized its potential for solving my task of scaling/scrolling the metafile.
PROBLEM:
I do not know how to use the above APIs to scale/scroll the metafile inside the memory device context so that I can BitBlt that image into the main window's device context (this is my first time tackling this type of task).
MY EFFORTS TO SOLVE THE PROBLEM:
I have experimented with the XFORM matrix to achieve scaling, like this:
case WM_ERASEBKGND: // prevents flickering
    return 1L;
case WM_PAINT:
{
    hdc = BeginPaint(hWnd, &ps);
    // get the main window's client rectangle
    RECT rc = { 0 };
    GetClientRect(hWnd, &rc);
    // fill the rectangle with a gray brush;
    // this is necessary because WM_ERASEBKGND is bypassed, see above
    FillRect(hdc, &rc, (HBRUSH)GetStockObject(LTGRAY_BRUSH));
    // OK, here is where I tried to use the APIs I mentioned earlier;
    // my goal is to scale the EMF down by half
    int prevGraphicsMode = SetGraphicsMode(hdc, GM_ADVANCED);
    XFORM zoomMatrix = { 0 };
    zoomMatrix.eDx = 0;
    zoomMatrix.eDy = 0;
    zoomMatrix.eM11 = 0.5;
    zoomMatrix.eM12 = 0;
    zoomMatrix.eM21 = 0;
    zoomMatrix.eM22 = 0.5;
    // apply the zooming factor
    SetWorldTransform(hdc, &zoomMatrix);
    // draw the image
    HENHMETAFILE hemf = GetEnhMetaFile(L".\\Example.emf");
    PlayEnhMetaFile(hdc, hemf, &rc);
    DeleteEnhMetaFile(hemf);
    // restore the original graphics mode
    SetGraphicsMode(hdc, prevGraphicsMode);
    // all done, end painting
    EndPaint(hWnd, &ps);
}
return 0L;
In the above snippet the metafile was scaled properly and played from the top left corner of the client area.
I didn't bother with maintaining the aspect ratio or centering the image; my main goal for now was to figure out how to use the XFORM matrix to scale the metafile.
So far so good, or so I thought.
I tried doing the same for the memory device context, but when BitBlt-ing the image I got horrible pixelation, and the blitted image was not properly scaled.
Below is a small snippet that reproduces the problem:
static HDC memDC;            // in WndProc
static HBITMAP bmp, bmpOld;  // in WndProc; needed for proper cleanup
case WM_CREATE:
{
    HDC hdc = GetDC(hwnd);
    // create the memory device context
    memDC = CreateCompatibleDC(hdc);
    // get the main window's client rectangle
    RECT rc = { 0 };
    GetClientRect(hwnd, &rc);
    // create the bitmap that we will draw on
    bmp = CreateCompatibleBitmap(hdc, rc.right - rc.left, rc.bottom - rc.top);
    // select the bitmap into the memory device context
    bmpOld = (HBITMAP)SelectObject(memDC, bmp);
    // fill the rectangle with a gray brush
    FillRect(memDC, &rc, (HBRUSH)GetStockObject(LTGRAY_BRUSH));
    // scale the EMF down by half
    int prevGraphicsMode = SetGraphicsMode(memDC, GM_ADVANCED);
    XFORM zoomMatrix = { 0 };
    zoomMatrix.eDx = 0;
    zoomMatrix.eDy = 0;
    zoomMatrix.eM11 = 0.5;
    zoomMatrix.eM12 = 0;
    zoomMatrix.eM21 = 0;
    zoomMatrix.eM22 = 0.5;
    // apply the zooming factor
    SetWorldTransform(memDC, &zoomMatrix);
    // draw the image
    HENHMETAFILE hemf = GetEnhMetaFile(L".\\Example.emf");
    PlayEnhMetaFile(memDC, hemf, &rc);
    DeleteEnhMetaFile(hemf);
    // restore the original graphics mode
    SetGraphicsMode(memDC, prevGraphicsMode);
    // all done, release the window DC
    ReleaseDC(hwnd, hdc);
}
return 0L;
case WM_PAINT:
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);
    RECT rc = { 0 };
    GetClientRect(hwnd, &rc);
    BitBlt(hdc, 0, 0,
           rc.right - rc.left,
           rc.bottom - rc.top,
           memDC, 0, 0, SRCCOPY);
    EndPaint(hwnd, &ps);
}
return 0L;
case WM_DESTROY:
    SelectObject(memDC, bmpOld);
    DeleteObject(bmp);
    DeleteDC(memDC);
    PostQuitMessage(0);
    break;
After carefully rereading the documentation for BitBlt, I found the following important segment:
If other transformations exist in the source device context (and a matching transformation is not in effect in the destination device context), the rectangle in the destination device context is stretched, compressed, or rotated, as necessary.
Using ModifyWorldTransform(memDC, NULL, MWT_IDENTITY), as user Jonathan Potter suggested, fixed this problem.
Those were my attempts as far as scaling is concerned. I then tried to implement scrolling by experimenting with SetWindowOrgEx and similar APIs.
I managed to move the image after a few experiments, but I haven't fully grasped why and how that worked.
The reason is that I was unable to fully understand terms like window and viewport origins; it is just too abstract for me at the moment. As I write this post, I am rereading the documentation and trying to solve this on my own.
QUESTIONS:
How can I use the APIs (from the link I added above) to scale and/or scroll the metafile in the memory DC, and then properly BitBlt it into the main window's device context?
REMARKS:
I realize that a code example could be large, so I am not asking for one. I do not want people to write the code for me, but to help me understand what I must do; I just need to fully grasp the concept of applying the above APIs the way I need. Therefore answers/comments can include instructions and small pseudocode where appropriate. I realize the question might be broad, so I would appreciate help narrowing down my problem with constructive criticism and comments.
Thank you for reading this post, offering help, and your understanding.
Best regards.
According to the SetWorldTransform documentation:
it will not be possible to reset the graphics mode for the device context to the default GM_COMPATIBLE mode, unless the world transformation has first been reset to the default identity transformation
So even though you are attempting to reset the graphics mode with this call:
SetGraphicsMode(memDC, prevGraphicsMode);
it actually has no effect, because the world transformation has not first been reset to the identity transform. Adding the following line before the SetGraphicsMode call resets the transform and returns the DC to normal mapping:
ModifyWorldTransform(memDC, NULL, MWT_IDENTITY);
SetGraphicsMode(memDC, prevGraphicsMode);
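To put the zooming and scrolling together, here is an illustrative sketch of what the memory-DC drawing could look like; zoom, scrollX and scrollY are placeholders for your zoom factor and scroll bar positions, while memDC, hemf and rc are the names from the question:
FLOAT zoom = 0.5f;              // e.g. 50 %
int scrollX = 0, scrollY = 0;   // e.g. taken from GetScrollInfo

SetGraphicsMode(memDC, GM_ADVANCED);

XFORM xf = { 0 };
xf.eM11 = zoom;                 // horizontal scale
xf.eM22 = zoom;                 // vertical scale
xf.eDx = -(FLOAT)scrollX;       // pan by the scroll offsets
xf.eDy = -(FLOAT)scrollY;
SetWorldTransform(memDC, &xf);

PlayEnhMetaFile(memDC, hemf, &rc);

// Reset to identity before restoring the graphics mode and before BitBlt,
// otherwise the blit itself is transformed as well.
ModifyWorldTransform(memDC, NULL, MWT_IDENTITY);
SetGraphicsMode(memDC, GM_COMPATIBLE);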

MSPaint-like app writing. How to do BitBlt right?

I'm writing a simple MSPaint-like program in C++ using windows.h (GDI). For my program I only need a pen tool. So I need to store the main window's picture somewhere (for example in a memory HDC and HBITMAP) so I can redraw it later in the WM_PAINT message.
When do I first have to copy the window's HDC into my memory HDC and HBITMAP? In which message should I store it? For example, I think we can't do it in WM_CREATE because we have no window yet.
What is the difference between PatBlt and BitBlt? Which should I use for my app?
How do I copy the window's HDC content to my memory HDC and bitmap? I'm trying to do something like this:
LPRECT lpRect;
GetClientRect(hwnd, lpRect);
width = lpRect->right - lpRect->left;
height = lpRect->bottom - lpRect->top;
HDC hDC = GetDC(hwnd);
memoryDC = CreateCompatibleDC(hDC);
memoryBitmap = CreateCompatibleBitmap(hDC, width, height);
SelectObject(memoryDC, memoryBitmap);
PatBlt(memoryDC, 0, 0, width, height, PATCOPY);
ReleaseDC(hwnd, hDC);
But this doesn't work: the program crashes.
How do I restore the window in WM_PAINT after that?
How do I clear my window with white color?
1: I would recommend you lazy load your off-screen canvas as late as possible. If you need it in WM_PAINT and you haven't created it yet, create it then. If you need it at the point someone begins drawing, create it then. If it exists when you need it, then use it.
2: PatBlt fills a region of a bitmap using the device context's current brush. Brushes define patterns, which is why it's called PatBlt. BitBlt copies data from a source bitmap to a destination bitmap. You would use a BitBlt when you wanted to move the image from the off-screen bitmap to the frame buffer.
3: The lpRect parameter of GetClientRect is an output parameter. That means you have to supply the memory. In this case, GetClientRect is trying to write the rectangle to a null pointer and causing the crash.
RECT clientRect;
GetClientRect(hwnd, &clientRect);
width = clientRect.right - clientRect.left;
height = clientRect.bottom - clientRect.top;
WM_PAINT seems to be the best place to create the memory HDC. You can do something like this (a fuller sketch follows after the snippet):
case WM_PAINT:
    if (!first_paint)
    {
        ...code
        first_paint = true;
    }
    ...more code
    break;
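A possible expansion of that pseudocode, assuming memoryDC and memoryBitmap are statics (or class members) that start out as NULL:
case WM_PAINT:
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);

    RECT rc;
    GetClientRect(hwnd, &rc);
    int width = rc.right - rc.left;
    int height = rc.bottom - rc.top;

    if (memoryDC == NULL)                                 // lazy creation on the first paint
    {
        memoryDC = CreateCompatibleDC(hdc);
        memoryBitmap = CreateCompatibleBitmap(hdc, width, height);
        SelectObject(memoryDC, memoryBitmap);
        PatBlt(memoryDC, 0, 0, width, height, WHITENESS); // start with a white canvas
    }

    BitBlt(hdc, 0, 0, width, height, memoryDC, 0, 0, SRCCOPY); // show the canvas
    EndPaint(hwnd, &ps);
    break;
}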

D3D11: How to draw GDI Text to a DXGI Surface? (Without D2D)

I need some help with drawing text to a texture with GDI and D3D11. I tried using D2D/DirectWrite, but it only supports D3D10, not D3D11 as I need. Everything I have tried has failed so far...
Now I want to use GDI methods to write into the texture.
So I created a texture with these parameters:
Usage = D3D11_USAGE_DEFAULT;
Format = DXGI_FORMAT_B8G8R8A8_UNORM;
BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
CPUAccessFlags = 0;
MiscFlags = D3D11_RESOURCE_MISC_GDI_COMPATIBLE
Then I created a normal RenderTargetView from this texture, as Microsoft says here: http://msdn.microsoft.com/en-us/library/ff476203%28v=vs.85%29.aspx
Next step: get the DXGI interface:
m_pTexFSText->QueryInterface(__uuidof(IDXGISurface1), (void **)(&m_pDXGISurface));
In the Render function I just do this:
m_pDeviceContext->OMSetRenderTargets(1, &m_pTextRenderTarget, NULL);
HDC hDc = NULL;
if (FAILED(m_pDXGISurface->GetDC(TRUE, &hDc)))
    return E_FAIL;
COLORREF bla = SetPixel(hDc, 1, 1, RGB(255, 255, 255));
bool hmm = TextOutA(hDc, 10, 10, "LALALA!", 7);
if (FAILED(m_pDXGISurface->ReleaseDC(NULL)))
    return E_FAIL;
The problem is that the texture is still empty after the GDI drawing (also tested with PIX).
Everything runs and there are no error messages.
I hope somebody can explain how this works.
Thanks, Stefan
EDIT: I also tried GetDC(FALSE, &hDc) (according to the documentation): same result -> nothing.
I actually fought this problem a lot during the last week - but I've got it all working! Here is a list of things you should know/do to make it all work:
Check the surface requirements for the GetDC method to work here: http://msdn.microsoft.com/en-us/library/windows/desktop/ff471345(v=vs.85).aspx
Keep the following in mind when using this method:
•You must create the surface by using the D3D11_RESOURCE_MISC_GDI_COMPATIBLE flag for a surface or by using the DXGI_SWAP_CHAIN_FLAG_GDI_COMPATIBLE flag for swap chains, otherwise this method fails.
•You must release the device and call the IDXGISurface1::ReleaseDC method before you issue any new Direct3D commands.
•This method fails if an outstanding DC has already been created by this method.
•The format for the surface or swap chain must be DXGI_FORMAT_B8G8R8A8_UNORM_SRGB or DXGI_FORMAT_B8G8R8A8_UNORM.
•On GetDC, the render target in the output merger of the Direct3D pipeline is unbound from the surface. You must call the ID3D11DeviceContext::OMSetRenderTargets method on the device prior to Direct3D rendering after GDI rendering.
•Prior to resizing buffers you must release all outstanding DCs.
If you're going to use it on the back buffer, remember to re-bind the render target after you've called ReleaseDC. It is not necessary to manually unbind the RT before calling GetDC, as that method does it for you.
You cannot use any Direct3D drawing between the GetDC() and ReleaseDC() calls, as the surface is exclusively locked by DXGI for GDI. However, you can mix GDI and D3D rendering, provided that you call GetDC()/ReleaseDC() every time you need GDI, before moving on to D3D.
This last bit may sound easy, but you'd be surprised how many developers fall into this trap: when you draw with GDI on the back buffer, remember that it is the back buffer, not the front buffer, so in order to actually see what you've drawn you have to re-bind the RT to the OM and call swapChain->Present() so the back buffer becomes the front buffer and its contents are displayed on the screen.
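To summarize the sequence, here is a condensed, illustrative sketch; m_swapChain, m_context and m_renderTargetView stand in for your own objects, and the swap chain is assumed to have been created with DXGI_SWAP_CHAIN_FLAG_GDI_COMPATIBLE:
IDXGISurface1* surface = nullptr;
if (SUCCEEDED(m_swapChain->GetBuffer(0, __uuidof(IDXGISurface1), (void**)&surface)))
{
    HDC hdc = nullptr;
    if (SUCCEEDED(surface->GetDC(FALSE, &hdc)))     // locks the surface for GDI, unbinds the RT
    {
        SetBkMode(hdc, TRANSPARENT);
        SetTextColor(hdc, RGB(255, 255, 0));        // anything but the default black
        TextOutA(hdc, 10, 10, "LALALA!", 7);
        surface->ReleaseDC(nullptr);                // hand the surface back to DXGI
    }
    surface->Release();
}

// GetDC unbound the render target, so re-bind it before further D3D work...
m_context->OMSetRenderTargets(1, &m_renderTargetView, nullptr);
// ...and present, otherwise the text only ever lives in the back buffer.
m_swapChain->Present(0, 0);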
Maybe you're doing everything fine, and it's just that the text drawing doesn't do what you expect?
COLORREF bla = SetPixel(hDc,1,1,RGB(255,255,255));
bool hmm = TextOutA(hDc, 10, 10, "LALALA!", 7);
I don't see how you expect TextOutA to guess that bla should be used as the text color. AFAIK the default text color in a newly created/obtained DC is black. I'm not sure about the background fill mode, but if it's TRANSPARENT by default, that fully explains why nothing appears to be drawn.
I'd change your code to the following:
COLORREF bla = SetPixel(hDc,1,1,RGB(255,255,255));
VERIFY(SetTextColor(hDc, bla) != CLR_INVALID);
CRect rc(0, 0, 30, 20); // put relevant coordinates
VERIFY(ExtTextOut(hDc, rc.left, rc.top, ETO_CLIPPED, &rc, "LALALA!", 7));
I am going to use it in the back buffer. I am not sure if it's done correctly. I can't see the drawing. It's showing black.
HDC GetSurfaceDC()
{
    m_pSurface1 = nullptr;
    HDC hdc{};
    // Set up the swap chain surface
    IF_FAILED_THROW_HR(m_swapChain->GetBuffer(0, IID_PPV_ARGS(&m_pSurface1)));
    // Obtain the back buffer for this window, which will be the final 3D render target.
    ID3D11Texture2DPtr backBuffer;
    IF_FAILED_THROW_HR(m_swapChain->GetBuffer(0, IID_PPV_ARGS(&backBuffer)));
    // Create a descriptor for the RenderTargetView.
    CD3D11_RENDER_TARGET_VIEW_DESC renderTargetViewDesc(D3D11_RTV_DIMENSION_TEXTURE2DARRAY, DXGI_FORMAT_B8G8R8A8_UNORM, 0, 0, 1);
    ID3D11RenderTargetViewPtr renderTargetView;
    // Create a view interface on the render target to use on bind for mono or left eye view.
    IF_FAILED_THROW_HR(m_device->CreateRenderTargetView(backBuffer, &renderTargetViewDesc, &renderTargetView));
    m_context->OMSetRenderTargets(1, &renderTargetView.GetInterfacePtr(), nullptr);
    IF_FAILED_THROW_HR(m_pSurface1->GetDC(FALSE, &hdc));
    return hdc;
}

void ReleaseSurfaceDC()
{
    if (m_pSurface1 == nullptr)
        return;
    // When finished drawing, release the DC
    m_pSurface1->ReleaseDC(nullptr);
    m_context->OMSetRenderTargets(1, &m_renderTargetView.GetInterfacePtr(), m_depthStencilView);
}
I have used swap chain desc:
DXGI_SWAP_CHAIN_DESC swapChainDesc = { 0 };
swapChainDesc.BufferDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
swapChainDesc.Flags = DXGI_SWAP_CHAIN_FLAG_GDI_COMPATIBLE;

C++ GDI+ drawing text on a transparent layered window

(unmanaged C++)
I have already succeeded in drawing PNG files to a transparent layered window that I can drag around the desktop, but now my problem is drawing text on that transparent layered window.
Here's my code and my attempt at drawing text in the middle; it's important to note that I'm using the screen DC instead of the one from the WM_PAINT message.
[edit]
Updated code after the comments: now I'm just trying to write text on the bitmap before getting the HBITMAP version, which I need to use.
This time I'm using DrawString, because TextOut() isn't GDI+ (I hope DrawString really is GDI+).
It still doesn't work though; I wonder what I'm doing wrong.
void Draw() // draws a frame on the layered window AND moves it based on x and y
{
    HDC screenDC( NULL );                       // grab screen
    HDC sourceDC( CreateCompatibleDC(screenDC) );
    POINT pos = { x, y };                       // drawing location
    POINT sourcePos = { 0, 0 };                 // top left of image
    SIZE size = { 100, 100 };                   // 100x100 image
    BLENDFUNCTION blendFunction = { 0 };
    HBITMAP bufferBitmap = { 0 };
    Bitmap* TheBitmap = crnimage;               // crnimage was already loaded earlier

    // ------------ important part goes here, my attempt at drawing text ------------ //
    Gdiplus::Graphics Gx(TheBitmap);
    // Font* myFont = new Font(sourceDC);
    Font myFont(L"Arial", 16);
    RectF therect;
    therect.Height = 20;
    therect.Width = 180;
    therect.X = 0;
    therect.Y = 0;
    StringFormat format;
    format.SetAlignment(StringAlignmentCenter);
    format.GenericDefault();
    Gdiplus::SolidBrush GxTextBrush(Gdiplus::Color(255, 255, 0, 255));
    WCHAR thetext[] = L"Sample Text";
    int stats = Gx.DrawString(thetext, -1, &myFont, therect, &format, &GxTextBrush);
    if (stats)                                  // DrawString returns nonzero if there is an error
        msgbox(stats);
    stats = Gx.DrawRectangle(&Pen(Color::Red, 3), therect);
    // the rectangle and text both draw fine now
    // ------------ important part goes here, my attempt at drawing text ------------ //

    TheBitmap->GetHBITMAP(0, &bufferBitmap);
    HBITMAP oldBmpSelInDC;
    oldBmpSelInDC = (HBITMAP)SelectObject(sourceDC, bufferBitmap);

    // some alpha blending
    blendFunction.BlendOp = AC_SRC_OVER;
    blendFunction.SourceConstantAlpha = wndalpha;
    blendFunction.AlphaFormat = AC_SRC_ALPHA;
    COLORREF colorKey( RGB(255, 0, 255) );
    DWORD flags( ULW_ALPHA );
    UpdateLayeredWindow(hWnd, screenDC, &pos, &size, sourceDC, &sourcePos,
                        colorKey, &blendFunction, flags);

    // release the buffered image from memory
    SelectObject(sourceDC, oldBmpSelInDC);
    DeleteDC(sourceDC);
    DeleteObject(bufferBitmap);
    // finally release the screen
    ReleaseDC(0, screenDC);
}
I've been trying to write text on my layered window for two days now. From those attempts I know there are several ways to go about it
(unfortunately I have no idea how exactly).
The usual option I see is drawing text on a bitmap, then rendering the bitmap itself:
Use Gdi+ to load a bitmap
Create a Graphics object from the bitmap
Use DrawString to write text to the bitmap
Dispose of the Graphics object
Use the bitmap Save method to save the result to a file
Apparently one can also make a Graphics object from a DC and then draw text on the DC, but again I have no clue how to do this.
The overall approach looks right, but I think you've got some problems with the DrawString call. Check out the documentation (especially the sample) on MSDN.
Gx.DrawString(thetext, 4, NULL, therect, NULL, NULL)
The third, fifth, and sixth parameters (font, format, and brush) probably need to be specified. The documentation doesn't say that they are optional. Passing NULL for these is probably causing GDI+ to treat the call as a no-op.
The second parameter should not include the terminating L'\0' in the count. It's probably safest to pass -1 if your string is always null-terminated.
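For example, a hedged version of the call with the font, format and brush supplied and -1 as the length (Gx is the Graphics object from the question; the concrete values are just placeholders):
Gdiplus::Font font(L"Arial", 16);
Gdiplus::StringFormat format;
format.SetAlignment(Gdiplus::StringAlignmentCenter);
Gdiplus::SolidBrush brush(Gdiplus::Color(255, 255, 0, 255));
Gdiplus::RectF rect(0.0f, 0.0f, 180.0f, 20.0f);

Gdiplus::Status st = Gx.DrawString(L"Sample Text", -1, &font, rect, &format, &brush);
// st == Gdiplus::Ok on success; any other value is an error code.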

Caching a bitmap

I'm writing a Win32 application using C++.
In this application I'm handling the WM_PAINT message:
case WM_PAINT:
    hdc = BeginPaint(hWnd, &ps);
    GdiplusStartup(&gdiplusToken, &gdiPlusStartup, 0);
    DrawMap(ps.hdc, hWnd);
    EndPaint(hWnd, &ps);
    break;
And in the DrawMap function I have something like this:
void DrawMap(HDC hdc, HWND hWnd)
{
    if (!isDrawn)
    {
        // (some calculations)
        Graphics g(hdc);
        Bitmap img(max_x, max_y, &g);
        int zoom_factor = 50;
        for (int i = 0; i < segments.size(); i++)
        {
            // (some math)
            for (int j = 0; j < segments.at(i).point_count; j++)
            {
                // (another dose of math)
            }
            g.DrawLines(&pen, segmentPoints, segments.at(i).point_count);
            delete [] segmentPoints;
        }
        g.Save();
        isDrawn = true;
    }
    else
    {
        // here is the problem
    }
}
In the code above, what I want to do is render the image once; later, when the window is resized, moved, or anything else happens that requires repainting, it should not render the Bitmap from scratch but use a cached one instead.
The problem is that Bitmap does not allow copying (the copy constructor is inaccessible).
Another problem is that when I try to save the image to a file or a stream I receive an "Invalid parameter" error (i.e. the return code is 2):
CLSID pngClsid;
GetEncoderClsid(L"image/png", &pngClsid);
img.Save(_T("m.png"), &Gdiplus::ImageFormatPNG, NULL);
Clone() also does not seem to work, because when I define a pointer to a Bitmap, clone the bitmap into it, and then in the "else" branch use:
Graphics g(hdc);
g.DrawImage(bmpClone, 50, 50);
Nothing is rendered.
Any ideas on how to cache the Bitmap?
Clone() should work, but without seeing your code (which uses it) it's hard to know what's going on. As an alternative, another (more circuitous) approach would be to call GetHBITMAP() on the original Bitmap, store the GDI bitmap handle and then construct the new Bitmap with the Bitmap(HBITMAP, HPALETTE) constructor in future repaints.
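A rough sketch of that HBITMAP round trip; cachedHbm is an illustrative name, while img and hdc are the names from the question:
static HBITMAP cachedHbm = NULL;

if (cachedHbm == NULL)
{
    // ... render into 'img' as in the question ...
    img.GetHBITMAP(Gdiplus::Color(255, 255, 255, 255), &cachedHbm); // keep only the GDI handle
}
else
{
    Gdiplus::Bitmap cached(cachedHbm, NULL);   // rebuild a GDI+ Bitmap from the cached handle
    Gdiplus::Graphics g(hdc);
    g.DrawImage(&cached, 0, 0);                // repaint from the cache
}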
Instead of declaring img as a local object, make it a static or a member of a class. Then it will be available at the next WM_PAINT without needing to be copied.
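For instance, a minimal sketch along those lines, with illustrative names and the drawing loop elided as in the question:
static Gdiplus::Bitmap* cachedImg = nullptr;

if (cachedImg == nullptr)
{
    cachedImg = new Gdiplus::Bitmap(max_x, max_y, PixelFormat32bppARGB);
    Gdiplus::Graphics g(cachedImg);            // draw the segments into the cached bitmap once
    // ... g.DrawLines(...) for every segment, as in DrawMap ...
}

Gdiplus::Graphics screen(hdc);                 // every WM_PAINT just draws the cached bitmap
screen.DrawImage(cachedImg, 0, 0);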