WinAPI Polygon Deforms on Rotation - C++

I'm a bit stuck on a problem with my polygon (a square): it deforms on rotation. I tried the standard SetWorldTransform function, but was disappointed with it.
The rotation function looks OK; possibly the main problem is an error that accumulates after every rotation.
int xCenter = 105;
int yCenter = 105;
POINT pnts[5];

void square()
{
    pnts[0].x = 70;
    pnts[0].y = 70;
    pnts[1].x = 140;
    pnts[1].y = 70;
    pnts[2].x = 140;
    pnts[2].y = 140;
    pnts[3].x = 70;
    pnts[3].y = 140;
    pnts[4].x = 70;
    pnts[4].y = 70;
}
void Drawsquare(HWND hWin)
{
    HDC hdc = GetDC(hWin);
    LOGBRUSH lBrush;
    lBrush.lbStyle = BS_HOLLOW;
    HBRUSH hBrush = CreateBrushIndirect(&lBrush);
    HPEN hPen = CreatePen(PS_SOLID, 1, RGB(0, 0, 0));
    // Remember the old objects so they can be restored before cleanup.
    HGDIOBJ hOldBrush = SelectObject(hdc, hBrush);
    HGDIOBJ hOldPen = SelectObject(hdc, hPen);
    Polygon(hdc, pnts, 5);
    SelectObject(hdc, hOldBrush);
    SelectObject(hdc, hOldPen);
    DeleteObject(hBrush);
    DeleteObject(hPen);
    ReleaseDC(hWin, hdc);
}
void Rotate(HWND hWin)
{
    HDC hdc;
    RECT rect;
    hdc = GetDC(hWin);
    double pi = acos(-1);
    double ang = 45 * pi / 180;
    for (int i = 0; i < 5; i++)
    {
        pnts[i].x = (pnts[i].x - xCenter)*cos(ang) - (pnts[i].y - yCenter)*sin(ang) + xCenter;
        pnts[i].y = (pnts[i].x - xCenter)*sin(ang) + (pnts[i].y - yCenter)*cos(ang) + yCenter;
    }
    GetClientRect(hWin, &rect);
    ClearScreen(hdc, rect);
    Drawsquare(hWin);
    ReleaseDC(hWin, hdc);
}

Store your points in a custom point structure that uses doubles instead of ints, and use this type instead of POINT for all the logic operations:
struct PrecisePoint
{
    double x;
    double y;
};
Then copy them into the POINT array right before the Polygon(hdc, pnts, 5) call. You can add a helper like:
void precisePointsToPoints(const PrecisePoint* src, POINT* dst, int length);

Your coordinates are rotated as floating-point numbers, then coerced to integers when stored back into the POINT structures. Those truncation errors accumulate with every rotation and cause the distortion. (Note also that Rotate overwrites pnts[i].x and then uses the new value to compute pnts[i].y on the next line; both coordinates must be computed from the pre-rotation values.)
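A minimal sketch of this approach (RotatePrecise is an illustrative name; it reuses the question's xCenter/yCenter globals):
#include <cmath>

PrecisePoint precisePnts[5]; // rotated with full double precision
POINT pnts[5];               // integer copy handed to Polygon()

// Rotate the double-precision points in place. Both new coordinates are
// computed from temporaries holding the pre-rotation values, so the new
// x never leaks into the computation of the new y.
void RotatePrecise(double angRad)
{
    for (int i = 0; i < 5; i++)
    {
        double dx = precisePnts[i].x - xCenter;
        double dy = precisePnts[i].y - yCenter;
        precisePnts[i].x = dx * cos(angRad) - dy * sin(angRad) + xCenter;
        precisePnts[i].y = dx * sin(angRad) + dy * cos(angRad) + yCenter;
    }
}

// Round (rather than truncate) only at the last moment, for drawing.
void precisePointsToPoints(const PrecisePoint* src, POINT* dst, int length)
{
    for (int i = 0; i < length; i++)
    {
        dst[i].x = lround(src[i].x);
        dst[i].y = lround(src[i].y);
    }
}
Call precisePointsToPoints(precisePnts, pnts, 5) right before Polygon(hdc, pnts, 5). The doubles never lose precision between rotations, so repeated 45-degree turns map the square back onto itself.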

Related

Win32 GDI: AlphaBlend() not using constant alpha value correctly

The code provided at the end draws a grid of red 3x3 px rectangles, each with a random constant alpha value, using AlphaBlend(). The output, however, turns out not to be "quite" random: notice the runs of constant alpha along the x-axis.
What might be causing this?
P.S. Stepping through the debugger produces the expected output.
Code to produce output:
void draw_mark(HDC hdc, int x, int y,
               COLORREF mark_clr, int mark_w, int mark_h, BYTE alpha);

void produce_output(HWND hWnd) {
    InvalidateRect(hWnd, NULL, TRUE);
    UpdateWindow(hWnd);
    const int grid_w = 64, grid_h = 64;
    const int mark_sz = 3;
    HDC hdc = GetDC(hWnd);
    for(int y = 0; y < grid_h; ++y) {
        for(int x = 0; x < grid_w; ++x) {
            BYTE rnd_alpha = rand(); // use random alpha for each mark
            draw_mark(hdc, x * mark_sz, y * mark_sz,
                      RGB(255,0,0), mark_sz, mark_sz, rnd_alpha);
        }
    }
    // clean-up
    ReleaseDC(hWnd, hdc);
}
// draws a [mark_w x mark_h] rectangle at (x,y) with alpha
void draw_mark(HDC hdc, int x, int y,
               COLORREF mark_clr, int mark_w, int mark_h, BYTE alpha)
{
    HDC hdcMem = CreateCompatibleDC(NULL);
    HBITMAP hbm = CreateCompatibleBitmap(hdc, mark_w, mark_h);
    HGDIOBJ hOldBmp = SelectObject(hdcMem, hbm);
    for(int x = 0; x < mark_w; ++x) {
        for(int y = 0; y < mark_h; ++y) {
            SetPixel(hdcMem, x, y, mark_clr);
        }
    }
    POINT marker_center{mark_w / 2, mark_h / 2};
    SetPixel(hdcMem, marker_center.x, marker_center.y, RGB(255, 255, 255));
    BLENDFUNCTION bf{};
    bf.BlendOp = AC_SRC_OVER;
    bf.BlendFlags = 0;
    bf.AlphaFormat = 0;             // ignore source per-pixel alpha and...
    bf.SourceConstantAlpha = alpha; // ...use the constant alpha provided instead
    AlphaBlend(hdc,
               x - marker_center.x, y - marker_center.y,
               mark_w, mark_h,
               hdcMem, 0, 0, mark_w, mark_h, bf);
    // clean-up
    SelectObject(hdcMem, hOldBmp);
    DeleteObject(hbm);
    DeleteDC(hdcMem);
}
EDIT - As I look more into it, here are additional issues I have noticed:
1. Output is normal when the AlphaBlend() destination is a memory DC, but not when it is a window DC. So the issue has to do with blitting directly to the screen.
2. The corrupt output is unrelated to the use of rand(). Replacing BYTE rnd_alpha = rand(); with ++alpha also produces somewhat similar corrupt output.
3. More interestingly, suspending the thread in the inner loop, e.g. with Sleep(some_duration), seems to reduce the corruption: the higher some_duration, the less the corruption. Here is a sample output:
The first output is generated by blitting to a memory DC first, then to the window. The rest go directly to the window. Notice how the corruption increases (i.e., the output becomes less random) as the thread suspension time decreases.
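These timing-dependent symptoms would be consistent with GDI's command batching deferring the blits; a quick diagnostic (an assumption to test, not a confirmed diagnosis) is to force each call out of the batch immediately:
// Diagnostic sketch only: flush GDI's command batch after each blit,
// so each AlphaBlend() reaches the screen before the next one is issued.
AlphaBlend(hdc,
           x - marker_center.x, y - marker_center.y,
           mark_w, mark_h,
           hdcMem, 0, 0, mark_w, mark_h, bf);
GdiFlush();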

GetPixel() not working correctly Windows API C++

I'm writing a program that reads each pixel of a window and stores it in an array of bytes as black and white, where each bit of a byte is one black/white value.
But GetPixel() doesn't seem to work the way I expected. Here's the part of the code that reads the pixels and stores them:
byte *colors = new byte[250000 / 8 + 1];
ZeroMemory(colors, 250000 / 8 + 1);
HDC hdc = GetDC(hwnd);
HDC memDC = CreateCompatibleDC(hdc);
HBITMAP memBitmap = CreateCompatibleBitmap(hdc, 500, 500);
SelectObject(memDC, memBitmap);
BitBlt(memDC, 0, 0, 500, 500, hdc, 0, 0, SRCCOPY);
for (int y = 0; y < 500; y++) {
    for (int x = 0; x < 500; x++) {
        COLORREF pxcolor = GetPixel(memDC, x, y);
        if (pxcolor == CLR_INVALID) {
            MessageBox(hwnd, _T("Oops..."), NULL, NULL);
        }
        int r = GetRValue(pxcolor);
        int g = GetGValue(pxcolor);
        int b = GetBValue(pxcolor);
        int average = (r + g + b) / 3;
        bool colorBW = average >= 128;
        int currentIndex = y * 500 + x;
        if (colorBW) {
            SetBit(colors, currentIndex);
        }
    }
}
ReleaseDC(hwnd, hdc);
DeleteDC(memDC);
DeleteObject(memBitmap);
delete[] colors;
SetBit():
// Sets or clears the bit at 'index' (most significant bit of a byte first).
inline VOID SetBit(byte *bytes, int index, bool state = true) {
    byte mask = 0b1000'0000 >> (index % 8);
    if (state)
        bytes[index / 8] |= mask;   // set the bit
    else
        bytes[index / 8] &= ~mask;  // clear the bit
}
Every pixel read by GetPixel() seems to give me 0x000000, or pure black.
My code used to call GetPixel() with hdc as the first parameter, without all the bitmap and memory DC stuff, but that way every pixel returned CLR_INVALID. I came across this question, and the above code is what I have after changing it to use memory DCs and bitmaps. But it just went from returning CLR_INVALID to 0x000000 for every pixel.
If I add this line before I use GetPixel():
SetPixel(memDC, x, y, RGB(255, 255, 255));
GetPixel() returns the correct result. Why is it functioning this way?
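As an aside, per-pixel GetPixel() is very slow even when it works; the whole capture can be read in one call with GetDIBits() instead. A sketch, assuming the question's 500x500 capture and that the HBITMAP returned by the earlier SelectObject call was kept (oldBitmap below):
#include <vector>

// GetDIBits() requires the bitmap not to be selected into a DC,
// so restore the DC's original bitmap first.
SelectObject(memDC, oldBitmap);

BITMAPINFO bmi = {};
bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth = 500;
bmi.bmiHeader.biHeight = -500; // negative height = top-down rows
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;

std::vector<BYTE> pixels(500 * 500 * 4); // BGRA, one row after another
GetDIBits(memDC, memBitmap, 0, 500, pixels.data(), &bmi, DIB_RGB_COLORS);

for (int y = 0; y < 500; y++) {
    for (int x = 0; x < 500; x++) {
        const BYTE *p = &pixels[(y * 500 + x) * 4];
        int average = (p[2] + p[1] + p[0]) / 3; // R, G, B channels
        if (average >= 128)
            SetBit(colors, y * 500 + x);
    }
}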

Given just an HBITMAP, how do you draw to it?

I'm an absolute beginner at this but have managed to blunder my way to 93% of where I want to be. Need help for the final 7%.
I've manually created a bitmap like so:
BITMAPINFO bmpInfo = { 0 };
BITMAPINFOHEADER bmpInfoHeader = { 0 };
BITMAP ImageBitmap;
void *bits;
bmpInfoHeader.biSize = sizeof(BITMAPINFOHEADER);
bmpInfoHeader.biBitCount = 32;
bmpInfoHeader.biClrImportant = 0;
bmpInfoHeader.biClrUsed = 0;
bmpInfoHeader.biCompression = BI_RGB;
bmpInfoHeader.biHeight = -IMAGE_DISPLAY_HEIGHT;
bmpInfoHeader.biWidth = IMAGE_DISPLAY_WIDTH;
bmpInfoHeader.biPlanes = 1;
bmpInfoHeader.biSizeImage = IMAGE_DISPLAY_WIDTH * IMAGE_DISPLAY_HEIGHT * 4;
ZeroMemory(&bmpInfo, sizeof(bmpInfo));
bmpInfo.bmiHeader = bmpInfoHeader;
bmpInfo.bmiColors->rgbBlue = 0;
bmpInfo.bmiColors->rgbGreen = 0;
bmpInfo.bmiColors->rgbRed = 0;
bmpInfo.bmiColors->rgbReserved = 0;
g_hImageBitmap = CreateDIBSection(hDC, &bmpInfo, DIB_RGB_COLORS, &bits, NULL, 0);
GetObject(g_hImageBitmap, sizeof(BITMAP), &ImageBitmap);
for (i = 0; i < IMAGE_DISPLAY_WIDTH; i++) {
    for (j = 0; j < IMAGE_DISPLAY_HEIGHT; j++) {
        ((unsigned char *)bits)[j*IMAGE_DISPLAY_WIDTH * 4 + i * 4]     = 255; // Blue
        ((unsigned char *)bits)[j*IMAGE_DISPLAY_WIDTH * 4 + i * 4 + 1] = 255; // Green
        ((unsigned char *)bits)[j*IMAGE_DISPLAY_WIDTH * 4 + i * 4 + 2] = 255; // Red
        ((unsigned char *)bits)[j*IMAGE_DISPLAY_WIDTH * 4 + i * 4 + 3] = 0;
    }
}
g_ImageBitmapPixels = bits;
and elsewhere WM_PAINT handles drawing this like so
hdc = BeginPaint(hwnd, &ps);
if (g_hImageBitmap != NULL) {
    GetObject(g_hImageBitmap, sizeof(BITMAP), &bm);
    hOldBitmap = (HBITMAP)SelectObject(hMemoryDC, g_hImageBitmap);
    BitBlt(hdc, UPPER_LEFT_IMAGE_X, UPPER_LEFT_IMAGE_Y,
           bm.bmWidth, bm.bmHeight, hMemoryDC, 0, 0, SRCCOPY);
    SelectObject(hMemoryDC, hOldBitmap);
}
Given the global variable g_ImageBitmapPixels other parts of the program can change and manipulate individual pixels in the bitmap, and when that happens, I use
InvalidateRect(hwnd, &RECT_ImageUpdate_Window, TRUE);
UpdateWindow(hwnd);
to update just that little portion of the screen. Works great. Hooray for me.
To get to the point, my question is, if a function has ONLY the HBITMAP (g_hImageBitmap) ... is there a way to call the Windows library functions to draw lines, text, circles, filled circles, to the HBITMAP? Like these functions
MoveToEx(hDC, x1, y1, NULL);
LineTo(hDC, x2, y2 );
HBRUSH hRedBrush = CreateSolidBrush(RGB(255, 0, 0));
FillRect(hDC, &somerectangle, hRedBrush);
except instead of needing a device context, they just take the HBITMAP?
I have a pointer to the actual pixels (g_ImageBitmapPixels) so I could just write my own line drawing, circle drawing, rectangle filling functions. Indeed I have done that, but it seems a shame not to use the functions Microsoft so kindly provides. Also, I'm not smart enough to make my own text-drawing functions.
Thank you for your help.
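For reference, GDI drawing functions only ever target a device context, so the usual pattern is to wrap the HBITMAP in a memory DC, draw through that, and clean up; a minimal sketch (DrawOntoBitmap is a hypothetical name):
// Hypothetical helper: draw onto a bare HBITMAP by wrapping it in a memory DC.
void DrawOntoBitmap(HBITMAP hBitmap)
{
    HDC hMemDC = CreateCompatibleDC(NULL);                    // screen-compatible memory DC
    HBITMAP hOldBmp = (HBITMAP)SelectObject(hMemDC, hBitmap); // bitmap becomes the DC's surface

    // Any GDI call that takes an HDC now draws into the bitmap's pixels.
    MoveToEx(hMemDC, 10, 10, NULL);
    LineTo(hMemDC, 100, 100);

    HBRUSH hRedBrush = CreateSolidBrush(RGB(255, 0, 0));
    RECT r = { 20, 20, 60, 60 };
    FillRect(hMemDC, &r, hRedBrush);
    DeleteObject(hRedBrush);

    TextOut(hMemDC, 10, 120, TEXT("drawn via memory DC"), 19);

    SelectObject(hMemDC, hOldBmp); // deselect before the bitmap is used elsewhere
    DeleteDC(hMemDC);
}
Note that a bitmap can be selected into only one DC at a time, so it must be deselected here before the WM_PAINT handler selects it into hMemoryDC again.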

How to extract bitmap from spritesheet in Win32 C++?

I'm trying to load individual cards from a spritesheet of cards based on suit and rank but I'm unsure of how to construct a new Bitmap object from cutting out Rectangle coordinates in the source image. I'm using <windows.h> currently and trying to find a simple way to accomplish this. I'm looking for something like this:
HBITMAP* twoOfHearts = CutOutFromImage(sourceImage, new Rectangle(0, 0, 76, 116));
From source: http://i.stack.imgur.com/WZ9Od.gif
Here's a function I played with the other week for more-or-less this same task. In my case, I wanted to return a HBRUSH that could be used with FillRect. In that instance, we still need to create a bitmap of the area of interest, before then going on to create a brush from it.
In your case, just return the dstBmp instead. spriteSheet is a global that has had a 256x256 spritesheet loaded. I've hardcoded the size of my sprites to 16x16, you'd need to change that to something like 81x117.
Here's some code that grabs a copy of the required area, and some more that uses these 'stamps' to draw a level map. That said, there are all kinds of problems with this approach: speed is one, and excessive work is another, which compounds the first. Finally, scrolling a window drawn like this produces artefacts.
// grabs a 16x16px section from the spriteSheet HBITMAP
HBRUSH getSpriteBrush(int col, int row)
{
    HDC memDC, dstDC, curDC;
    HBITMAP oldMemBmp, oldDstBmp, dstBmp;
    curDC = GetDC(HWND_DESKTOP);
    memDC = CreateCompatibleDC(NULL);
    dstDC = CreateCompatibleDC(NULL);
    dstBmp = CreateCompatibleBitmap(curDC, 16, 16);
    int xOfs, yOfs;
    xOfs = 16 * col;
    yOfs = 16 * row;
    oldMemBmp = (HBITMAP)SelectObject(memDC, spriteSheet);
    oldDstBmp = (HBITMAP)SelectObject(dstDC, dstBmp);
    BitBlt(dstDC, 0, 0, 16, 16, memDC, xOfs, yOfs, SRCCOPY);
    SelectObject(memDC, oldMemBmp);
    SelectObject(dstDC, oldDstBmp);
    ReleaseDC(HWND_DESKTOP, curDC);
    DeleteDC(memDC);
    DeleteDC(dstDC);
    HBRUSH result;
    result = CreatePatternBrush(dstBmp);
    DeleteObject(dstBmp);
    return result;
}
void drawCompoundSprite(int x, int y, HDC paintDC, char *tileIndexes, int numCols, int numRows)
{
    int mapCol, mapRow;
    HBRUSH curSprite;
    RECT curDstRect;
    for (mapRow=0; mapRow<numRows; mapRow++)
    {
        for (mapCol=0; mapCol<numCols; mapCol++)
        {
            int curSpriteIndex = tileIndexes[mapRow*numCols + mapCol];
            int spriteX, spriteY;
            spriteX = curSpriteIndex % 16;
            spriteY = curSpriteIndex / 16;
            curDstRect.left = x + 16*mapCol;
            curDstRect.top = y + 16*mapRow;
            curDstRect.right = curDstRect.left + 16;
            curDstRect.bottom = curDstRect.top + 16;
            curSprite = getSpriteBrush(spriteX, spriteY);
            FillRect(paintDC, &curDstRect, curSprite);
            DeleteObject(curSprite);
        }
    }
}
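Adapted to the question, returning the bitmap itself instead of a brush might look like this sketch (spriteSheet and the 76x116 card size are taken from the question; untested):
// Copies the given area of the loaded spritesheet into a new HBITMAP.
// The caller owns the returned bitmap and must DeleteObject() it.
HBITMAP CutOutFromImage(HBITMAP sheet, int x, int y, int w, int h)
{
    HDC curDC = GetDC(HWND_DESKTOP);
    HDC srcDC = CreateCompatibleDC(NULL);
    HDC dstDC = CreateCompatibleDC(NULL);
    HBITMAP dstBmp = CreateCompatibleBitmap(curDC, w, h);

    HBITMAP oldSrc = (HBITMAP)SelectObject(srcDC, sheet);
    HBITMAP oldDst = (HBITMAP)SelectObject(dstDC, dstBmp);
    BitBlt(dstDC, 0, 0, w, h, srcDC, x, y, SRCCOPY);
    SelectObject(srcDC, oldSrc);
    SelectObject(dstDC, oldDst);

    ReleaseDC(HWND_DESKTOP, curDC);
    DeleteDC(srcDC);
    DeleteDC(dstDC);
    return dstBmp;
}

// e.g. the two of hearts at the top-left of the sheet:
// HBITMAP twoOfHearts = CutOutFromImage(spriteSheet, 0, 0, 76, 116);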
The latter function has since been replaced with the following:
void drawCompoundSpriteFast(int x, int y, HDC paintDC, unsigned char *tileIndexes, int numCols, int numRows, pMapInternalData mData)
{
    int mapCol, mapRow;
    HDC memDC;
    HBITMAP oldBmp;
    memDC = CreateCompatibleDC(NULL);
    oldBmp = (HBITMAP)SelectObject(memDC, mData->spriteSheet);
    for (mapRow=0; mapRow<numRows; mapRow++)
    {
        for (mapCol=0; mapCol<numCols; mapCol++)
        {
            int curSpriteIndex = tileIndexes[mapRow*numCols + mapCol];
            int spriteX, spriteY;
            spriteX = curSpriteIndex % mData->tileWidth;
            spriteY = curSpriteIndex / mData->tileHeight;
            // Draw sprite as-is
            // BitBlt(paintDC, x+16*mapCol, y+16*mapRow,
            //        mData->tileWidth, mData->tileHeight,
            //        memDC,
            //        spriteX * 16, spriteY*16,
            //        SRCCOPY);
            // Draw sprite with magenta rgb(255,0,255) areas treated as transparent (empty)
            TransparentBlt(
                paintDC,
                x+mData->tileWidth*mapCol, y+mData->tileHeight*mapRow,
                mData->tileWidth, mData->tileHeight,
                memDC,
                spriteX * mData->tileWidth, spriteY*mData->tileHeight,
                mData->tileWidth, mData->tileHeight,
                RGB(255,0,255)
            );
        }
    }
    SelectObject(memDC, oldBmp);
    DeleteDC(memDC);
}

Win32 GDI Drawing a circle?

I am trying to draw a circle, and I am currently using the Ellipse() function.
I have the starting mouse coordinates, x1 and y1, and the ending coordinates, x2 and y2. As you can see, I am forcing y2 (temp_shape.bottom) to be y1 + (x2 - x1). This doesn't work as intended; I know the calculation is completely wrong, but any ideas on what is right?
Code below:
case WM_PAINT:
{
    hdc = BeginPaint(hWnd, &ps);
    // TODO: Add any drawing code here...
    RECT rect;
    GetClientRect(hWnd, &rect);
    HDC backbuffDC = CreateCompatibleDC(hdc);
    HBITMAP backbuffer = CreateCompatibleBitmap(hdc, rect.right, rect.bottom);
    int savedDC = SaveDC(backbuffDC);
    SelectObject(backbuffDC, backbuffer);
    HBRUSH hBrush = CreateSolidBrush(RGB(255,255,255));
    FillRect(backbuffDC, &rect, hBrush);
    DeleteObject(hBrush);
    //Brush and Pen colours
    SelectObject(backbuffDC, GetStockObject(DC_BRUSH));
    SetDCBrushColor(backbuffDC, RGB(255,0,0));
    SelectObject(backbuffDC, GetStockObject(DC_PEN));
    SetDCPenColor(backbuffDC, RGB(0,0,0));
    //Shape Coordinates
    temp_shape.left = x1;
    temp_shape.top = y1;
    temp_shape.right = x2;
    temp_shape.bottom = y2;
    //Draw Old Shapes
    //Rectangles
    for (int i = 0; i < current_rect_count; i++)
    {
        Rectangle(backbuffDC, rect_list[i].left, rect_list[i].top, rect_list[i].right, rect_list[i].bottom);
    }
    //Ellipses
    for (int i = 0; i < current_ellipse_count; i++)
    {
        Ellipse(backbuffDC, ellipse_list[i].left, ellipse_list[i].top, ellipse_list[i].right, ellipse_list[i].bottom);
    }
    if (mouse_down)
    {
        if (drawCircle)
        {
            temp_shape.right = y1+(x2-x1);
            Ellipse(backbuffDC, temp_shape.left, temp_shape.top, temp_shape.right, temp_shape.bottom);
        }
        if (drawRect)
        {
            Rectangle(backbuffDC, temp_shape.left, temp_shape.top, temp_shape.right, temp_shape.bottom);
        }
        if (drawEllipse)
        {
            Ellipse(backbuffDC, temp_shape.left, temp_shape.top, temp_shape.right, temp_shape.bottom);
        }
    }
    BitBlt(hdc, 0, 0, rect.right, rect.bottom, backbuffDC, 0, 0, SRCCOPY);
    RestoreDC(backbuffDC, savedDC);
    DeleteObject(backbuffer);
    DeleteDC(backbuffDC);
    EndPaint(hWnd, &ps);
}
break;
If you want Ellipse() to draw a perfectly round circle, you need to give it coordinates for a perfectly square shape, not a rectangular shape.
Assuming x1,y1 are the starting coordinates of the dragging and x2,y2 are the current mouse coordinates, then try this:
//Shape Coordinates
temp_shape.left = min(x1, x2);
temp_shape.top = min(y1, y2);
temp_shape.right = max(x1, x2);
temp_shape.bottom = max(y1, y2);
...
if (drawCircle)
{
    int length = min(abs(x2-x1), abs(y2-y1));
    if (x2 < x1)
        temp_shape.left = temp_shape.right - length;
    else
        temp_shape.right = temp_shape.left + length;
    if (y2 < y1)
        temp_shape.top = temp_shape.bottom - length;
    else
        temp_shape.bottom = temp_shape.top + length;
    Ellipse(backbuffDC, temp_shape.left, temp_shape.top, temp_shape.right, temp_shape.bottom);
}
I have worked out a calculation which works better. Pasted below for anyone else wanting the same.
if (drawSquare)
{
    int xdiff = abs(x2-x1);
    int ydiff = abs(y2-y1);
    if (xdiff > ydiff)
    {
        if (y2 > y1)
            temp_shape.bottom = y1 + xdiff;
        else
            temp_shape.bottom = y1 - xdiff;
    }
    else
    {
        if (x2 > x1)
            temp_shape.right = x1 + ydiff;
        else
            temp_shape.right = x1 - ydiff;
    }
    Rectangle(backbuffDC, temp_shape.left, temp_shape.top, temp_shape.right, temp_shape.bottom);
}
Your code is unnecessarily complex, with temporary DCs and back buffers, and you recreate GDI brushes in every WM_PAINT. But that's beside the point. Here is the question: why do you do this?
temp_shape.right = y1 + (x2 - x1); // basing a horizontal coordinate on a vertical one?
What framework stack is this on top of? If .NET, why don't you use the built-in double buffering?
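On the brush-recreation point, the usual fix is to create long-lived GDI objects once and reuse them across paints; a sketch in the same window-procedure fragment style as above (names illustrative):
static HBRUSH g_hWhiteBrush; // created once, reused every paint

case WM_CREATE:
    g_hWhiteBrush = CreateSolidBrush(RGB(255, 255, 255));
    break;

case WM_PAINT:
    // ... FillRect(backbuffDC, &rect, g_hWhiteBrush); instead of
    // creating and deleting a brush on every paint ...
    break;

case WM_DESTROY:
    DeleteObject(g_hWhiteBrush);
    PostQuitMessage(0);
    break;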