Calculating the width of a bold string - MFC

Can anyone help me? I want to calculate the width of a bold string. I have calculated it using the code below, but it reports about 7 pixels more than the actual width.
How can I get rid of the extra pixels?
Example: for the bold string 'Intermediatery Bank:' this code returns 147 pixels, which is about 7 pixels too many.
int CPrintableInvoice::GetFormattedStringWidth(const CString& txt)
{
    if (txt.IsEmpty())
        return 0;

    CFont *pOldF, *pF = GetFont();
    CClientDC dc(this);

    LOGFONT lf;
    memset(&lf, 0, sizeof(LOGFONT));
    lf.lfWeight = FW_BOLD;

    CFont newFont;
    VERIFY(newFont.CreatePointFontIndirect(&lf, &dc));
    pOldF = dc.SelectObject(&newFont);

    CRect r;
    dc.DrawText(txt, &r, DT_SINGLELINE | DT_CALCRECT);
    int wid = r.Width();

    dc.SelectObject(pOldF);
    return wid;
}
Please help me, I am new to MFC.
Thanks,
Hareesh

Try to call:
CSize txtSize = dc.GetTextExtent(txt);
after
pOldF = dc.SelectObject(&newFont);
Hope it helps,
Vinicius
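A minimal sketch of that suggestion applied to the original function (hedged: it also initializes the LOGFONT from the window's current font rather than a zeroed one, since a zeroed LOGFONT requests a default-sized font with no face name, which can easily account for a few pixels of difference):
int CPrintableInvoice::GetFormattedStringWidth(const CString& txt)
{
    if (txt.IsEmpty())
        return 0;

    CClientDC dc(this);

    // Start from the font the window actually uses, then make it bold,
    // so the measurement matches what is drawn. GetFont() may return NULL
    // if no font has been set, in which case the LOGFONT stays zeroed.
    LOGFONT lf;
    memset(&lf, 0, sizeof(LOGFONT));
    CFont* pWndFont = GetFont();
    if (pWndFont)
        pWndFont->GetLogFont(&lf);
    lf.lfWeight = FW_BOLD;

    CFont boldFont;
    VERIFY(boldFont.CreateFontIndirect(&lf));  // lfHeight from GetLogFont is already in device units
    CFont* pOldFont = dc.SelectObject(&boldFont);

    CSize txtSize = dc.GetTextExtent(txt);     // string extent in pixels
    dc.SelectObject(pOldFont);
    return txtSize.cx;
}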

Related

Calculate the font height from its size in C++?

I am trying to verify the relationship between the CFont height and point size with an example:
int main(int argc, char* argv[])
{
    int myVariableFontHeight = 90;
    CFont* font = new CFont();

    LOGFONT lf;
    memset(&lf, 0, sizeof(LOGFONT));
    lf.lfHeight = myVariableFontHeight;
    lf.lfWeight = FW_BOLD;
    lf.lfCharSet = DEFAULT_CHARSET;
    _tcscpy_s(lf.lfFaceName, _T("Arial Unicode MS"));

    font->CreatePointFontIndirect(&lf);
    font->GetLogFont(&lf);
    int fontHeight = lf.lfHeight;

    HWND console = GetConsoleWindow();
    HDC dc = GetDC(console);
    int nFontSize = -::MulDiv(lf.lfHeight, 72, ::GetDeviceCaps(dc, LOGPIXELSY));
    ReleaseDC(console, dc);

    delete font;
    return 0;
}
And the result is always nFontSize = myVariableFontHeight/10. What is this factor of 10? Where does it come from? Can I calculate the font height from a given point size?
Thanks
It's in the MFC source code. It's in the documentation. The very first line of the online documentation for CFont::CreatePointFontIndirect states:
This function is the same as CreateFontIndirect except that the
lfHeight member of the LOGFONT is interpreted in tenths of a point
rather than device units.
So, if you want to create a 10 pt font, you set lf.lfHeight to 100.
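So the factor of 10 comes from CreatePointFontIndirect itself: it treats lfHeight as tenths of a point and converts that to device units for you. A small hedged sketch of the conversions involved (the helper names are made up for illustration):
#include <windows.h>

// For CFont::CreatePointFontIndirect, lfHeight is simply tenths of a point.
int pointSizeToTenths(int pointSize)
{
    return pointSize * 10;   // e.g. 10 pt -> lfHeight = 100
}

// For CreateFontIndirect (and for the LOGFONT read back with GetLogFont),
// lfHeight is in device units; negative means character height without internal leading.
int pointSizeToDeviceHeight(HDC dc, int pointSize)
{
    return -MulDiv(pointSize, GetDeviceCaps(dc, LOGPIXELSY), 72);
}

int deviceHeightToPointSize(HDC dc, int lfHeight)
{
    return MulDiv(-lfHeight, 72, GetDeviceCaps(dc, LOGPIXELSY));
}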

How to extract bitmap from spritesheet in Win32 C++?

I'm trying to load individual cards from a spritesheet of cards based on suit and rank, but I'm unsure how to construct a new bitmap by cutting out a rectangular region of the source image. I'm using <windows.h> currently and am trying to find a simple way to accomplish this. I'm looking for something like this:
HBITMAP* twoOfHearts = CutOutFromImage(sourceImage, new Rectangle(0, 0, 76, 116));
From source: http://i.stack.imgur.com/WZ9Od.gif
Here's a function I played with the other week for more or less this same task. In my case, I wanted to return an HBRUSH that could be used with FillRect. In that instance, we still need to create a bitmap of the area of interest before going on to create a brush from it.
In your case, just return the dstBmp instead (a sketch of that variant appears at the end of this answer). spriteSheet is a global that has had a 256x256 spritesheet loaded. I've hardcoded the size of my sprites to 16x16; you'd need to change that to something like 81x117.
Here's some code that grabs a copy of the required area, and some more that uses these 'stamps' to draw a level map. That said, there are all kinds of problems with this approach. Speed is one; excessive work is another, and it feeds into the first. Finally, scrolling a window drawn like this produces artefacts.
// grabs a 16x16px section from the spriteSheet HBITMAP
HBRUSH getSpriteBrush(int col, int row)
{
    HDC memDC, dstDC, curDC;
    HBITMAP oldMemBmp, oldDstBmp, dstBmp;

    curDC = GetDC(HWND_DESKTOP);
    memDC = CreateCompatibleDC(NULL);
    dstDC = CreateCompatibleDC(NULL);
    dstBmp = CreateCompatibleBitmap(curDC, 16, 16);

    int xOfs, yOfs;
    xOfs = 16 * col;
    yOfs = 16 * row;

    oldMemBmp = (HBITMAP)SelectObject(memDC, spriteSheet);
    oldDstBmp = (HBITMAP)SelectObject(dstDC, dstBmp);
    BitBlt(dstDC, 0, 0, 16, 16, memDC, xOfs, yOfs, SRCCOPY);
    SelectObject(memDC, oldMemBmp);
    SelectObject(dstDC, oldDstBmp);

    ReleaseDC(HWND_DESKTOP, curDC);
    DeleteDC(memDC);
    DeleteDC(dstDC);

    HBRUSH result;
    result = CreatePatternBrush(dstBmp);
    DeleteObject(dstBmp);
    return result;
}
void drawCompoundSprite(int x, int y, HDC paintDC, char *tileIndexes, int numCols, int numRows)
{
    int mapCol, mapRow;
    HBRUSH curSprite;
    RECT curDstRect;

    for (mapRow = 0; mapRow < numRows; mapRow++)
    {
        for (mapCol = 0; mapCol < numCols; mapCol++)
        {
            int curSpriteIndex = tileIndexes[mapRow*numCols + mapCol];
            int spriteX, spriteY;
            spriteX = curSpriteIndex % 16;
            spriteY = curSpriteIndex / 16;

            curDstRect.left = x + 16*mapCol;
            curDstRect.top = y + 16*mapRow;
            curDstRect.right = curDstRect.left + 16;
            curDstRect.bottom = curDstRect.top + 16;

            curSprite = getSpriteBrush(spriteX, spriteY);
            FillRect(paintDC, &curDstRect, curSprite);
            DeleteObject(curSprite);
        }
    }
}
The latter function has since been replaced with the following:
void drawCompoundSpriteFast(int x, int y, HDC paintDC, unsigned char *tileIndexes, int numCols, int numRows, pMapInternalData mData)
{
    int mapCol, mapRow;
    HBRUSH curSprite;
    RECT curDstRect;
    HDC memDC;
    HBITMAP oldBmp;

    memDC = CreateCompatibleDC(NULL);
    oldBmp = (HBITMAP)SelectObject(memDC, mData->spriteSheet);

    for (mapRow = 0; mapRow < numRows; mapRow++)
    {
        for (mapCol = 0; mapCol < numCols; mapCol++)
        {
            int curSpriteIndex = tileIndexes[mapRow*numCols + mapCol];
            int spriteX, spriteY;
            spriteX = curSpriteIndex % mData->tileWidth;
            spriteY = curSpriteIndex / mData->tileHeight;

            // Draw sprite as-is
            // BitBlt(paintDC, x+16*mapCol, y+16*mapRow,
            //        mData->tileWidth, mData->tileHeight,
            //        memDC,
            //        spriteX * 16, spriteY*16,
            //        SRCCOPY);

            // Draw sprite with magenta rgb(255,0,255) areas treated as transparent (empty)
            TransparentBlt(
                paintDC,
                x + mData->tileWidth*mapCol, y + mData->tileHeight*mapRow,
                mData->tileWidth, mData->tileHeight,
                memDC,
                spriteX * mData->tileWidth, spriteY * mData->tileHeight,
                mData->tileWidth, mData->tileHeight,
                RGB(255,0,255)
            );
        }
    }

    SelectObject(memDC, oldBmp);
    DeleteDC(memDC);
}
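To answer the original question more directly, here is a hedged sketch of the HBITMAP-returning variant described above, essentially getSpriteBrush stopped one step earlier. The name CutOutFromImage comes from the question and is purely illustrative:
#include <windows.h>

// Hedged sketch: copy a rectangular region of sourceImage into a new HBITMAP.
// The caller owns the returned bitmap and must DeleteObject() it when done.
HBITMAP CutOutFromImage(HBITMAP sourceImage, RECT area)
{
    int w = area.right - area.left;
    int h = area.bottom - area.top;

    HDC screenDC = GetDC(HWND_DESKTOP);
    HDC srcDC = CreateCompatibleDC(screenDC);
    HDC dstDC = CreateCompatibleDC(screenDC);
    HBITMAP dstBmp = CreateCompatibleBitmap(screenDC, w, h);

    HBITMAP oldSrc = (HBITMAP)SelectObject(srcDC, sourceImage);
    HBITMAP oldDst = (HBITMAP)SelectObject(dstDC, dstBmp);
    BitBlt(dstDC, 0, 0, w, h, srcDC, area.left, area.top, SRCCOPY);
    SelectObject(srcDC, oldSrc);
    SelectObject(dstDC, oldDst);

    DeleteDC(srcDC);
    DeleteDC(dstDC);
    ReleaseDC(HWND_DESKTOP, screenDC);
    return dstBmp;
}

// Usage, matching the question's example (two of hearts at 0,0, 76x116):
// RECT r = { 0, 0, 76, 116 };
// HBITMAP twoOfHearts = CutOutFromImage(cardSheet, r);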

Winapi get string width in pixels

I'm trying to create a method that gives me the width of a string in pixels.
My code so far:
inline int getTextWidth(HWND hwnd, char* text) {
    SIZE textSize;
    GetTextExtentPoint32(GetDC(hwnd), text, strlen(text), &textSize);
    return ?;
}
I know that I should use LPtoDP (MSDN), but it wants POINTs as parameters and not the SIZE that GetTextExtentPoint32 returns.
How do I convert this?
The SIZE structure contains both a height and a width. Since you only care about the width, you apparently want LPtoDP(textSize.cx);.
I solved it using another method. For everyone who is interested, this is my solution:
int getStringWidth(char *text, HFONT font) {
    HDC dc = GetDC(NULL);
    HFONT oldFont = (HFONT)SelectObject(dc, font);

    RECT rect = { 0, 0, 0, 0 };
    DrawText(dc, text, strlen(text), &rect, DT_CALCRECT | DT_NOPREFIX | DT_SINGLELINE);
    int textWidth = abs(rect.right - rect.left);

    SelectObject(dc, oldFont);
    ReleaseDC(NULL, dc);   // the DC came from GetDC, so release it rather than delete it
    return textWidth;
}
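For example, the helper can be fed whatever font the target window is currently using (a hedged usage sketch; hwnd is a hypothetical window handle):
// Hedged usage sketch: measure a string in the window's current font.
// WM_GETFONT may return NULL if the window uses the system font.
HFONT windowFont = (HFONT)SendMessage(hwnd, WM_GETFONT, 0, 0);
int width = getStringWidth("Hello, world", windowFont);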

MFC C++ Screenshot

I have an application that has drawn a grid using CDC (it has text, rectangle, and bitmaps). I want to take a screenshot of that finished grid when it is saved and use that screenshot as a "preview" for the file.
How can I take a screenshot of my application and save it?
Thank you,
Answer is here:
void CScreenShotDlg::OnPaint()
{
    // device context for painting
    CPaintDC dc(this);

    // Get the window handle of the calculator application.
    HWND hWnd = ::FindWindow( 0, _T( "Calculator" ));

    // Take screenshot.
    PrintWindow( hWnd,
                 dc.GetSafeHdc(),
                 0 );
}
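If the goal is to save the capture rather than just paint it, the same PrintWindow call can target a memory DC, and CImage (from <atlimage.h>) can then write the bitmap to disk. A hedged sketch; SaveWindowPreview and the file name are made-up examples:
// Hedged sketch: capture this window into a bitmap via PrintWindow and save it.
// Requires <atlimage.h> for CImage.
void CScreenShotDlg::SaveWindowPreview()
{
    CRect rc;
    GetWindowRect(&rc);

    CClientDC dc(this);
    CDC memDC;
    memDC.CreateCompatibleDC(&dc);

    CBitmap bmp;
    bmp.CreateCompatibleBitmap(&dc, rc.Width(), rc.Height());
    CBitmap* pOldBmp = memDC.SelectObject(&bmp);

    // Render the whole window (frame included) into the memory DC.
    ::PrintWindow(GetSafeHwnd(), memDC.GetSafeHdc(), 0);

    memDC.SelectObject(pOldBmp);

    CImage img;
    img.Attach((HBITMAP)bmp.Detach());
    img.Save(_T("preview.bmp"));   // illustrative path
}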
Ultimately I ended up doing it this way because I wanted to capture even the hidden parts of the window (since the content extends beyond the screen and requires scrolling):
CDC* WindowToCaptureDC = AfxGetMainWnd()->GetWindowDC();
CDC CaptureDC;
CDC MemDC;
MemDC.CreateCompatibleDC(WindowToCaptureDC);
CaptureDC.CreateCompatibleDC(WindowToCaptureDC);
CBitmap CaptureBmp;
CBitmap ResizeBmp;
int pWidth = grid.tableWidth + grid.marginLeft*2;
int pHeight = grid.tableHeight + grid.marginBottom;
CaptureBmp.CreateCompatibleBitmap( WindowToCaptureDC, pWidth, pHeight);
CaptureDC.SelectObject(&CaptureBmp);
CBrush brush(RGB(255, 255, 255));
CaptureDC.SelectObject(&brush);
CaptureDC.Rectangle(0, 0, pWidth, pHeight);
///Drew items into CaptureDC like I did for OnDraw HERE///
double width = //desired width;
double height = //desired height;
//maintain aspect ratio
if(pWidth != width || pHeight != height)
{
    double w = width / pWidth;
    double h = height / pHeight;
    if(w < h)
        height = height * w;
    else
        width = width * h;
}
ResizeBmp.CreateCompatibleBitmap(WindowToCaptureDC, width, height);
MemDC.SelectObject(&ResizeBmp);
MemDC.StretchBlt(0, 0, width, height, &CaptureDC, 0, 0, pWidth, pHeight, SRCCOPY);
CImage TempImageObj;
TempImageObj.Attach((HBITMAP)ResizeBmp.Detach());
CString filePath = _T("LOCATION\\image.bmp");
TempImageObj.Save(filePath);

Obtaining kerning information

How can I obtain the kerning information needed to call GetKerningPairs in GDI? The documentation states that
The number of pairs in the lpkrnpair array. If the font has more than
nNumPairs kerning pairs, the function returns an error.
However, I do not know how many pairs to pass in, and I don't see a way to query for it.
EDIT #2
Here is my full application, which I have also tried; it always produces 0 for the number of pairs, for any font. GetLastError also always returns 0.
#include <windows.h>
#include <Gdiplus.h>
#include <iostream>
using namespace std;
using namespace Gdiplus;
int main(void)
{
    GdiplusStartupInput gdiplusStartupInput;
    ULONG_PTR gdiplusToken;
    GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL);

    Font* myFont = new Font(L"Times New Roman", 12);
    Bitmap* bitmap = new Bitmap(256, 256, PixelFormat32bppARGB);
    Graphics* g = new Graphics(bitmap);

    //HDC hdc = g->GetHDC();
    HDC hdc = GetDC(NULL);
    SelectObject(hdc, myFont->Clone());

    DWORD numberOfKerningPairs = GetKerningPairs(hdc, INT_MAX, NULL);
    cout << GetLastError() << endl;
    cout << numberOfKerningPairs << endl;

    GdiplusShutdown(gdiplusToken);
    return 0;
}
EDIT
I tried to do the following, however, it still gave me 0.
Font* myFont = new Font(L"Times New Roman", 10);
Bitmap* bitmap = new Bitmap(256, 256, PixelFormat32bppARGB);
Graphics* g = new Graphics(bitmap);
SelectObject(g->GetHDC(), myFont);
//DWORD numberOfKerningPairs = GetKerningPairs( g->GetHDC(), -1, NULL );
DWORD numberOfKerningPairs = GetKerningPairs( g->GetHDC(), INT_MAX, NULL );
The problem lies in the fact that you are passing a Gdiplus::Font, not an HFONT, to SelectObject. You need to convert Font* myFont into an HFONT, then pass that HFONT to SelectObject.
First, to convert a Gdiplus::Font into an HFONT, you need to get the LOGFONT out of the Gdiplus::Font. Once you do this, the rest is simple. The working solution to get the number of kerning pairs is:
Font* gdiFont = new Font(L"Times New Roman", 12);
Bitmap* bitmap = new Bitmap(256, 256, PixelFormat32bppARGB);
Graphics* g = new Graphics(bitmap);

LOGFONTA logFont;
gdiFont->GetLogFontA(g, &logFont);
HFONT hfont = CreateFontIndirectA(&logFont);

HDC hdc = GetDC(NULL);
SelectObject(hdc, hfont);
DWORD numberOfKerningPairs = GetKerningPairs(hdc, INT_MAX, NULL);
As you can tell, the only functional change I made was in how the font is created.
You first call it with the third parameter set to NULL, in which case it returns the number of kerning pairs for the font. You then allocate memory, and call it again passing that buffer:
int num_pairs = GetKerningPairs(your_dc, -1, NULL);
KERNINGPAIR *pairs = (KERNINGPAIR *)malloc(sizeof(*pairs) * num_pairs);
GetKerningPairs(your_dc, num_pairs, pairs);
Edit: I did a quick test (using MFC, not GDI+) and got what seemed like reasonable results. The code I used was:
CFont font;
font.CreatePointFont(120, "Times New Roman", pDC);
pDC->SelectObject(&font);
int pairs = pDC->GetKerningPairs(1000, NULL);
CString result;
result.Format("%d", pairs);
pDC->TextOut(10, 10, result);
This printed out 116 as the result.
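Putting the two-call pattern together with an explicit HFONT, here is a minimal plain-Win32 sketch (the screen DC and font choice are just for illustration):
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    // Hedged sketch: query kerning pairs for a GDI font using the two-call pattern.
    HDC hdc = GetDC(NULL);                               // screen DC, for illustration
    HFONT font = CreateFontA(-32, 0, 0, 0, FW_NORMAL, FALSE, FALSE, FALSE,
                             DEFAULT_CHARSET, OUT_DEFAULT_PRECIS, CLIP_DEFAULT_PRECIS,
                             DEFAULT_QUALITY, DEFAULT_PITCH, "Times New Roman");
    HFONT oldFont = (HFONT)SelectObject(hdc, font);

    // First call: NULL buffer -> returns the total number of pairs for the font.
    DWORD numPairs = GetKerningPairsA(hdc, 0, NULL);
    printf("%lu kerning pairs\n", numPairs);

    if (numPairs > 0)
    {
        // Second call: fetch that many pairs into an allocated buffer.
        KERNINGPAIR *pairs = (KERNINGPAIR *)malloc(sizeof(*pairs) * numPairs);
        GetKerningPairsA(hdc, numPairs, pairs);
        // e.g. pairs[0].wFirst, pairs[0].wSecond, pairs[0].iKernAmount
        free(pairs);
    }

    SelectObject(hdc, oldFont);
    DeleteObject(font);
    ReleaseDC(NULL, hdc);
    return 0;
}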