DirectX 9 font update with int to LPCTSTR conversion - C++

I'm trying to show the score on the screen. The following code works fine:
g_Font = NULL;
D3DXFONT_DESC f = {fontSize,
                   0,
                   400,
                   0,
                   false,
                   DEFAULT_CHARSET,
                   OUT_TT_PRECIS,
                   CLIP_DEFAULT_PRECIS,
                   DEFAULT_PITCH,
                   fontName};
fontDesc = f;
fontPosition.top = top;
fontPosition.left = left;
fontPosition.right = right;
fontPosition.bottom = bottom;
text = t;
D3DXCreateFontIndirect(device, &fontDesc, &g_Font);
The following part is rendered every frame:
g_Font->DrawText(NULL,
                 text,
                 -1,
                 &fontPosition,
                 DT_CENTER,
                 0xffffffff); // draw text
What I want to do is update the text at runtime. I simply update the text variable, since the drawing code runs every frame, but it doesn't work. A plain string works, but the following construction doesn't:
const size_t buflen = 100;
TCHAR buf[buflen];
_sntprintf(buf, buflen - 1, TEXT("Point: %d"), point);
text = (LPCTSTR)buf;
I have tried almost every solution I could find online, but none of them work. I can see that the integer is converted successfully, but there are garbage characters in the subsequent rendering. Any solutions?

The code you posted is incomplete, but I will still try to provide an answer that should help you solve the problem.
The first issue is that I think you're mixing concepts and treating DrawText as if it were a UI element from WinForms or something similar. Every time you want to update the text, DrawText needs to be called: it does not store a pointer to the buffer you pass and automatically update the text when that buffer changes, see the documentation here. If you want to add a "label wrapper", it will need to call DrawText itself, most likely in a Render method, between the calls to BeginScene and EndScene.
As for the garbage characters: if your string is converted properly, are you sure you Clear your render target before drawing on it again?
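A minimal sketch of that approach, with a hypothetical RenderScore helper (the function name and parameters are illustrative, not taken from your code), called once per frame from the render loop between BeginScene and EndScene:
#include <d3dx9.h>
#include <tchar.h>

// Hypothetical helper, called once per frame between BeginScene and EndScene.
void RenderScore(ID3DXFont* font, RECT* position, int point)
{
    const size_t buflen = 100;
    TCHAR buf[buflen];
    _sntprintf(buf, buflen - 1, TEXT("Point: %d"), point);
    buf[buflen - 1] = TEXT('\0'); // make sure the string is terminated

    // DrawText reads the buffer right now; it keeps no reference to it afterwards.
    font->DrawText(NULL, buf, -1, position, DT_CENTER, 0xffffffff);
}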

Related

How to efficiently render a small sprite in Direct3D / C++ on a large Window (DWM)?

I'm implementing a custom cursor in DirectX/C++ that is drawn on a transparent window on top of the desktop.
I have stripped it down to a basic example. The magic of executing Direct3D on the DWM is based on this article on Code Project.
The problem is that when using a very big window (e.g. 2560x1440) as a base for the DirectX rendering, it produces up to 40% GPU load according to GPU-Z, even if the only thing I am displaying is a static 128x128 sprite, or nothing at all. If I use an area like 256x256, the GPU load is around 1-3%.
Basically this loop would make the GPU go crazy on a big window while it's smooth sailing on a small window:
while (true) {
    g_pD3DDevice->PresentEx(NULL, NULL, NULL, NULL, NULL);
    Sleep(10);
}
So it seems like it re-renders the whole screen whether anything changes or not, am I right? Can I tell Direct3D to only re-render specific parts that need to be updated?
EDIT:
I have found a way to tell Direct3D to render only a specific part by providing RGNDATA dirty-region information to PresentEx. It is now at 1% GPU load instead of 20-40%.
std::vector<RECT> dirtyRects;
// Fill dirtyRects with previous and new cursor boundaries

DWORD size = dirtyRects.size() * sizeof(RECT) + sizeof(RGNDATAHEADER);
RGNDATA *rgndata = (RGNDATA *)HeapAlloc(GetProcessHeap(), 0, size);

RECT* pRectInitial = (RECT*)rgndata->Buffer;
RECT rectBounding = dirtyRects[0];
for (size_t i = 0; i < dirtyRects.size(); i++)
{
    RECT rectCurrent = dirtyRects[i];
    rectBounding.left = min(rectBounding.left, rectCurrent.left);
    rectBounding.right = max(rectBounding.right, rectCurrent.right);
    rectBounding.top = min(rectBounding.top, rectCurrent.top);
    rectBounding.bottom = max(rectBounding.bottom, rectCurrent.bottom);
    *pRectInitial = dirtyRects[i];
    pRectInitial++;
}

// Prepare the RGNDATA header
RGNDATAHEADER header;
header.dwSize = sizeof(RGNDATAHEADER);
header.iType = RDH_RECTANGLES;
header.nCount = dirtyRects.size();
header.nRgnSize = dirtyRects.size() * sizeof(RECT);
header.rcBound.left = rectBounding.left;
header.rcBound.top = rectBounding.top;
header.rcBound.right = rectBounding.right;
header.rcBound.bottom = rectBounding.bottom;
rgndata->rdh = header;

// Update the display, restricting the present to the dirty region
g_pD3DDevice->PresentEx(NULL, NULL, NULL, rgndata, 0);

HeapFree(GetProcessHeap(), 0, rgndata); // free the region data again
But there is one thing I do not understand. It only drops to 1% GPU load if I also add the following:
SetLayeredWindowAttributes(hWnd, 0, 180, LWA_ALPHA);
I want it transparent anyway so that's fine, but instead I get some weird tearing effects after a while. It is more noticeable the faster I move the cursor. Where does that come from? It looks like the image provided. I am sure I have set the dirty rects perfectly accurately.
The above tearing seems to differ from computer to computer.

MFC BitBlt and SetDIBits vs. SetBitmapBits

I have a bitmap stored as a BGRA array of bytes. This is the code I've been using to paint the bitmap:
CDC *dispDC = new CDC();
dispDC->CreateCompatibleDC(pDC);
CBitmap *dispBMP = new CBitmap();
dispBMP->CreateCompatibleBitmap(pDC, sourceImage->GetWidth(), sourceImage->GetHeight());
dispDC->SelectObject(this->dispBMP);
The actual copying of the pixels in the translatedImage array happens with this:
dispBMP->SetBitmapBits(sourceImage->GetArea() * 4, translatedImage);
Then after some more processing I call pDC->StretchBlt with dispDC as the source CDC. This works fine when logged in locally because the display is also set to 32bpp.
Once I log in with Remote Desktop, the display goes to 16bpp and the image is mangled. The culprit is SetBitmapBits; i.e. for it to work, I have to properly fill translatedImage with the 16bpp version of what I want to show. Rather than do this myself, I searched the documentation and found SetDIBits which sounds like it does what I want:
The SetDIBits function sets the pixels in a compatible bitmap (DDB) using the color data found in the specified DIB.
In my case, the DIB is the 32bpp RGBA array, and the DDB is dispBMP which I create with CreateCompatibleBitmap.
So instead of my call to SetBitmapBits, this is what I did:
BITMAPINFO info;
ZeroMemory(&info, sizeof(BITMAPINFO));
info.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
info.bmiHeader.biBitCount = 32;
info.bmiHeader.biPlanes = 1;
info.bmiHeader.biCompression = BI_RGB;
info.bmiHeader.biSizeImage = sourceImage->GetArea()*4;
info.bmiHeader.biWidth = sourceImage->GetWidth();
info.bmiHeader.biHeight = sourceImage->GetHeight();
info.bmiHeader.biClrUsed = 0;
int r = SetDIBits(pDC->GetSafeHdc(), (HBITMAP)dispBMP,
0, sourceImage->GetHeight(), translatedImage,
&info, DIB_PAL_COLORS);
However, r is always zero and, naturally, I get nothing but black in my window. What is wrong with the code?
According to the documentation for SetDIBits:
The bitmap identified by the hbmp parameter must not be selected into a device context when the application calls this function.
In your example code you select it into a device context right after creating it, so presumably that's why SetDIBits is failing.
Ross Ridge was correct in pointing out the code order mistake. However, this didn't solve the problem.
The problem was in the parameters I was passing. I am new to C++ and MFC and often forget all the "operators" which can act on types to automatically convert them.
Previously I had this:
int r = SetDIBits(pDC->GetSafeHdc(), (HBITMAP)dispBMP,
0, sourceImage->GetHeight(), translatedImage,
&info, DIB_PAL_COLORS);
The correct call is this:
int r = SetDIBits(*pDC, *dispBMP,
0, sourceImage->GetHeight(), translatedImage,
&info, DIB_PAL_COLORS);
(Note I pass dereferenced pointers in the first two parameters.) Everything else was correct, including the counter-intuitive DIB_PAL_COLORS flag for a bitmap which has no palette.
After obviously missing some key points in the documentation, I reread it and found this, which has sample code showing that I was simply passing the parameters incorrectly.
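For reference, here is a minimal sketch of the whole corrected sequence as I understand it (width, height and translatedImage stand in for the real values from sourceImage; the flag mirrors the call above):
CDC* dispDC = new CDC();
dispDC->CreateCompatibleDC(pDC);

CBitmap* dispBMP = new CBitmap();
dispBMP->CreateCompatibleBitmap(pDC, width, height);

BITMAPINFO info = {};
info.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
info.bmiHeader.biWidth = width;
info.bmiHeader.biHeight = height;
info.bmiHeader.biPlanes = 1;
info.bmiHeader.biBitCount = 32;
info.bmiHeader.biCompression = BI_RGB;
info.bmiHeader.biSizeImage = width * height * 4;

// Copy the DIB pixels in while the bitmap is NOT selected into any DC,
// passing the dereferenced MFC wrappers so their HDC/HBITMAP conversion
// operators are used.
int r = SetDIBits(*pDC, *dispBMP, 0, height, translatedImage, &info, DIB_PAL_COLORS);

// Only now select the bitmap into the memory DC and blit it.
CBitmap* pOldBmp = dispDC->SelectObject(dispBMP);
pDC->StretchBlt(0, 0, width, height, dispDC, 0, 0, width, height, SRCCOPY);
dispDC->SelectObject(pOldBmp);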

SDL Resetting Surfaces

I draw some text to a surface (using SDL_ttf) and then I want to change the text on the surface. If I just redraw the surface, the old text does not go away. I have looked at several forum posts on how to fix the problem, but I just cannot seem to figure it out. In particular, I cannot understand why this solution does not work (the code is long, so this just gives the essentials):
Declared in the class file:
SDL_Surface* box; // These two are initialised to the
SDL_Surface* boxCopy; // same image
At the start of my render function:
*box = *boxCopy; // Reset box surface
My understanding of pointers and C++ (which is admittedly limited) suggests that this should make the surface pointed at by box equal to the surface pointed at by boxCopy. Instead, the boxCopy surface becomes a copy of box. I have no idea how boxCopy can be changed by this line of code, but it seems like that is what is happening.
I'm not sure I completely understand your problem, but hopefully this can help. It's easier to update the text whenever the surface it's drawn on is to be updated, rather than updating it whenever the actual text is updated. It might not be as optimized performance-wise, but I would say it's easier in most cases.
A typical program loop would include a re-rendering of a surface representing the screen, followed by an SDL_Flip of this surface. You can of course optimize your re-rendering so you only render what has actually been updated since the last frame. Is that what you're working on, perhaps? If so, and if you use the method below, you should be aware that the new text only covers the size of the new text and not the entire old text. I usually solve this by first drawing a filled rectangle and then the new text.
Here is a TTF example showing how text can be drawn on a surface (here called m_Screen, which is the surface flipped to the screen every frame) in the simple case where I have one background color only:
void drawText(const char* string, int x, int y,
              int fR, int fG, int fB, int bR, int bG, int bB)
{
    SDL_Color foregroundColor = { fR, fG, fB };
    SDL_Color backgroundColor = { bR, bG, bB };
    SDL_Surface* textSurface = TTF_RenderText_Shaded(m_Font, string,
                                                     foregroundColor,
                                                     backgroundColor);
    SDL_Rect textLocation = { x, y, 0, 0 };
    SDL_BlitSurface(textSurface, NULL, m_Screen, &textLocation);
    SDL_FreeSurface(textSurface);
}
Notice that this has been done before calling drawText (with some suitable font size):
m_Font = TTF_OpenFont("arial.ttf", size);
And this is done at cleanup:
TTF_CloseFont(m_Font);
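A rough per-frame usage sketch of that idea (the coordinates, colours, buffer size and the renderScore name are assumptions for illustration, not from the original code):
void renderScore(int score)
{
    char buf[64];
    snprintf(buf, sizeof(buf), "Score: %d", score);

    // Erase the area of the previous text with the background colour first,
    // otherwise old glyphs stay visible behind a shorter new string.
    SDL_Rect textArea = { 20, 20, 200, 30 };
    SDL_FillRect(m_Screen, &textArea, SDL_MapRGB(m_Screen->format, 0, 0, 0));

    drawText(buf, 20, 20, 255, 255, 255, 0, 0, 0); // white text on a black background
    SDL_Flip(m_Screen);                            // present the updated frame
}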

How to get text width (DirectX, C++)

I'm working on a GUI project with d3d9 & d3dx9. I create fonts using the D3DXCreateFont function (C++). Everything is working fine, but I need a function that determines the width in pixels of a specific text.
Something like this:
char text[64] = "Heyho - Blablabla";
GUIFont* hFont = gui->fonts->CreateFont("DefaultFont", "Arial", 18);
int width = hFont->GetTextWidth(text);
[...]
The part with gui->fonts->CreateFont is all working fine; it's my way of creating and storing fonts. Ignore that part, it's all about GetTextWidth. My CreateFont function initializes the GUIFont object. The actual D3D font is stored in an LPD3DXFONT.
I really hope you can help me with this one, I am pretty sure it's possible - I just don't know how. Thanks for reading, and I hope you have a clue.
You can use the DT_CALCRECT flag with the ID3DXFont DrawText function to get the required size of the rectangle enclosing the text.
So, if you want to get just the width, you might have a function something like this:
int GetTextWidth(const char *szText, LPD3DXFONT pFont)
{
    RECT rcRect = {0, 0, 0, 0};
    if (pFont)
    {
        // calculate the required rect
        pFont->DrawText(NULL, szText, strlen(szText), &rcRect, DT_CALCRECT,
                        D3DCOLOR_XRGB(0, 0, 0));
    }
    // return the width
    return rcRect.right - rcRect.left;
}
Obviously, you can also extract the height too if you need it.
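Usage would then just be something like this, with pFont being the LPD3DXFONT you already create:
char text[64] = "Heyho - Blablabla";
int width = GetTextWidth(text, pFont); // width of the string in pixels for this font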

Getting a bitmap to change colour when a limit is reached

Okay, I am having some problems with being able to change bitmaps when a certain parameter is greater than another. I am a massive newbie to this and my coding isn't great (at all). I have code that reads the limits (parameters) and displays them as text, which is this:
CFont* def_font = argDC->SelectObject(&m_Font);
CString csText;
int StartPos = WindowRect.Width()/5;
CRect TextRect(StartPos, WindowRect.top + 5, StartPos + 100, WindowRect.top + 35);
csText.Format(_T("%.2ft"), argSystemDataPtr->GetMaxSWL());
int32_t iSWLDigits = csText.GetLength();
if (iSWLDigits < m_SWLDigitsNum)
{
    m_RedPanelBitmap.LoadBitmapW(IDB_BITMAP_PANEL_RED);
    //argDC->FillSolidRect(TextRect, RGB(255, 255, 255));
}
m_SWLDigitsNum = iSWLDigits;
argDC->DrawText(csText, TextRect, DT_LEFT);
The bitmaps that are usually displayed are green, but if a limit is breached, like the one above, then I want the bitmap to change to a red one. Here is what I've got for the green ones:
CRect PanelRect1, PanelRect2;
CRect PanelsRect(WindowRect);
const int BarHeight = 30;
PanelsRect.OffsetRect(0,m_bShowTitleBar?BarHeight:-BarHeight);
PanelsRect.DeflateRect(0,m_bShowTitleBar?BarHeight*-1:BarHeight);
m_GreenPanelBitmap.Detach();
m_GreenPanelBitmap.LoadBitmapW(IDB_BITMAP_PANEL_GREEN);
CBitmap* pOld = memDC.SelectObject(&m_GreenPanelBitmap);
BITMAP bits;
m_GreenPanelBitmap.GetObject(sizeof(BITMAP),&bits);
PanelRect1.SetRect(0,PanelsRect.top, PanelsRect.right /2 , PanelsRect.Height()/3);
PanelRect2.SetRect(0,PanelsRect.top+PanelRect1.Height(), PanelsRect.right /2 ,(PanelsRect.Height()/3) + PanelRect1.Height());
//Now draw the Panels
if (pOld != NULL)
{
    argDC->StretchBlt(PanelRect1.left, PanelRect1.top, PanelRect1.Width(), PanelRect1.Height(),
                      &memDC, 0, 0, bits.bmWidth-1, bits.bmHeight-1, SRCCOPY);
    argDC->StretchBlt(PanelRect2.left, PanelRect2.top, PanelRect2.Width(), PanelRect2.Height(),
                      &memDC, 0, 0, bits.bmWidth-1, bits.bmHeight-1, SRCCOPY);
    memDC.SelectObject(pOld);
}
I would be extremely grateful for any help. I understand there is probably a simple answer, but I've been scratching my head over it and can't seem to find an answer anywhere else on how to change m_GreenPanelBitmap to m_RedPanelBitmap when this statement is true:
if (iSWLDigits < m_SWLDigitsNum)
Well, I do think your question is a bit messy, but...
In the second code snippet you posted (I suppose from an OnPaint method in a dialog) you are displaying the green bitmap by using StretchBlt.
If your problem is that you need to display one bitmap or the other depending on a condition, you should load both images (maybe you can do that elsewhere to avoid loading the images every time the dialog is painted) and then display the one you really need based on the condition. Something like this:
bool bCondition = /*whatever*/;
m_GreenPanelBitmap.LoadBitmapW(IDB_BITMAP_PANEL_GREEN);
m_RedPanelBitmap.LoadBitmapW(IDB_BITMAP_PANEL_RED);
CBitmap* pBitmapToDisplay = bCondition ? &m_GreenPanelBitmap : &m_RedPanelBitmap;
CBitmap* pOld = memDC.SelectObject(pBitmapToDisplay);
BITMAP bits;
pBitmapToDisplay->GetObject(sizeof(BITMAP), &bits);
PanelRect1.SetRect(0, PanelsRect.top, PanelsRect.right / 2, PanelsRect.Height() / 3);
PanelRect2.SetRect(0, PanelsRect.top + PanelRect1.Height(), PanelsRect.right / 2, (PanelsRect.Height() / 3) + PanelRect1.Height());
argDC->StretchBlt(PanelRect1.left, PanelRect1.top, PanelRect1.Width(), PanelRect1.Height(),
                  &memDC, 0, 0, bits.bmWidth - 1, bits.bmHeight - 1, SRCCOPY);
memDC.SelectObject(pOld);
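If you want to avoid reloading the bitmaps on every paint, a rough sketch would be to load them once up front, for example in OnInitDialog (CPanelDlg is a made-up dialog class name; the member names are taken from your snippet):
BOOL CPanelDlg::OnInitDialog()
{
    CDialog::OnInitDialog();

    // Load both panel bitmaps once; OnPaint then only chooses which one to select.
    m_GreenPanelBitmap.LoadBitmapW(IDB_BITMAP_PANEL_GREEN);
    m_RedPanelBitmap.LoadBitmapW(IDB_BITMAP_PANEL_RED);

    return TRUE;
}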
Maybe with a more detailed question we would be able to help you more.