So I'm building this game engine thingy, and I found it VERY hard to create any kind of overlay with debug information in the main game window with D3D11, or to draw text at all, so I thought I'd create another window to contain my debug data.
I got the window created fine and all, but I have no idea how to write my debug info into it. I do not want to use the Windows Forms designer, as that would mean converting my project into a CLR project, which I do not want.
I have been googling for at least 3 hours now (honest) and tried various solutions, but none of them seemed practical to use, or they simply didn't work.
The debug info I'd like to write originates from global float values. An example would be CAM_POS_X, which holds a floating-point value indicating which X coordinate the camera is currently at.
Something like this is desired:
|SiriusAlpha 0.1 Debug window_ |
|Current X position: CAM_POS_X|
|Current Y position: CAM_POS_Y|
|Current Z position: CAM_POS_Z|
|Current YAW: CAM_YAW______|
|Current PITCH: CAM_PITCH___|
|Current FPS: CUR_FPS_______|
All of these values are not necessarily floating-point variables. They could be strings, doubles, integers or even booleans.
If anyone would be willing to explain to me how to do this in D3D11, so I could skip the whole debug window shenanigans, I'd be even happier.
Otherwise, I'd be delighted if somebody could explain to me how this is done.
Have you tried TextOut()? Read the article on MSDN. You should already have a device context; the rest is quick and easy.
The TextOut function writes a character string at the specified location, using the currently selected font, background color, and text color.
Printing to a string is trivial.
wchar_t buf[128];
swprintf(buf, 128, L"Current X Position: %f", CAM_X_POS);
TextOutW(yourDC, screenXPos, screenYPos, buf, (int)wcslen(buf));
I haven't tested this, but from the MSDN documentation this should work fine.
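To connect that to the separate debug window: here is a minimal, untested sketch of a WM_PAINT handler, assuming g_debugWnd is the debug window's HWND and the CAM_*/CUR_FPS globals are the ones from the question (all names here are placeholders for your own). Since the values aren't all floats, you just vary the format string per line:

// Hypothetical sketch - g_debugWnd and the globals are assumed names.
case WM_PAINT: {
    PAINTSTRUCT ps;
    HDC dc = BeginPaint(g_debugWnd, &ps);
    wchar_t buf[128];
    int n = swprintf(buf, 128, L"Current X position: %.2f", CAM_POS_X);
    TextOutW(dc, 10, 10, buf, n);
    n = swprintf(buf, 128, L"Current FPS: %d", CUR_FPS);  // an int this time
    TextOutW(dc, 10, 30, buf, n);
    EndPaint(g_debugWnd, &ps);
    return 0;
}

To refresh the values, call InvalidateRect(g_debugWnd, NULL, TRUE) once per frame from your game loop, which queues a new WM_PAINT.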
Currently I'm using this font in my C++ program:
-misc-fixed-medium-r-normal--12-*-*-*-*-*-iso8859-15
where '12', the size, is also the font size I'm currently using with Linux Mint 18.1.
But when my program draws a string, it is shown very small! It looks like it has a size of '6'!
Do I need to double the font size for my program, or something like that?
TIA
Regards
Earlybite
I was searching the internet for some hours, here as well, but I couldn't find a solution. Also, in the "pre-version" of my program I couldn't see the difference, because there the drawing with XLib and DrawString was normal.
I also noticed that even size = 40 made no difference compared to e.g. size = 20, so there had to be a difference in the code.
So I went through the pre-version code line by line, and at last I found that little line: XSetFont(), which makes drawing strings normal.
E.g. like that:
XSetFont(mDisplay, vGC, this->mFontPtr.fid); // <-- HERE!
vGCVal.foreground = mXForeColorA->X_Color.pixel;
XChangeGC(mDisplay,vGC, GCForeground, &vGCVal);
XDrawString(mDisplay, vPix, vGC, x, y, nDrawString.c_str(), (int) nDrawString.length());
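For completeness, a minimal sketch of the whole sequence under the same assumptions as the snippet above (mDisplay, vGC, vPix, the coordinates and nDrawString are the question's variables; the XLFD string is the one from the question):

XFontStruct *vFont = XLoadQueryFont(mDisplay,
    "-misc-fixed-medium-r-normal--12-*-*-*-*-*-iso8859-15");
if (vFont) {
    XSetFont(mDisplay, vGC, vFont->fid);   // select the font into the GC
    XDrawString(mDisplay, vPix, vGC, x, y,
                nDrawString.c_str(), (int) nDrawString.length());
    XFreeFont(mDisplay, vFont);            // release when done drawing
}

Without the XSetFont() call, the GC keeps whatever default font the server picked, which is why the requested size never took effect.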
EDIT: Even though the problem still exists, I haven't been able to reproduce it frequently enough to examine it closer. See more info at the end of the question.
I started to develop a game, and I am currently writing the basic library for it. I'm using the D programming language with SDL2 and OpenGL 3 (using the Derelict3 bindings), on Linux Mint 13 (Maya). The compiler is the DMD64 D Compiler v2.067.1, and I rebuild the binary each time with 'rdmd'.
To render (changing) text, I create glyphs on-demand. The piece of code I use for this is:
class Font {
    ...
    // Render a single glyph, caching the resulting texture.
    Texture render(char c) {
        if(!(c in rendered)) rendered[c] = texture(to!string(c));
        return rendered[c];
    }

    // Render an arbitrary string to a new texture.
    Texture texture(string text) {
        SDL_Color color = {255, 255, 255, 255};
        auto bitmap = TTF_RenderText_Blended(
            font,
            std.string.toStringz(text),
            color
        );
        if(!bitmap) {
            throw new TTFError(
                "TTF_RenderText_Blended: " ~
                to!string(TTF_GetError()) ~ ": '" ~ text ~ "'"
            );
        }
        auto texture = new Texture(bitmap);
        SDL_FreeSurface(bitmap);
        return texture;
    }
The problem is that this fails purely randomly. Sometimes it works without any problems. Interestingly, when it fails to render a glyph, it will fail to render that same glyph over and over again. Here is an example, caught via the exception I throw:
...
TTF_RenderText_Blended: Text has zero width: '9'
TTF_RenderText_Blended: Text has zero width: '6'
TTF_RenderText_Blended: Text has zero width: '9'
TTF_RenderText_Blended: Text has zero width: '6'
TTF_RenderText_Blended: Text has zero width: '9'
TTF_RenderText_Blended: Text has zero width: '6'
...
(I'm printing the score to the screen; the other numbers show up fine except those few.) The numbers TTF_RenderText_Blended fails to render vary from run to run, and, as mentioned, from time to time it renders all the numbers.
One detail: the static strings I render before entering the game loop have never failed to render, just the single letters I use for the changing texts.
I'm pretty much out of ideas, and I haven't found anything related to this problem by searching the web. Any ideas of where to look for solutions are very much appreciated.
CURRENT SITUATION: I updated the compiler to DMD 2.067.1 and the problem remains (compilers used so far: 2.066.1, 2.067.1). The whole - let's say - project family is on GitHub at the moment:
https://github.com/mkoskim/games
The text glyph rendering function is located in this file:
https://github.com/mkoskim/games/blob/master/engine/ext/font.d
...and it is used from here:
https://github.com/mkoskim/games/blob/master/engine/ext/gui/label.d
The problem occurs mainly/most frequently in the pacman game (although very seldom just right now):
https://github.com/mkoskim/games/tree/master/testbench/pacman
If you want to try it out, first read the (hopefully complete enough) installation instructions:
https://github.com/mkoskim/games/blob/master/INSTALL
The project is made for 64-bit Linux Mint Maya, and it is currently not that user-friendly or portable from a building perspective. Pacman is the only demo that (hopefully) works without a game controller. After successfully installing the required libraries and tools, you can build it with the command:
games/testbench/pacman$ make default
I ran into the exact same issue, and for me it was fixed by keeping the SDL_RWops structure used to create the font (with TTF_OpenFontRW) alive for the whole lifetime of the TTF_Font created from it. I saw you're creating the font with TTF_OpenFontRW as well, so I assume this will fix it for you too. It looks like SDL_ttf relies on the stream being kept alive; it reads freed memory otherwise.
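In code, the fix looks something like this sketch (plain SDL2/SDL_ttf C API; the file name and point size are placeholders). Passing freesrc = 0 makes both lifetimes explicit:

SDL_RWops *rw = SDL_RWFromFile("font.ttf", "rb");
TTF_Font *font = TTF_OpenFontRW(rw, 0, 16);   // freesrc = 0: we own rw
// ... use font for as long as needed; rw must stay valid all along ...
TTF_CloseFont(font);
SDL_RWclose(rw);                              // only now release the stream

Alternatively, pass freesrc = 1 and never touch the RWops again yourself; SDL_ttf will then free it when the font is closed.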
I know this question is a little bit outdated, but I may have had a similar problem.
I fixed it by simply calling SDL_DestroyTexture() every time before I used TTF_RenderText_Blended() :)
My application has two pictures embedded in the frame. My code is as follows:
wxMemoryInputStream istream1(Bild_png, sizeof Bild_png);
wxImage Bild_png(istream1, wxBITMAP_TYPE_PNG);
new wxStaticBitmap(p_img, wxID_ANY, wxBitmap(Bild_png));
vbox->Add(p_img, 0);
(vbox is the sizer)
When I start the app, I have a "T-" in the upper-left corner in both bitmaps. When I change the notebook item ("screen") and get back to the first screen (where the bitmaps are), the "T-" has disappeared...
How can I fix it so that I never see the failure?
I had to call Layout() on the topmost sizer. That solved my problem. It means, at the end:
vbox->Layout();
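For context, a sketch of where that call belongs, assuming vbox is the frame's top-level sizer as in the question (the surrounding lines are illustrative):

vbox->Add(p_img, 0);
// ... add the remaining controls ...
SetSizer(vbox);   // inside the frame's constructor
vbox->Layout();   // force the sizer to (re)position its children now

Calling Layout() makes the sizer recompute child positions immediately instead of waiting for the next size event, which is why the stray artifact disappears.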
@catalin, I don't think posting around 2000 lines of source code would be a better way. I chose this little snippet because it says everything that's needed. A wxWidgets expert gave me - with these four lines - the hint that something is at fault with the sizer, not with the pic.
Haven't you been advised before to search the samples? For example the widgets sample; just grep for wxStaticBitmap and I'm sure you'll find something useful.
This is just a poor way of asking a question.
In your C++ snippet you're using Bild_png even before it was declared - really? Then you mention both bitmaps and notebookitem("screen"), which are just unknown items to anyone else but you.
IMO it is just too... wrong to receive a good answer...
This is driving me up the wall..
I've got a very simple SDL2 program.
It has an array of 3 SDL_Texture pointers.
These textures are filled as follows:
SDL_Texture *myarray[15];
SDL_Surface *surface;
for(int i=0;i<3;i++)
{
    char filename[] = "X.bmp";
    filename[0] = i + '0';   // gives "0.bmp", "1.bmp", "2.bmp"
    surface = SDL_LoadBMP(filename);
    myarray[i] = SDL_CreateTextureFromSurface(myrenderer, surface);
    SDL_FreeSurface(surface);
}
This works, no errors.
In the main loop (which is just a standard event loop waiting for SDL_QUIT, keystrokes, and a user event which an SDL_Timer puts in the event queue every second) I just do (for the timer-triggered event):
idx = (idx+1) % 3; // idx is a global var, initially 0
SDL_RenderClear(myrenderer);
SDL_RenderCopy(myrenderer, myarray[idx], NULL, NULL);
SDL_RenderPresent(myrenderer);
This works fine for 0.bmp and 1.bmp, but the 3rd image (2.bmp) simply shows as a black field.
This is structural.
If I alternate the first 2 images they are both fine.
If I alternate the 2nd and 3rd image the 3rd image doesn't show.
If I use more than 3 images, then images 3 and upwards show as black.
Loading order doesn't matter. It starts going wrong with the 3rd image loaded from disk.
All images are properly formatted BMPs.
I even saved 2.bmp back to disk under a different name using SDL_SaveBMP() after it was loaded, to make sure it got loaded into memory OK. The new file is bit-for-bit identical to the original.
This program, without modifications and the same bmp files, works fine on OSX (XCode5) and Windows (VC++ 2012 Express).
The problem only shows on the Raspberry Pi.
I have placed explicit error checks on every call that can leave a result/error-code (not shown in the samples above for brevity) but all of them show "no error".
I have used the latest stable sources from www.libsdl.org and compiled as instructed (configure, make, make install, etc.).
Anybody got any idea what could be going on?
P.S.
Keyboard input doesn't seem to work on my Pi either, but I haven't delved into that yet.
Answering myself as I finally figured it out myself...
I finally went back to the README-raspberrypi.txt that came with the SDL2 sources.
I didn't read it carefully enough the first time around...
Problem 1: I'm running on a full-HD display. The Pi's default GPU memory is 64 MB, which is not enough for large displays and double buffering. As suggested in the README, I increased this to 128 MB, and this solved the black-image problem.
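For reference, on a stock Raspbian image the split can be raised either through raspi-config (Memory Split) or by putting this line in /boot/config.txt and rebooting:

gpu_mem=128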
Problem 2: Text input wasn't working because my user account was not in the input group. I had added the default "pi" account to the input group initially, but when I later started using another account I forgot to add that user to the group.
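Assuming a standard setup, adding the account looks like this (log out and back in for it to take effect):

sudo usermod -a -G input youruser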
In short: Caught by my own (too) quick skimming of the documentation.
What would be the easiest, most cross-platform way to create a bitmap (a 2D array of integers, or a quad-tree) and display it on the screen? I would also like to be able to save it to a file.
Thanks
It has to be said - the easiest and most cross-platform approach is probably to use printf, with something like:
// y and x loops would surround this...
unsigned char grayscaleValue = /* something */;
printf("%c", grayscaleValue < 128 ? ' ' : 'X');
You could use more than two brightness values.
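To make that concrete, here is a small self-contained sketch of the same idea; the dimensions and the gradient fill are invented just to have data. It also writes the array out as a binary PGM, a trivially simple image format most viewers can open, which covers the save-to-file part:

#include <cstdio>

int main() {
    const int W = 32, H = 16;
    unsigned char bitmap[H][W];

    // Placeholder data: a horizontal gradient. Use your real pixels here.
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            bitmap[y][x] = (unsigned char)(x * 255 / (W - 1));

    // "Display": one character per pixel, two brightness levels.
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x)
            printf("%c", bitmap[y][x] < 128 ? ' ' : 'X');
        printf("\n");
    }

    // Save as binary PGM (P5): tiny ASCII header, then raw bytes.
    FILE *f = fopen("out.pgm", "wb");
    if (f) {
        fprintf(f, "P5\n%d %d\n255\n", W, H);
        fwrite(bitmap, 1, W * H, f);
        fclose(f);
    }
    return 0;
}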
I also like both Qt and JUCE; they're both relatively straightforward cross-platform GUI toolkits. They can both be got up and running in an evening or two... the ASCII printout (and its variations) can be done in an hour.