Unicode Input Handling in Games - C++

I have a game that requires me to allow players to chat with each other via network. All is well, except the part where players can type in Unicode input.
So, the question can be split into two parts:
When players type, how do I capture input? I have done this before via the game's input handling (polling); however, it is not as responsive as something like Windows Forms.
After I capture input into a string, how do I output it using TrueType Fonts? The reason I ask is that usually I would build bitmap fonts at the start of the game from all the text used in the game. But with Unicode input, there are nearly 10k characters that are needed, which is quite impossible to build at the start of the game.
P.S. My target input languages are more specific to Chinese, Korean and Japanese.

For Input
Use SDL_EnableUNICODE to enable Unicode input handling (a minimal init sketch follows this list)
Receive the SDL_KeyboardEvent as usual
Use the unicode member of SDL_keysym to get the Unicode code point
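A minimal sketch of that initialization, using the SDL 1.2 API (SDL_EnableUNICODE was removed in SDL 2, which uses SDL_StartTextInput() and SDL_TEXTINPUT events instead):

#include <SDL.h>

int main(int, char **)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_SetVideoMode(640, 480, 0, SDL_SWSURFACE);

    // Ask SDL to translate key presses; SDL_keysym.unicode is filled in from now on.
    SDL_EnableUNICODE(1);
    SDL_EnableKeyRepeat(SDL_DEFAULT_REPEAT_DELAY, SDL_DEFAULT_REPEAT_INTERVAL);

    // ... run the usual event loop; see the polling example further down ...

    SDL_Quit();
    return 0;
}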
For Rendering
If the needed font size is reasonably small, say 16px, you actually could just render it all to a single texture; you can fit a minimum of 4096 glyphs on a 1024x1024 texture at that size, a bit more when you pack them tightly (see fontgen for example code). That should be enough for common chat, but not enough to fit all the glyphs of a TTF file.
If you don't want to use a larger texture size, you have to generate the glyphs on demand: create the textures as usual and then use glTexSubImage2D to upload new glyphs into them.
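A minimal sketch of that on-demand upload, assuming legacy fixed-function OpenGL and an 8-bit alpha atlas (where the glyph bitmaps come from, e.g. FreeType, is up to you):

#include <GL/gl.h>

// Create an empty 1024x1024 single-channel atlas once at startup.
GLuint CreateGlyphAtlas()
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, 1024, 1024, 0,
                 GL_ALPHA, GL_UNSIGNED_BYTE, NULL);   // allocate, no data yet
    return tex;
}

// When a glyph is first needed, copy its bitmap into a free slot of the atlas.
void UploadGlyph(GLuint tex, int slotX, int slotY,
                 int w, int h, const unsigned char *pixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);            // glyph rows are tightly packed
    glTexSubImage2D(GL_TEXTURE_2D, 0, slotX, slotY, w, h,
                    GL_ALPHA, GL_UNSIGNED_BYTE, pixels);
}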
Another alternative is to not use textures for glyphs, but for the text itself. That way you bypass all the trouble that glyph generation produces. But it's probably not such a good idea for non-static, editable text.

When players type, how do I capture input?
That depends on what you use, I guess. I'm not familiar with SDL. On Linux you can use the standard X functions and event loop; it works well (it is used in Quake, for example, so it should be responsive enough).
After I capture input into a string, how do I output it using TrueType Fonts?
You should have a look at FreeType2 library. It lets you load TrueType fonts, and retrieve the glyph (the image) of any character.
But with Unicode input, there are nearly 10k characters that are needed, which is quite impossible to build at the start of the game.
I have the same problem. I guess a cache manager that keeps the MRU (most recently used) characters would do the trick. A bit more complicated than a simple static bitmap, though.
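A sketch of what such a cache could look like; this is only one possible shape, and GlyphBitmap/RenderGlyph are hypothetical placeholders for whatever your glyph renderer produces:

#include <cstddef>
#include <list>
#include <unordered_map>
#include <utility>

struct GlyphBitmap { /* pixels, metrics, atlas slot, ... */ };

// Keeps the most recently used glyphs resident and evicts the least
// recently used one when the cache is full.
class GlyphCache {
public:
    explicit GlyphCache(std::size_t capacity) : capacity_(capacity) {}

    const GlyphBitmap &Get(wchar_t ch) {
        auto it = index_.find(ch);
        if (it != index_.end()) {
            // Cache hit: move the entry to the front (most recent).
            order_.splice(order_.begin(), order_, it->second);
            return it->second->second;
        }
        if (order_.size() == capacity_) {
            // Cache full: drop the least recently used glyph.
            index_.erase(order_.back().first);
            order_.pop_back();
        }
        order_.emplace_front(ch, RenderGlyph(ch));
        index_[ch] = order_.begin();
        return order_.front().second;
    }

private:
    // Placeholder: render the glyph with FreeType (or similar) here.
    GlyphBitmap RenderGlyph(wchar_t /*ch*/) { return GlyphBitmap{}; }

    std::size_t capacity_;
    std::list<std::pair<wchar_t, GlyphBitmap>> order_;
    std::unordered_map<wchar_t,
        std::list<std::pair<wchar_t, GlyphBitmap>>::iterator> index_;
};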

Here is some code showing how to capture keyboard input with SDL.
First of all you need to poll events from SDL by calling EventPoll (below).
You can do that whenever you are ready to process input, or regularly at
a fixed interval, storing keys and keyboard status in internal tables.
void EventPoll ()
{
    SDL_Event event;

    while (SDL_PollEvent (&event)) {
        switch (event.type) {
        case SDL_KEYDOWN:
        case SDL_KEYUP:
            // event.key is the SDL_KeyboardEvent member of the event union
            KeyHandler (&event.key);
            break;
        // handle other events
        }
    }
}

void KeyHandler (SDL_KeyboardEvent *event)
{
    SDLKey keySym  = event->keysym.sym;
    // Only filled in once SDL_EnableUNICODE(1) has been called
    Uint16 unicode = event->keysym.unicode;
    int keyState   = (event->state == SDL_PRESSED);

    // process key info, e.g. put key into a buffer and
    // store keyboard state
}
Here is a link to a document describing methods to render text with OpenGL: http://www.opengl.org/resources/features/fontsurvey/
What you may want to do is to capture keyboard input and render it on the fly using the proper font(s) you have preloaded.

I've done no game development myself, so I have just a vague idea of how things work there, but here are my 2 cents:
Don't cache all the glyphs at the start of the program. Instead, when you have to display a chat string, render the whole string on-the-fly to some new texture. Keep this texture in memory until a time when it is unlikely that it will be needed again (say, after the chat window is closed). Perhaps you can re-render the whole chat window when it gets updated - then you would only have one texture to worry about.
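A sketch of that render-the-whole-string idea with SDL_ttf (the font loading and the texture upload are assumed to exist elsewhere in your engine):

#include <SDL.h>
#include <SDL_ttf.h>

// Render a whole UTF-8 chat line to a new surface on the fly; the result
// can be uploaded to a texture and kept until the line changes.
SDL_Surface *RenderChatLine (TTF_Font *font, const char *utf8Text)
{
    SDL_Color white = { 255, 255, 255, 0 };
    // Blended rendering produces antialiased, alpha-blended glyphs.
    return TTF_RenderUTF8_Blended (font, utf8Text, white);
}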

As far as display goes, I've had very good luck with the caching system described in this tutorial, transliterated to C++.
For fonts, GNU Unifont has full BMP glyph coverage, available in convenient TTF form.

Related

Text deleted by backspace is not being updated on my arduino tft display (adafruit gfx library)

For university, we need to make a game in Unity that is controlled with an Arduino. My idea was a hacking game where the Arduino acts as the 'hacking device' when hacking something in the game. The Arduino would have a screen, and on that screen would be a basic command-line interface that lets me input simple commands to 'hack', but I've been having trouble with text and clearing it.
I've been able to use Unity to send typed characters to the display, as well as a backspace function (pressing backspace removes the last character in the string).
I first had an issue with clearing all the text when writing (calling tft.print doesn't clear any previous text). I was using fillScreen, which was slow. I found out setTextColor has a second argument (a background color) that lets me overdraw text of a certain color with a different color. Setting it to black would essentially clear it.
This made it update pretty much instantly when writing to the screen, but I now had a new issue: backspace no longer worked.
My understanding is that when a character is removed, its cell is never redrawn by setTextColor, leaving it on the screen until a restart or fillScreen is called.
I'm not really sure how to solve this, and all Google searches turn up little to no help.
Here's my code for updating the text:
void updateString(char c) {
    tft.setTextColor(WHITE, BLACK);
    if (c != '<') {
        // Add new character to end of string
        str.concat(String(c));
    } else if (c == '<' && str.length() > 2) {
        // Remove last character in string
        str.remove(str.length() - 1);
    }
    // Set cursor back to 0,0
    tft.setCursor(0, 0);
    // Display text
    tft.print(str);
}
I'd appreciate any help.
Maybe use a function similar to tft.clear() each time you refresh the screen, or draw a filled rectangle of the background color over the text so it looks like it has been erased, then rewrite the text.
Sounds like you are using proportionally-spaced fonts instead of the original classic fonts that ship with Adafruit_GFX. Historically, when using the default classic fonts, one could set the background color option of the text to the same color as the background of the screen (usually black). This would overwrite the old screen contents with new data, and it worked because characters in the classic fonts are a uniform size. When using proportionally-spaced fonts, the background color option for the font is disabled by design.
This was presumably because of memory requirements and speed on slower AVRs. Regardless, when using proportionally-spaced fonts the background color feature won't work, because non-uniformly sized characters overlap the regions of adjacent characters.
There are two work-arounds to this. Adafruit says that in order to replace previously drawn text when using custom fonts you can either:
1. Use getTextBounds() to determine the smallest rectangle that encloses a string, and then erase that area using fillRect() prior to drawing the new text, OR
2. Create an offscreen bitmap using GFXcanvas1 for a fixed-size area, draw the text into the GFXcanvas1 object, and then copy it to the screen using drawBitmap().
Note that options 1 & 2 above are mutually exclusive. The second method requires more memory. The first method isn't perfect and produces some small amount of flicker, but is in general acceptable if coded carefully.
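A sketch of the first method, assuming the tft object and global str from the question, plus a copy of the previously drawn string; redrawText is a hypothetical helper name:

// Erase the old text's bounding box, then draw the new text.
void redrawText(const String &oldText, const String &newText) {
    int16_t x1, y1;
    uint16_t w, h;
    // Smallest rectangle that enclosed the old string when drawn at (0,0)
    tft.getTextBounds(oldText, 0, 0, &x1, &y1, &w, &h);
    tft.fillRect(x1, y1, w, h, BLACK);   // erase just that area
    tft.setCursor(0, 0);
    tft.print(newText);
}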
I hope that I have understood the nature of your problem and answered it in a satisfactory manner. If nothing else, at least now you know why custom fonts will not work with the so-called background color feature that works with the original 'classic' Adafruit fonts.
Nikki Cooper

Adjusting line height in C++

I am attempting to put the final touches on a maze program I have been writing. I have used Unicode characters to draw the walls and paths; however, because of the (horizontal) line spacing I can't quite get it to look compact enough. I have attached two screenshots. I'm just printing a newline "\n" at the end of each row. Can the distance between lines be adjusted, or am I stuck with this "gappy" representation?
My output:
What I am trying to closely represent:
Assuming you aren't printing double newlines, this is outside the scope of standard C++; the language does not provide facilities for controlling the terminal in a standard way.
Solutions:
You could provide a launcher script which opens a new terminal window with a specific font and runs your app in it.
You could use some platform-specific method to change the background color (ANSI codes work in Unix-like terminals, or use the Win32 API on Windows, or the ncurses library on Unix-like environments) and print just spaces in different colors (see the sketch after this list).
Use a GUI library/framework to get complete control over what is drawn (I'd use Qt for a C++ GUI app).
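A minimal sketch of the ANSI approach; printing two spaces per cell also compensates for character cells being taller than they are wide:

#include <iostream>

int main()
{
    // 47 = white background, 40 = black background, 0 = reset attributes.
    const char *wall = "\x1b[47m  \x1b[0m";   // two colored spaces per maze cell
    const char *path = "\x1b[40m  \x1b[0m";

    const char *maze[] = { "#####", "#   #", "#####" };
    for (const char *row : maze) {
        for (const char *p = row; *p; ++p)
            std::cout << (*p == '#' ? wall : path);
        std::cout << '\n';
    }
    return 0;
}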
TBH if you want pixel-accurate rendering use a proper rendering API, such as OpenGL.
From a text rendering point of view, you don't say what you are rendering to. Assuming something like a terminal console or shell window, the layout beyond characters and newlines has nothing to do with your program; the visual representation is entirely determined by the shell you are rendering to.
Firstly, check that you are genuinely printing one line per maze scan line, and not interleaving spurious newlines. Assuming that is ruled out, the problem is that the Unicode glyph is not a full block. So you must either set the font or choose another glyph that is a full block (e.g. █ U+2588 FULL BLOCK).
Usually console windows are 80 characters wide by 22 or 24 characters high, and characters are 8 pixels wide by 19 pixels high. So it's very far from a square grid, and you might want to bias the maze to reflect that and provide a better visual appearance (e.g. make 2-pixel-wide vertical corridors much more common than 2-pixel-wide horizontal corridors).
Do check the binary image library's fonts; you might find them useful.
https://github.com/MalcolmMcLean/binaryimagelibrary

Print chess unicode characters in C++, and make characters square sized

I'd like to ask what's the simplest way of writing the chess Unicode characters to a console window in C++? (♙♘♗♖♕♔♟♞♝♜♛♚) They are part of the "Miscellaneous Symbols" block in Unicode. https://en.wikipedia.org/wiki/Chess_symbols_in_Unicode
I also want to print the characters with square size; right now my chess board is not square, because each character is a rectangle and not a square.
It'd also be good to be able to write with ordinary non-square characters below the chess board, but that might be impossible? To mix different fonts/formattings in the same console window?
Ok, thanks in advance! :)
The first part of your question, outputting those characters, is platform-dependent. Linux consoles often use UTF-8; if that is the case, you can encode the characters as UTF-8 and write them to standard output. On Windows you can use the Console API (the WriteConsole function):
#include <windows.h>

HANDLE handle = GetStdHandle(STD_OUTPUT_HANDLE);
DWORD written = 0;
// explicitly call the wide version (which always accepts UTF-16)
WriteConsoleW(handle, L"\u2658", 1, &written, NULL);
One caveat which is hard to work around is that you need a console font containing those characters.
For getting square cells, this is dependent on a lot of specifics about the way the console renders text. If it uses font substitution, then there is a chance the text will not actually be monospaced.
Now, if you have a console font with these characters, and if that font is monospaced, then you may be able to draw a square board by adding some spacing between the characters. You can use block elements like ▌ (U+258C LEFT HALF BLOCK) to draw the chequerboard: ▌♘▐█▌ ▐.
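For the Linux/UTF-8 case, a minimal sketch (it assumes the terminal and its font can display these code points; compile as C++17, since u8 literals change type in C++20):

#include <iostream>

int main()
{
    // u8 literals guarantee UTF-8 bytes regardless of the source file encoding.
    std::cout << u8"♜♞♝♛♚♝♞♜\n"
              << u8"♟♟♟♟♟♟♟♟\n";
    return 0;
}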

OS native 2D API vs OpenGL

Suppose I wanted to create a text editor from scratch.
I searched around and everyone suggested using OS-specific native 2D APIs (e.g. GDI+ on Windows or Xlib on Linux), especially for font rendering.
My question is: why isn't OpenGL suited to such a task? Why is it so hard to render antialiased text and controls, as in a text editor, with OpenGL, and why should I prefer the non-portable native 2D OS APIs?
Part of the difficulty is that OpenGL doesn't provide a font engine or any capabilities specifically for rendering text. In other words, it's not a matter of OpenGL being poorly suited to the rendering part of the task, just that OpenGL is missing a lot of pieces necessary to the task.
To render text under OpenGL, you'd typically start with some font engine to take (for example) a TrueType or OpenType font, and render a glyph from its info (e.g., FreeType). Then you need a text display engine to figure out how to render characters to display your strings decently. In a simple case like English, it has to handle things like kerning and leading. In a complex case like some Arabic scripts, you basically need kind of a feedback loop between the text rendering and the font rendering, because a glyph can take a different form depending on its context in the string.
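To give a flavor of those layers, here is a minimal FreeType sketch that walks a string and advances a pen position with kerning (error handling omitted; obtaining the FT_Face is assumed to happen elsewhere):

#include <ft2build.h>
#include FT_FREETYPE_H

// Advance a pen across a string, applying kerning between glyph pairs.
void LayoutLine(FT_Face face, const wchar_t *text)
{
    long penX = 0;                       // pen position in 1/64 pixel units
    FT_UInt prev = 0;
    for (; *text; ++text) {
        FT_UInt glyph = FT_Get_Char_Index(face, *text);
        if (prev && glyph) {
            FT_Vector kern;
            FT_Get_Kerning(face, prev, glyph, FT_KERNING_DEFAULT, &kern);
            penX += kern.x;              // nudge the pen for this glyph pair
        }
        FT_Load_Glyph(face, glyph, FT_LOAD_RENDER);
        // face->glyph->bitmap now holds the rendered glyph; blit it at penX/64
        penX += face->glyph->advance.x;  // advance to the next character
        prev = glyph;
    }
}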
In short, writing a text editor that renders its text via OpenGL means re-building a lot of a text rendering stack from the ground up.
If you don't care a lot about rendering quality, you might be able to get by with just rendering a few fonts to bitmaps, and displaying your text using them. This can simplify the code quite a bit, but a simple implementation will mean producing output that looks something like an MS-DOS command line. Even matching the output quality of, say, Windows 3.0 will take a fair amount of work. That's not to say it can't be done, but it could dwarf the difficulty of writing the editing part of the text editor.

Decoding Microsoft True Type Font Files

I am working on an embedded platform (STM32F407) with a TFT LCD as a display (480x800 px) and would like to make my user interface somewhat customizable by the end user. I figured the best source of fonts would be Windows-compatible ones, as they're the most common.
My current implementation uses my own custom-drawn font in a binary format and a descriptor table giving the character width and ASCII value, but having to draw my own font bit by bit is tedious.
I would like to read in a TrueType font file from an SD card and be able to use the different sized glyphs inside it, but I have not seen a straightforward explanation of how to actually achieve this magic. Can somebody point me to a good C/C++ example of what I am looking for?
Even better, as a way to iron out the kinks, I would like to make a simple gcc command-line program that will print out my input with a selected font using '#' as pixels. That way I can just worry about the implementation and not any other random bugs that might pop up.
Can anybody help me out?
Perhaps you can use the FreeType library.
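A sketch of exactly the command-line experiment described in the question, assuming FreeType is available on the desktop (link with -lfreetype); it loads a TTF given on the command line and dumps one glyph as '#' pixels:

#include <ft2build.h>
#include FT_FREETYPE_H
#include <cstdio>

// Usage: ./glyphdump font.ttf A
int main(int argc, char **argv)
{
    if (argc < 3) return 1;

    FT_Library lib;
    FT_Face face;
    if (FT_Init_FreeType(&lib) || FT_New_Face(lib, argv[1], 0, &face))
        return 1;

    FT_Set_Pixel_Sizes(face, 0, 32);                  // request ~32 px tall glyphs
    if (FT_Load_Char(face, argv[2][0], FT_LOAD_RENDER))
        return 1;

    // The rendered glyph is an 8-bit grayscale bitmap; threshold it to '#'.
    FT_Bitmap *bmp = &face->glyph->bitmap;
    for (unsigned y = 0; y < bmp->rows; ++y) {
        for (unsigned x = 0; x < bmp->width; ++x)
            putchar(bmp->buffer[y * bmp->pitch + x] > 127 ? '#' : ' ');
        putchar('\n');
    }

    FT_Done_Face(face);
    FT_Done_FreeType(lib);
    return 0;
}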
As duskwuff says: TTF is primarily a vector format, so you would need to write a renderer. You're better off using an image file to define the font, or using a bitmap font format like FNT (Windows) or BDF (Unix).
Here is my answer to my own question: AngelCode's BMFont and its usage. It lets you choose selected characters from the installed character set, pick a font, and export an image with a map file describing each character. Simple to use.