Unicode character for superscript shows a square box: ࠚ - c++

Using the following code to create a Unicode string:
wchar_t HELLO[20];
wsprintf(HELLO, TEXT("%c"), 0x2074);
When I display this on a Win32 control like a text box or a button, it gets mapped to a [] square.
How do I fix this?
I tried compiling with both Eclipse (MinGW) and Microsoft Visual C++ (2010).
Also, UNICODE is defined at the top.
Edit:
I think it might be something to do with my system, because when I visit http://en.wikipedia.org/wiki/Unicode_subscripts_and_superscripts, some of the Unicode characters don't appear.

The font you are using does not contain a glyph for that character. You will likely need to install some new fonts to overcome this deficiency.
The character you have picked out is 'SAMARITAN MODIFIER LETTER EPENTHETIC YUT' (U+081A). Perhaps you were after 'SUPERSCRIPT FOUR' (U+2074); you need to write that in hex: 0x2074.
Note you changed the question to read 0x2074, but the original version read 2074. Either way, if you see a box, that indicates your font is missing that glyph.

The characters you are getting from Wikipedia are expressed in hexadecimal, so your code should be:
wchar_t HELLO[20];
wsprintf(HELLO, TEXT("%c"), (wchar_t)0x2074); // or TEXT('\x2074')
If it still doesn't work, it's a font problem; if you need a pan-Unicode font, it seems that Code2000 is one of the most complete out there.
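A minimal sketch of that fix applied to a Win32 control, assuming a hypothetical hwndLabel handle to a static or edit control (and, as above, a font on that control which actually contains the glyph):
#include <windows.h>

void ShowSuperscriptFour(HWND hwndLabel)      // hwndLabel: any text-bearing control
{
    wchar_t hello[20];
    // 0x2074 (hex) is SUPERSCRIPT FOUR; plain 2074 (decimal) would be U+081A.
    wsprintfW(hello, L"x%c", (wchar_t)0x2074);   // produces "x⁴"
    SetWindowTextW(hwndLabel, hello);
}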
Funny fact: the character that has the decimal code 2074 (i.e. hex 81a) seems to actually be a box (or it's such a strange beast that even the image outline at FileFormat.Info is wrong). :)

Related

C++ char spacing in console output, UTF-16 characters

I'm making a game in the C++ console using UTF-16 characters to make it a little more interesting, but some characters are a different size than others. So when I print the level, everything after such a character is pushed further right than in the other rows. Is there any way to add spacing between characters with some console function? I tried to google for something helpful, but I haven't found anything.
I tried to change the font size via CONSOLE_FONT_INFOEX, but it changed nothing; maybe I implemented it in the wrong way, or it does not work with UTF-16 characters.
// i tried this
#include <windows.h>

CONSOLE_FONT_INFOEX cfi = {};                 // zero-init so the unused fields are defined
cfi.cbSize = sizeof(cfi);
GetCurrentConsoleFontEx(GetStdHandle(STD_OUTPUT_HANDLE), FALSE, &cfi); // start from the current font
cfi.dwFontSize.X = 24;
cfi.dwFontSize.Y = 24;
SetCurrentConsoleFontEx(GetStdHandle(STD_OUTPUT_HANDLE), FALSE, &cfi); // apply the new size
Unfortunately I expect that this will heavily depend on the particular console you're using. Some less Unicode-friendly consoles will treat all characters as the same size (possibly cutting off the right half of larger characters), and some consoles will cause larger characters to push the rest of the line to the right (which is what I see in the linked image). The most reasonable consoles I've observed have a set of characters considered "double-wide" and reserve two monospace columns for those characters instead of one, so the rest of the line still fits into the grid.
That said, you may want to experiment with different console programs. Can I assume you are on Windows? In that case, you might want to give Windows Terminal a try. If that doesn't work, there are other console programs available, such as msys's Mintty, or ConEmu.
So, after some intense googling I found the solution, and the solution is to fight fire with fire. Unicode includes the character THIN SPACE (U+2009), which is about 1/5 of a normal space, so if I put two of them plus one normal space after my problematic character, the output displays how I want. If anybody runs into a similar issue, Unicode has a lot of different-sized spaces; I found a website that shows all of them, with their properties.
fixed output picture
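A minimal sketch of that trick, assuming a Windows console written to with WriteConsoleW; the wide glyph used here (U+2694) is only a stand-in for whichever character misaligns in your level:
#include <windows.h>
#include <string>

int main()
{
    HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
    // Problematic wide glyph, then two U+2009 THIN SPACEs and one normal
    // space, so the characters that follow land back on the grid.
    std::wstring line = L"\u2694\u2009\u2009 next tile\n";
    DWORD written = 0;
    WriteConsoleW(out, line.c_str(), (DWORD)line.size(), &written, nullptr);
    return 0;
}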

Inkscape problems with special characters

In Inkscape I recently encountered a problem with special characters.
When I want to type special characters like é, è, I get Cyrillic letters instead.
Even when I copy and paste text (both from within the same document and from another document) that is displayed correctly, the pasted letters convert into Cyrillic.
For instance the é turns into и.
I cannot find any settings that could be the cause (or a possible solution). Also, my keyboard is not set to Russian/Cyrillic.
In the past I never had this problem. I am using Inkscape 0.48 on a Dutch Windows 8.1.
Some advice would be very welcome!
The problem seems to be in the font I am using; it appears to be a Cyrillic (ABengaly) font. Changing the font solves the problem.
(I am surprised I did not face this problem before; apparently I managed to avoid the combination of this font and the French language up until now.)

how to display extended ascii character in QTextEdit

All the ASCII codes greater than 127 are replaced by a diamond '?' symbol. How can I display those characters? I have an unsigned char buffer[1024] which contains values from 0 to 255.
Use the QString class's fromAscii() method. By default this will treat chars above 127 as Latin-1 chars. To change this behaviour, use the QTextCodec::setCodecForCStrings method to set the correct codec for your usage.
I believe Qt 5 may have taken out the setCodecForCStrings method.
EDIT: Adnan supplied the Qt 5 alternative to setCodecForCStrings; adding it to the answer for completeness.
The Qt 5 alternative for setCodecForCStrings is QTextCodec::setCodecForLocale(QTextCodec::codecForName("UTF-8"));
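A minimal sketch of that idea using QString::fromLatin1, which exists in both Qt 4 and Qt 5, so it sidesteps setCodecForCStrings entirely; the buffer contents are made-up sample bytes:
#include <QApplication>
#include <QTextEdit>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    // Hypothetical sample data: every byte value from 32 to 255.
    unsigned char buffer[1024];
    int n = 0;
    for (int i = 32; i < 256; ++i)
        buffer[n++] = static_cast<unsigned char>(i);

    // Decode the bytes explicitly as Latin-1 instead of relying on the
    // default char* conversion, so 0x80-0xFF map to U+0080-U+00FF.
    QString text = QString::fromLatin1(reinterpret_cast<const char *>(buffer), n);

    QTextEdit edit;
    edit.setPlainText(text);
    edit.show();
    return app.exec();
}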
This is a rabbit hole with no end. Qt does not fully support printing ASCII > 127, as it is not well defined. The current method is to use fromLocal8Bit(), which will take a char array and transform it into the "right" Unicode string (the only thing Qt supports printing).
QTextCodec::setCodecForLocale can be used to identify the character set you wish to transform from. Many codecs are supported, but for some reason IBM437 (the character set used by IBM PCs in the US for decades) is not, while several other codecs used in Europe, etc. are. Probably some characters in IBM437 were never assigned proper code points in Unicode, so transforming it isn't possible?
What's frustrating is that there are fonts with all 256 ASCII code points, but it is simply not possible to display these in Qt, since Qt only works with Unicode strings. There are a handful of glyphs it doesn't support, and the list seems to grow with newer versions of Qt. Currently I know of 9, 10, 12, 13, and 173. Some of these are for obvious reasons (usually you don't want to print a carriage-return glyph, though it did exist in DOS), but others used to work in Qt and now do not.
In my application, I resorted to creating a new font that has copies of the unprintable glyphs at higher Unicode code points, and translating them before printing them on the screen. It's quite silly, but Qt gave up on ASCII many years ago, so it's the best option I could find.
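A rough sketch of the fromLocal8Bit route described at the start of this answer; Windows-1252 stands in here only because it is a codec Qt does ship, since, as noted, IBM437 is not available:
#include <QString>
#include <QTextCodec>

QString decodeLegacyBytes(const char *buf, int len)
{
    // Make fromLocal8Bit() interpret the bytes as Windows-1252 rather than
    // whatever the system locale happens to be.
    if (QTextCodec *codec = QTextCodec::codecForName("Windows-1252"))
        QTextCodec::setCodecForLocale(codec);
    return QString::fromLocal8Bit(buf, len);
}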

what locale does wstring support?

In my program I used wstring to print out the text I needed, but it gave me random ciphers (due to a different encoding scheme). For example, I have this block of code.
wstring text;
text.append(L"Some text");
Then I use DirectX to render it on screen. I used to use wchar_t, but I heard it has portability problems, so I switched to wstring. wchar_t worked fine, but it seemed to only take English characters from what I can tell (the printout just totally ignores any non-English character entered), which was fine, until I switched to wstring: I only got random ciphers that looked like Chinese and Korean mixed together. Interestingly, my computer's locale for non-Unicode text is Chinese. Based on what I saw, I suspected it would render Chinese characters correctly, so I tried that, and it does display the characters correctly, but with a square in front (which is still a kind of incorrect display). I then guessed the encoding might depend on the language locale, so I switched the locale to English (US) (I use Win8); after a restart, the Chinese test character in my source file became some random stuff (my file is not saved in a Unicode format since all the text is English). Then I tried with English characters, but no luck: the display looks exactly the same and seems to have nothing to do with the locale. I don't understand why it doesn't display correctly and looks like Asian characters (even though I use the English locale).
Is there some conversion that should be done, or should I save my file in a different encoding format? The problem is I want to display English characters correctly, which is the default.
In the absence of code that demonstrates your problem, I will give you a correspondingly general answer.
You are trying to display English characters, but see Chinese characters. That is what happens when you pass 8 bit ANSI text to an API that receives UTF-16 text. Look for somewhere in your program where you cast from char* to wchar_t*.
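As an illustration only (not the asker's actual code), this is the kind of cast to look for, next to a proper conversion through the Win32 MultiByteToWideChar API:
#include <windows.h>
#include <string>

// Proper ANSI -> UTF-16 conversion for a NUL-terminated string.
std::wstring ansiToWide(const char *ansi)
{
    int len = MultiByteToWideChar(CP_ACP, 0, ansi, -1, nullptr, 0);  // length incl. NUL
    if (len <= 0) return std::wstring();
    std::wstring wide(len, L'\0');
    MultiByteToWideChar(CP_ACP, 0, ansi, -1, &wide[0], len);
    wide.resize(len - 1);                                            // drop the trailing NUL
    return wide;
}

// const char *text = "Some text";
// const wchar_t *bad = (const wchar_t *)text;  // bytes reinterpreted: CJK-looking junk
// std::wstring good  = ansiToWide(text);       // actual conversion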
First of all, what type of file are you trying to store the text in? Normal txt files are saved in ANSI by default (so does Excel). So when you try to print a Unicode character to an ANSI file, it will print junk. Two ways of overcoming this problem are:
try to open the file in UTF-8 or UTF-16 mode and then write
convert Unicode to ANSI before writing to the file. If you are using Windows then MSDN documents particular APIs to do Unicode-to-ANSI conversion and vice versa (see the sketch below). If you are using Linux then Google for conversion of Unicode to ANSI; there are lots of solutions out there.
Hope this helps!!!
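For point 2 on Windows, the APIs in question are WideCharToMultiByte and MultiByteToWideChar; a rough sketch of the Unicode-to-ANSI direction:
#include <windows.h>
#include <string>

// UTF-16 -> ANSI (current code page); characters the code page lacks become '?'.
std::string wideToAnsi(const std::wstring &wide)
{
    if (wide.empty()) return std::string();
    int len = WideCharToMultiByte(CP_ACP, 0, wide.c_str(), (int)wide.size(),
                                  nullptr, 0, nullptr, nullptr);
    std::string out(len, '\0');
    WideCharToMultiByte(CP_ACP, 0, wide.c_str(), (int)wide.size(),
                        &out[0], len, nullptr, nullptr);
    return out;
}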
std::wstring does not have any locale/internationalisation support at all. It is just a container for storing sequences of wchar_t.
The problem with wchar_t is that its encoding is unspecified. It might be Unicode UTF-16, or Unicode UTF-32, or Shift-JIS, or something completely different. There is no way to tell from within a program.
You will have the best chances of getting things to work if you ensure that the encoding of your source code is the same as the encoding used by the locale under which the program will run.
But, the use of third-party libraries (like DirectX) can place additional constraints due to possible limitations in what encodings those libraries expect and support.
Bug solved: it turns out to be a casting problem (not a rendering problem as previously stated).
The bugged text is an intermediate product of some internal conversion process using wstringstream (which I forgot to mention); the code is as follows:
#include <sstream>
#include <string>
using namespace std;

wstringstream wss;
wstring text;
text.append(L"some text");
wss << timer->getTime();     // timer->getTime() is my own helper returning the current time
text.append(wss.str());
Right after this process the debugger shows the text as a bunch of random stuff, but later it somehow converts back so it's readable. The problem appears at the rendering stage using DirectX: I had left in a cast to wchar_t*, which results in the incorrect rendering.
old:
LPCWSTR lpcwstrText = (LPCWSTR)textToDraw->getText();
new:
LPCWSTR lpcwstrText = (*textToDraw->getText()).c_str();
Changing that solves the problem.
So this was caused by a bad cast, as some kind people pointed out when correcting my statement.

Convert Glyph indices to Unicode character

I am working on a printer driver sample. In this sample I am hooking the DrvTextOut() call. When the callback is called, I get the text as glyph indices. I want to convert these glyph indices to Unicode characters.
Please let me know how to convert them.
In general the answer is "you cannot". In PDFs, for instance, you might have an embedded character map that lets you look up the characters corresponding to the glyphs (e.g. if you used the cmap package with pdfLaTeX to make the PDF), but glyphs are private to a font, and there may be many glyphs that get used for the same character and vice versa, thanks to the magic of the GSUB tables.
If you're really desperate and have access to the font in question, you could try to build a character map yourself from the font file, but you better know which font you are currently looking at.
Edit: I think your question is tagged poorly; are you referring to this function? Perhaps the FONTOBJ structure that you already own exposes some sort of character map of the font; I wouldn't know.
If you mean you need to handle the case where STROBJ->flAccel has SO_GLYPHINDEX_TEXTOUT set, then see this answer here from Microsoft's Bobby Mattappally:
http://www.winvistatips.com/glyph-handles-drvtextout-t183048.html
There is not always a 1:1 mapping between glyph indices and character codes and vice versa. This is especially true with international character sets like Hebrew, Arabic etc.
So once you get this as SO_GLYPHINDEX_TEXTOUT, your driver should handle the glyph itself instead of trying to convert it back to Unicode.
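For the curious, a rough, non-authoritative sketch of what "handle the glyph itself" can look like inside a DrvTextOut hook (names as in the WDK's winddi.h; passing straight through to EngTextOut is just one possible rendering path):
#include <winddi.h>   // WDK header: SURFOBJ, STROBJ, FONTOBJ, SO_GLYPHINDEX_TEXTOUT, ...

BOOL APIENTRY MyDrvTextOut(SURFOBJ *pso, STROBJ *pstro, FONTOBJ *pfo,
                           CLIPOBJ *pco, RECTL *prclExtra, RECTL *prclOpaque,
                           BRUSHOBJ *pboFore, BRUSHOBJ *pboOpaque,
                           POINTL *pptlOrg, MIX mix)
{
    if (pstro->flAccel & SO_GLYPHINDEX_TEXTOUT)
    {
        // The STROBJ already carries glyph indices for pfo's font: record or
        // draw them as glyphs; do not try to map them back to Unicode.
    }
    // In this sketch, let GDI's rendering engine do the actual drawing.
    return EngTextOut(pso, pstro, pfo, pco, prclExtra, prclOpaque,
                      pboFore, pboOpaque, pptlOrg, mix);
}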