How to print a wide character string? - c++

The ISO-10646 code point value of Cyrillic lowercase a is 0x430, so I tried the following:
char u8str[] = u8"Cyrillic lowercase a is: \u0430.";
cout << u8str;
and
wchar_t wstr[] = L"Cyrillic lowercase a is: \u0430.";
wcout << wstr;
The Cyrillic lowercase a is successfully printed through u8str, but not wstr.
As for u8str, I've confirmed that its storage is initialized with the UTF-8 encoding of those characters (the Cyrillic lowercase a occupies two bytes with the values D0 B0). Everything looks right, and the Cyrillic letter gets printed correctly.
As for wstr, I suppose each wchar_t in the array is initialized with the numerical value of the character's encoding in the execution wide-character set. While I don't fully understand what the execution wide-character set is, I checked that the value stored for the Cyrillic lowercase a is 0x430. Still, the Cyrillic letter doesn't get printed correctly. (All other characters are fine.)
I'm a total novice with wchar_t, so my apologies if this question is too elementary. What went wrong in my attempt to print the Cyrillic letter using a wide character string? Is it an issue with the letter's representation in the execution wide-character set (and what is this character set, anyway)? Or is it an issue of incorrect usage of the iostream facilities?

From: http://www.cplusplus.com/reference/iostream/cout/
A program should not mix output operations on cout with output operations on wcout (or with other wide-oriented output operations on stdout): Once an output operation has been performed on either, the standard output stream acquires an orientation (either narrow or wide) that can only be safely changed by calling freopen on stdout.
So only use one of the two.
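For example, a minimal sketch that sticks to wide output only; it assumes the environment provides a locale whose narrow encoding can represent the character (e.g. a UTF-8 locale such as en_US.UTF-8), which is an assumption on my part:
#include <iostream>
#include <locale>

int main() {
    // Pick up the user's locale (assumed UTF-8 capable) and give it to wcout.
    std::locale::global(std::locale(""));
    std::wcout.imbue(std::locale());

    wchar_t wstr[] = L"Cyrillic lowercase a is: \u0430.";
    std::wcout << wstr << std::endl;   // wide output only; never mix with std::cout
}
On Windows the console side may need extra care as well (e.g. a matching code page or _setmode on stdout); the details are platform-specific.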

Related

Only showing one character while printing in C++

This is my code:
auto text = new wchar_t[WCHAR_MAX];
GetWindowTextW(hEdit, text, WCHAR_MAX);
SetWindowTextW(hWnd, text);
printf_s((const char *)text);
When printing the text, it only outputs one character to the console.
It is a WinAPI GUI and a console running together. It sets the window title successfully and gets the text successfully, but I have no idea why only one character is printed to the console...
You're performing a raw cast from a wide string to a narrow string. This conversion is never safe.
Wide strings are stored as two-byte code units on Windows. In your case the first character is ASCII, so its high byte is 0; because x86 is little-endian, that 0 byte sits right after the low byte in memory, printf interprets it as the string terminator, and the output stops after the first character.
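Two safer alternatives, sketched as drop-in replacements for the printf_s line above (Windows-only; <windows.h> is needed for WideCharToMultiByte, and the 512-byte buffer size is an arbitrary choice):
// Option 1: keep the text wide and use a wide-aware print function.
wprintf_s(L"%ls\n", text);

// Option 2: convert explicitly to the console's ANSI code page, then print narrow.
char narrow[512];
int n = WideCharToMultiByte(CP_ACP, 0, text, -1, narrow, sizeof(narrow), nullptr, nullptr);
if (n > 0)
    printf_s("%s\n", narrow);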

QTextBrowser not displaying non-English characters

I'm developing a Qt GUI application to parse a custom Windows binary file that stores Unicode text as wchar_t (default UTF-16 encoding). I've constructed a QString using QString::fromWCharArray and passed it to QTextBrowser::insertPlainText like this:
wchar_t *p = /* ... */; // pointer to a wchar_t string in the binary file
QString t = QString::fromWCharArray(p);
ui.logBrowser->insertPlainText(t);
The displayed text displays ASCII characters correctly, but non-ASCII characters are displayed as a rectangular box instead. I've followed the code in a debugger and p points to a valid wchar_t string and the constructed QString t is also a valid string matching the wchar_t string. The problem happens when printing it out on a QTextBrowser.
How do I fix this?
First of all, read the documentation: depending on the system, wchar_t will use a different encoding, UCS-4 or UTF-16. What is the size of wchar_t on your platform?
Secondly, there is an alternative API: try QString::fromUtf16.
Finally, what kind of characters are you using (Hebrew, Cyrillic, Japanese, ...)? Are you sure those characters are supported by the font you are using?
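A hedged sketch of those checks; fileData is a hypothetical pointer to the UTF-16 code units read from the file, and the const ushort* overload of QString::fromUtf16 is the classic Qt 5 one:
#include <QDebug>
#include <QString>

// How wide is wchar_t here? 2 bytes on Windows (UTF-16), 4 on most Unix systems (UCS-4).
qDebug() << "sizeof(wchar_t) =" << int(sizeof(wchar_t));

// If the file stores UTF-16 code units, read them as 16-bit values and bypass wchar_t entirely.
const ushort *fileData = /* hypothetical pointer to the UTF-16 data from the file */;
QString t = QString::fromUtf16(fileData);   // assumes a terminating 0; otherwise pass the length too
ui.logBrowser->insertPlainText(t);
If the characters still show up as boxes after that, the remaining suspect is the widget's font, which you can change with setFont() on the QTextBrowser.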

Why can a Windows console with a Chinese code page set show a UTF-16 encoded character?

Per MSDN:
"For the Microsoft C/C++ compiler, the source and execution character sets are both ASCII."
C++03
2.1 Phases of translation
"..Any source file character not in the basic source character set
(2.2) is replaced by the universal-character-name that designates that
character. (An implementation may use any internal encoding, so long
as an actual extended character encountered in the source file, and
the same extended character expressed in the source file as a
universal-character-name (i.e. using the \uXXXX notation), are handled
equivalently.)"
2.13.2 Character literals
"A universal-character-name is translated to the encoding, in the
execution character set, of the character named. If there is no such
encoding, the universal-character-name is translated to an
implementation-defined encoding."
To test which execution character set is used by MSVC++, I wrote the following code:
const wchar_t *str = L"中";
const unsigned char *p = reinterpret_cast<const unsigned char*>(str);
for (int i = 0; i < sizeof(L"中"); ++i)
{
printf ("%x ", *(p + i));
}
The output is 2d 4e 0 0, and 0x4e2d is the UTF-16 code unit for this Chinese character. So I conclude that UTF-16 is used as the (wide) execution character set by MSVC (my version: 2012, 4.5.50709).
Afterwards, I tried to print this character to a Windows console. Since the default locale used by the CRT is "C", I set the locale to the environment's setting, which is code page 936 for Simplified Chinese characters.
// use the execution environment locale setting, which is 936
const wchar_t *str = L"中";
char* locale = setlocale(LC_ALL, "");
wprintf (L"%ls\n", str);
Which outputs:
中
What I'm curious about is, how can a character encoded in UTF-16 be decoded by a Windows console whose locale(decoder) is set to non-UTF-16(MS code page 936)? How can that happen?
how can a character encoded in UTF-16 be decoded by a Windows console whose locale(decoder) is set to non-UTF-16
There are two ways you can write text to the console. The byte way, using the Win32 API WriteConsoleA, gives you characters from bytes interpreted using the console's code page ("ANSI"). The Unicode way, WriteConsoleW, receives a UTF-16LE string and writes the characters to the console directly without having to worry about what code page it is using.
The stdio function printf uses WriteConsoleA when the output is an interactive console. The wprintf function, from VS 2005 on at least, calls WriteConsoleW.
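A minimal sketch of the Unicode way, assuming a Windows build; it writes UTF-16 text straight to the console with WriteConsoleW, independent of the console's code page:
#include <windows.h>

int main() {
    const wchar_t msg[] = L"中\n";
    HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
    DWORD written = 0;
    // WriteConsoleW takes UTF-16LE text directly; no code-page conversion is involved.
    WriteConsoleW(out, msg, (DWORD)(sizeof(msg) / sizeof(msg[0]) - 1), &written, nullptr);
}
Note that this only works when stdout really is a console; if the output is redirected to a file you have to fall back to one of the byte-oriented APIs.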
I think I get it.
In Microsoft Visual C++ 2008 (probably 2005+), CRT functions such as wprintf and wcout are implemented so that, under the hood, they convert a wide string literal like L"中" (encoded in UTF-16) to match the current locale/code page setting. So what happens here is that L"中" is converted to the bytes D6 D0 in code page 936 for Simplified Chinese.
I was wrong that setlocale sets the console code page. It only sets the program's current locale/code page, which the CRT functions use during that conversion. To change the console code page, use the chcp command or the Win32 API SetConsoleOutputCP().
Since my console's default code page is 936, that character is shown correctly without any problem.
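Putting that model into a small sketch (assuming the user's code page really is 936, as in the question; on other machines substitute the appropriate code page):
#include <clocale>
#include <cwchar>
#include <windows.h>

int main() {
    // Let the CRT convert wide strings using the user's code page (936 here)...
    std::setlocale(LC_ALL, "");
    // ...and make sure the console interprets the resulting bytes the same way
    // (this is what running "chcp 936" beforehand would do).
    SetConsoleOutputCP(936);
    std::wprintf(L"%ls\n", L"中");
}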

Reading From A File Which Contains Unicode Characters

I have this huge file which contains Unicode strings at the beginning (the first ~10,000 characters or so).
I don't care about the Unicode part; the parts I'm interested in aren't Unicode. But whenever I try to read those parts I get '=', and if I load the entire file into a char array and write it to some temporary file (without altering the data) with ofstream, I get incorrect data: all I get is a text file filled with Í. If I remove the Unicode part manually, everything works fine. So it seems ifstream cannot deal with streams that contain Unicode data; if this assumption is true, is there any way to work with this file, even if it means introducing a new library to my project?
Thanks,
EDIT: Here's some sample code; the program reads from this file, which contains characters (some, not all) that can't be represented in ASCII.
ifstream inFile("somefile");
inFile.seekg(0,ios_base::end);
size_t size = inFile.tellg();
inFile.seekg(0,ios_base::beg);
char *book = new char[size];
inFile.read(book,size);
for (int i = 0; i < size; i++) {
cout << book[i] << " " << i << endl; //book[i] will always be '='
}
ofstream outFile("TEST.txt");
outFile.write(book,size);
outFile.close();
Keith Thompson's question is very important. Depending on which Unicode encoding is used, writing a small C routine that reads (and discards) the Unicode characters can be trivial or slightly more complex.
Supposing the encoding is UTF-8, you will have a problem determining when to stop discarding, because ASCII is a subset of UTF-8: any time you encounter an ASCII char you might be tempted to say "this is it, we're back in ASCII land", and the next char might still be outside the ASCII range.
So you need to read the file and determine where the last byte > 127 is. Anything after that is plain ASCII -- hopefully.
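A sketch of that scan, reusing the book buffer and size from the snippet above:
// Find the position just past the last byte outside the ASCII range.
size_t last = 0;
for (size_t i = 0; i < size; ++i)
    if (static_cast<unsigned char>(book[i]) > 127)
        last = i + 1;

// Everything from 'last' onward should be plain ASCII -- hopefully.
for (size_t i = last; i < size; ++i)
    cout << book[i];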
A text file is generally in just one encoding: UTF-8, UTF-16 (big- or little-endian), UTF-32 (big- or little-endian), ASCII, or some other ANSI code page. Mixing encodings is only possible in custom ways.
That said, you will have to read both the data you need and the data you don't in the same encoding. If you know the format is UTF-8 you could, depending on what you are going to do with the data, read the file as a binary file into a char buffer piece by piece. Then you could use API(s) like strnextc (on Windows; an equivalent API should be available on other platforms) to move character by character through the buffer. Once you reach the end, move the remaining bytes to the front of the buffer and load the rest of the buffer from the file.
In fact you could use the above approach for any encoding. For UTF-16 you could also try wifstream, provided the endianness of the file and of the platform you are running on are the same. You would also need to check whether your wifstream implementation handles a change in endianness and the BOM (byte order mark), the 2-byte sequence ("FE FF" or "FF FE") that is generally present at the beginning of such a file, let alone surrogate pairs.
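For the UTF-16 case, a hedged sketch using std::wifstream with std::codecvt_utf16 from <codecvt> (deprecated since C++17 but still widely available); std::consume_header makes it skip the BOM if one is present:
#include <codecvt>
#include <fstream>
#include <locale>
#include <string>

int main() {
    std::wifstream in("somefile", std::ios::binary);   // "somefile" as in the question
    in.imbue(std::locale(in.getloc(),
             new std::codecvt_utf16<wchar_t, 0x10FFFF, std::consume_header>));

    std::wstring line;
    while (std::getline(in, line)) {
        // process the wide characters in 'line'...
    }
}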

UCS-4 to multi-byte conversion on Solaris

Why does this code:
char a[10];
wchar_t w[10] = L"ä"; // German a Umlaut
int e = wcstombs(a, w, 10);
return e == -1?
I am using Oracle Solaris Studio 10 on Solaris 11. The locale is Latin-1, which contains the German Umlauts. All docs I have found indicate (to me) that the conversion should succeed.
If I do this:
char a[10] = "ä"; // German a Umlaut
wchar_t w[10];
int e = mbstowcs(w, a, 10);
e = wcstombs(a, w, 10);
there is no error, but the result is wrong (some variant of an accented uppercase A).
I also tried wstostr with similar result.
1) Verify that the correct value is getting into the wchar_t array: the compiler producing the wide character string literal has to convert L"ä" from the source code encoding to the wide execution character set.
2) Verify that the program's locale is correct. You can do this with printf("%s\n", setlocale(LC_ALL, NULL));
I suspect that the problem is 1), because for me, even if the program's locale isn't set correctly, I still get the expected output. To avoid problems with the source code encoding you can escape non-ASCII characters, e.g. L"\x00E4".
#include <cstdio>   // std::printf
#include <clocale>  // std::setlocale
#include <cstdlib>  // std::wcstombs

int main () {
    std::printf("%s\n", std::setlocale(LC_ALL, NULL)); // prints "C"

    char a[10];
    wchar_t w[10] = L"\x00E4"; // German a Umlaut
    std::printf("0x%04x\n", (unsigned)w[0]); // prints "0x00e4"

    std::setlocale(LC_ALL, "");
    std::printf("%s\n", std::setlocale(LC_ALL, NULL)); // prints something indicating the encoding is ISO 8859-1
    int e = std::wcstombs(a, w, 10);
    std::printf("%i 0x%02x\n", e, (unsigned char)a[0]); // prints "1 0xe4"
}
Character Sets in C and C++ Programs
In your source code you can use any character from the 'source character set', which is a superset of the 'basic source character set'. The compiler will convert characters in string and character literals from the source character set into the execution character set (or wide execution character set for wide string and character literals).
The issue is that the source character set is implementation dependent. Typically the compiler simply has to know what encoding you use for the source code and then it will accept any characters from that encoding. GCC has command line arguments for setting the source encoding, Visual Studio will assume that the source is in the user's codepage unless it detects one of the so-called Unicode signatures for UTF-8 or UTF-16, and Clang currently always uses UTF-8.
Once the compiler is using the right source character set for your code it will then produce string and character literals in the 'execution character set'. The execution character set is another superset of the basic source character set, and is also implementation dependent. GCC takes a command line argument to set the execution character set, VS uses the user's locale, and Clang uses UTF-8.
Because the source character set is implementation dependent, the portable way to write characters outside the basic set is to either use hex encoding to directly specify the numeric values to be used in execution, or (if you're not using C89/90) to use universal character names (UCNs), which are converted to the execution character set (or wide execution character set when used in wide string and character literals). UCNs look like \uNNNN or \UNNNNNNNN and specify the character from the Unicode character set with the code point value NNNN or NNNNNNNN. (Note that C99 and C++11 prohibit you from using surrogate code points, if you want a character from outside the BMP just directly write the character's value using \U.)
The source and execution character sets are determined at compile time and do not change based on the locale of the system running the program. That is, the locale the program runs under may use an encoding that does not match the execution character set. The wide execution character set, however, should correspond to the wide character encoding used by the locales the implementation supports.
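For example, both of these spell 'ä' (U+00E4) without depending on how the source file happens to be saved (the variable names are just for illustration):
// Hex escape: the numeric value is used directly in the execution character set
// (so this is 'ä' only if that set is Latin-1).
const char narrow_a[] = "\xE4";
// Universal character name: converted to whatever the wide execution character set uses for U+00E4.
const wchar_t wide_a[] = L"\u00E4";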
Solaris Studio's behavior
Oracle's compiler for Solaris has very simple behavior. For narrow string and character literals no particular source encoding is specified; bytes from the source code are simply used directly as the execution literal. This effectively means that the execution character set is the same as the encoding of the source files. For wide character literals the source bytes are converted using the system locale. This means that you have to save the source file using the locale's encoding in order to get correct wide literals.
I suspect that your source code is being saved in an encoding other than the one specified by the locale, so your compiler is failing to produce the correct wide string literal from L"ä". Your editor might be using UTF-8. You can check using the following program.
#include <cstdio>   // std::printf

int main () {
    wchar_t w[10] = L"ä"; // German a Umlaut
    std::printf("0x%04x 0x%04x\n", (unsigned)w[0], (unsigned)w[1]);
}
Since wcstombs can correctly convert the wide character 0x00E4 to the Latin-1 encoding of 'ä', you want the above to display 0x00E4 0x0000. If the source code encoding is UTF-8, you should instead see 0x00C3 0x00A4.
You may have to set the locale to understand German. Specifically you want the ctype facet.
Try this:
setlocale( LC_ALL, ".1252" );
or specifically this:
setlocale( LC_CTYPE, ".1252" );
You may have to search for a better codepage than ".1252". Good luck.
The code page examples above are for Windows. On Unix-like systems, try "de_DE" for the locale name.
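A hedged sketch for a Unix-like system; the exact locale name (here "de_DE.ISO8859-1") varies by platform, so check what locale -a lists:
#include <clocale>
#include <cstdio>
#include <cstdlib>

int main() {
    // Pick a locale whose narrow encoding can represent 'ä'.
    if (!std::setlocale(LC_ALL, "de_DE.ISO8859-1"))   // locale name is an assumption; adjust to your system
        std::setlocale(LC_ALL, "");                   // fall back to the environment's locale

    char a[10];
    wchar_t w[10] = L"\x00E4";                        // 'ä' written without relying on the source encoding
    int e = (int)std::wcstombs(a, w, sizeof(a));
    std::printf("%d 0x%02x\n", e, (unsigned char)a[0]);  // expect "1 0xe4" under a Latin-1 locale
}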