UCS-4 to multi-byte conversion on Solaris - c++

Why does this code:
char a[10];
wchar_t w[10] = L"ä"; // German a Umlaut
int e = wcstombs(a, w, 10);
return e == -1?
I am using Oracle Solaris Studio 10 on Solaris 11. The locale is Latin-1, which contains the German Umlauts. All docs I have found indicate (to me) that the conversion should succeed.
If I do this:
char a[10] = "ä"; // German a Umlaut
wchar_t w[10];
int e = mbstowcs(w, a, 10);
e = wcstombs(a, w, 10);
there is no error, but the result is wrong. (Some variant of upper A.)
I also tried wstostr with similar result.

1) verify that the correct value is getting into the wchar_t. The compiler producing the wide character string literal has to convert L"ä" from the source code encoding to the wide execution charset.
2) verify that the program's locale is correct. You can do this with printf("%s\n", setlocale(LC_ALL, NULL));
I suspect that the problem is 1), because even when the program's locale isn't set correctly I still get the expected output. To avoid problems with the source code encoding you can escape non-ASCII characters, e.g. L"\x00E4".
#include <cstdio>   // std::printf
#include <cstdlib>  // std::wcstombs
#include <clocale>  // std::setlocale

int main () {
    std::printf("%s\n", std::setlocale(LC_ALL, NULL)); // prints "C"

    char a[10];
    wchar_t w[10] = L"\x00E4"; // German a Umlaut
    std::printf("0x%04x\n", (unsigned)w[0]); // prints "0x00e4"

    std::setlocale(LC_ALL, "");
    std::printf("%s\n", std::setlocale(LC_ALL, NULL)); // prints something that indicates the encoding is ISO 8859-1
    int e = std::wcstombs(a, w, 10);
    std::printf("%i 0x%02x\n", e, (unsigned char)a[0]); // prints "1 0xe4"
}
Character Sets in C and C++ Programs
In your source code you can use any character from the 'source character set', which is a superset of the 'basic source character set'. The compiler will convert characters in string and character literals from the source character set into the execution character set (or wide execution character set for wide string and character literals).
The issue is that the source character set is implementation dependent. Typically the compiler simply has to know what encoding you use for the source code and then it will accept any characters from that encoding. GCC has command line arguments for setting the source encoding, Visual Studio will assume that the source is in the user's codepage unless it detects one of the so-called Unicode signatures for UTF-8 or UTF-16, and Clang currently always uses UTF-8.
Once the compiler is using the right source character set for your code it will then produce string and character literals in the 'execution character set'. The execution character set is another superset of the basic source character set, and is also implementation dependent. GCC takes a command line argument to set the execution character set, VS uses the user's locale, and Clang uses UTF-8.
Because the source character set is implementation dependent, the portable way to write characters outside the basic set is to either use hex encoding to directly specify the numeric values to be used in execution, or (if you're not using C89/90) to use universal character names (UCNs), which are converted to the execution character set (or wide execution character set when used in wide string and character literals). UCNs look like \uNNNN or \UNNNNNNNN and specify the character from the Unicode character set with the code point value NNNN or NNNNNNNN. (Note that C99 and C++11 prohibit you from using surrogate code points, if you want a character from outside the BMP just directly write the character's value using \U.)
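For instance, a minimal sketch comparing the three spellings (assuming an implementation whose wide execution character set uses Unicode code point values, as on the systems discussed here):

#include <cstdio>

int main() {
    // Three ways to write a-umlaut in a wide literal:
    wchar_t hex[] = L"\x00E4";  // hex escape: directly specifies the execution value
    wchar_t ucn[] = L"\u00E4";  // universal character name: Unicode code point U+00E4
    wchar_t lit[] = L"ä";       // raw character: depends on the source encoding
    std::printf("0x%04x 0x%04x 0x%04x\n",
                (unsigned)hex[0], (unsigned)ucn[0], (unsigned)lit[0]);
    // The first two should print 0x00e4 under the stated assumption; only the
    // third can silently change if the compiler misreads the source encoding.
}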
The source and execution character sets are determined at compile time and do not change based on the locale of the system running the program. That is, the program's locale may use an encoding that does not match the execution character set. The wide execution character set should correspond to the wide character encoding used by supported locales, however.
Solaris Studio's behavior
Oracle's compiler for Solaris has very simple behavior. For narrow string and character literals no particular source encoding is specified, bytes from the source code are simply used directly as the execution literal. This effectively means that the execution character set is the same as the encoding of the source files. For wide character literals the source bytes are converted using the system locale. This means that you have to save the source file using the locale encoding in order to get correct wide literals.
I suspect that your source code is being saved in an encoding other than the one specified by the locale, so the compiler is failing to produce the correct wide string literal from L"ä". Your editor might be using UTF-8. You can check using the following program.
#include <cstdio>  // std::printf

int main () {
    wchar_t w[10] = L"ä"; // German a Umlaut
    std::printf("0x%04x 0x%04x\n", (unsigned)w[0], (unsigned)w[1]);
}
Since wcstombs can correctly convert the wide character 0x00E4 to the latin-1 encoding of 'ä' you want the above to display 0x00E4 0x0000. If the source code encoding is UTF-8 then you should see 0x00C3 0x00A4.

You may have to set the locale to understand German. Specifically you want the ctype facet.
Try this:
setlocale( LC_ALL, ".1252" );
or specifically this:
setlocale( LC_CTYPE, ".1252" );
You may have to search for a better codepage than ".1252". Good luck.
The code page examples above are for Windows. On Unixy systems try "de_DE" as the locale name.
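A minimal sketch combining the two suggestions; the locale name "de_DE.ISO8859-1" is only an assumption, so check locale -a (or the Windows code page names above) for what is actually installed:

#include <cstdio>
#include <cstdlib>
#include <clocale>

int main() {
    // Assumed locale name; fall back to the environment's locale if it is not installed.
    if (!std::setlocale(LC_CTYPE, "de_DE.ISO8859-1"))
        std::setlocale(LC_CTYPE, "");
    char a[10];
    wchar_t w[10] = L"\x00E4";                  // a-umlaut by value, independent of source encoding
    int e = std::wcstombs(a, w, sizeof a);
    std::printf("%i 0x%02x\n", e, (unsigned char)a[0]); // expect "1 0xe4" in a Latin-1 locale
}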

Related

Printing em-dash to console window using printf? [duplicate]

This question already has answers here:
Is it possible to cout an EM DASH on Linux and Windows? [duplicate]
A simple problem: I'm writing a chatroom program in C++ (but it's primarily C-style) for a class, and I'm trying to print, “#help — display a list of commands...” to the output window. While I could use two hyphens (--) to achieve roughly the same effect, I'd rather use an em-dash (—). printf(), however, doesn't seem to support printing em-dashes. Instead, the console just prints out the character, ù, in its place, despite the fact that entering em-dashes directly into the prompt works fine.
How do I get this simple Unicode character to show up?
Looking at Windows alt key codes, I find it interesting how alt+0151 is "—" and alt+151 is "ù". Is this related to my problem, or a simple coincidence?
Windows is a Unicode (UTF-16) system, and the console is Unicode as well. If you want to print Unicode text, the most effective way is to use WriteConsoleW:
#include <windows.h>
#include <wchar.h>

BOOL PrintString(PCWSTR psz)
{
    DWORD n;
    // Write the UTF-16 string directly to the console, bypassing any code page conversion.
    return WriteConsoleW(GetStdHandle(STD_OUTPUT_HANDLE), psz, (ULONG)wcslen(psz), &n, 0);
}
PrintString(L"—");
In this case your binary file will contain the wide character — (a single two-byte unit with the value 0x2014) and the console prints it as is.
If an ANSI (multi-byte) function such as WriteConsoleA or WriteFile is used for console output, the console first translates the multi-byte string to Unicode via MultiByteToWideChar, using the code page returned by GetConsoleOutputCP. This translation is where problems can arise if you use characters above 0x80.
First of all, the compiler may give you the warning: "The file contains a character that cannot be represented in the current code page (number). Save the file in Unicode format to prevent data loss." (C4819). But even after you save the source file in Unicode format, the following can happen:
wprintf(L"ù"); // no warning
printf("ù"); //warning C4566
This is because L"ù" is saved as a wide-character string (as is) in the binary file; here everything is fine and there is no problem or warning. But "ù" is saved as a char string (a single-byte string). The compiler needs to convert the wide string "ù" from the source file into a multi-byte string in the binary (the .obj file, from which the linker then creates the PE), and for this it uses WideCharToMultiByte with CP_ACP (the current system default Windows ANSI code page).
So what happens if you call printf("ù")?
First, the Unicode string "ù" is converted to multi-byte by WideCharToMultiByte(CP_ACP, ...). This happens at compile time, and the resulting multi-byte string is saved in the binary file.
Then, at run time, the console converts your multi-byte string back to wide characters by MultiByteToWideChar(GetConsoleOutputCP(), ...) and prints that string.
So you get two conversions: Unicode -> CP_ACP -> multi-byte -> GetConsoleOutputCP() -> Unicode.
By default GetConsoleOutputCP() == CP_OEMCP != CP_ACP, even if you run the program on the computer where you compiled it (and especially on another computer with a different CP_OEMCP).
The problem is the incompatible conversions: different code pages are used. But even if you change the console code page to your CP_ACP, the conversion can still translate some characters incorrectly.
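A quick way to see this mismatch on your own machine (a small sketch, Windows only):

#include <windows.h>
#include <stdio.h>

int main()
{
    // Typically prints two different numbers, e.g. 1252 for the ANSI code page
    // and 437 or 850 for the console (OEM) code page.
    printf("CP_ACP (GetACP):              %u\n", GetACP());
    printf("console (GetConsoleOutputCP): %u\n", GetConsoleOutputCP());
    return 0;
}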
As for the CRT API wprintf, the situation is this:
wprintf first converts the given string from Unicode to multi-byte using its internal current locale (note that the CRT locale is independent of and different from the console locale), and then calls WriteFile with the multi-byte string. The console converts this multi-byte string back to Unicode:
Unicode -> current CRT locale -> multi-byte -> GetConsoleOutputCP() -> Unicode
So to use wprintf we first need to set the current CRT locale to GetConsoleOutputCP():
char sz[16];
sprintf(sz, ".%u", GetConsoleOutputCP());
setlocale(LC_ALL, sz);
wprintf(L"—");
But even then, on my machine a plain hyphen is displayed on screen instead of —; calling PrintString(L"—") (which uses WriteConsoleW) right after this produces "-—".
So the only reliable way to print arbitrary Unicode characters (those supported by Windows) is to use the WriteConsoleW API.
After going through the comments, I've found eryksun's solution to be the simplest (...and the most comprehensible):
#include <stdio.h>
#include <io.h>
#include <fcntl.h>

int main()
{
    //other stuff
    // Put stdout into UTF-16 text mode so wide output is written as Unicode.
    _setmode(_fileno(stdout), _O_U16TEXT);
    wprintf(L"#help — display a list of commands...");
}
Portability isn't a concern of mine, and this solves my initial problem—no more ù—my beloved em-dash is on display.
I acknowledge this question is essentially a duplicate of the one linked by sata300.de. Albeit, with printf in the place of cout, and unnecessary ramblings in the place of relevant information.

c++ Lithuanian language, how to get more than ascii

I am trying to use Lithuanian in my C++ application, but every attempt is unsuccessful.
A multi-byte character set is used. I have tried everything I could think of; I am new to C++ and have never tried to do anything in Lithuanian before.
I tried every setlocale variant: setlocale(LC_ALL, "en_US.utf8"); setlocale(LC_ALL, "Lithuanian"); ...
I researched for 2 hours and didn't find proper examples or a solution.
I have an average-sized project which needs Lithuanian translation from a database, and it can't understand most of "ĄČĘĖĮŠŲŪąčęėįšųū".
Compiler - "Visual studio 2013"
Database - sqlite3.
I can't even get simple strings (defined by myself) to work and output as Lithuanian in a Win32 application.
In Windows use wide character strings (UTF-16 encoding [1], wchar_t type) for internal text handling, and preferably UTF-8 for external text files and networking.
Note that Visual C++ will translate narrow text literals from the source encoding to Windows ANSI, which is a platform-dependent usually single-byte encoding (you can check which one via the GetACP API function), i.e., Visual C++ has the platform-specific Windows ANSI as its narrow C++ execution character set.
But also do note that for an app restricted to non-Windows platforms, i.e. Unix-land, it makes practical sense to do everything in UTF-8, based on char type.
For the database communication you may need to translate to and from the program's internal text representation.
This depends on what the database interface requires, which is not stated.
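For instance, sqlite3's char-based API returns UTF-8, so on Windows a small conversion helper (the name utf8_to_wide is just for illustration) might look like this:

#include <windows.h>
#include <string>

// Illustrative helper: convert UTF-8 (e.g. from sqlite3_column_text) to UTF-16 for Windows APIs.
std::wstring utf8_to_wide(const std::string& s)
{
    if (s.empty()) return std::wstring();
    int n = MultiByteToWideChar(CP_UTF8, 0, s.data(), (int)s.size(), nullptr, 0);
    std::wstring w(n, L'\0');
    MultiByteToWideChar(CP_UTF8, 0, s.data(), (int)s.size(), &w[0], n);
    return w;
}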
Example for console output in Windows:
#include <iostream>
#include <fcntl.h>
#include <io.h>
#include <cstdio>   // _fileno

auto main() -> int
{
    // Put stdout into wide (UTF-16) text mode before any wide output.
    _setmode( _fileno( stdout ), _O_WTEXT );
    using namespace std;
    wcout << L"ĄČĘĖĮŠŲŪąčęėįšųū" << endl;
}
To make this compile by default with g++, the source code encoding needs to be UTF-8. Then, to make it produce correct results with Visual C++, the source code encoding needs to be UTF-8 with BOM, which happily is also accepted by modern versions of g++. Otherwise the Visual C++ compiler will assume the Windows ANSI encoding and produce an incorrect UTF-16 string.
Not coincidentally this is the default meaning of UTF-8 in Windows, e.g. in the Notepad editor, namely UTF-8 with BOM.
But note that while in Windows the problem is that the main system compiler requires a BOM for UTF-8, in Unix-land the problem is the opposite: many old tools can't handle the BOM (for example, even MinGW g++ 4.9.1 isn't entirely up to speed; it sometimes includes the BOM bytes, incorrectly interpreted, in its error messages).
[1] On other platforms wide character text can be encoded in other ways, e.g. with UTF-32. In fact the Windows convention is in direct conflict with the C and C++ standards which require that a single wchar_t should be able to encode any character in the extended character set. However, this requirement was, AFAIK, imposed after Windows adopted UTF-16, so the fault probably lies with the politics of the C and C++ standardization process, not yet another Microsoft'ism.
Complexity of internationalisation
There are several related but distinct topics, and mismatches between them make a trial-and-error approach very tedious:
Type used for storing strings and chars: Windows uses wchar_t by default, but for most APIs you also have char-based equivalent functions.
Character set encoding: this defines how the chars stored in the type are to be understood, for example Unicode (UTF-8, UTF-16, UTF-32), 7-bit ASCII, or an 8-bit ANSI code page. In Windows, by default it is UTF-16 for wchar_t and ANSI/Windows for char.
Locale: defines, among other things, the character set assumptions used when processing strings. This permits language-independent functions like isalpha(i, loc), islower(i, loc), ispunct(i, loc) to find out whether a given character is alphabetic, a lower-case letter, or punctuation, for example to break user text down into words. C++ offers portable functions here (see the sketch after this list).
Output code page or font used to show a character to the user. This assumes the font shows the characters using the same character set that is used internally by the code.
Source code encoding. For example your editor could assume an ANSI encoding, with the Windows 1252 character set.
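A minimal sketch of those portable locale functions (the locale name is an assumption and differs per platform, e.g. "Lithuanian" on Windows or "lt_LT.UTF-8" on Linux):

#include <iostream>
#include <locale>

int main()
{
    // Assumed locale name; std::locale("") would pick up the environment's locale instead.
    std::locale loc("lt_LT.UTF-8");
    wchar_t ch = L'\u010C';                            // Č
    std::cout << std::boolalpha
              << std::isalpha(ch, loc) << ' '          // true in a Lithuanian-aware locale
              << std::islower(ch, loc) << '\n';        // false: it is an upper-case letter
}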
Most typical errors
Problem number one is Win32 console output, as Unicode is not well supported by the console. But this is not your problem here.
Another cause of mismatch is the encoding of your text editor. It might not be Unicode, but use a Windows code page. In this case, you type "Č" and the editor displays it as such, but the editor might use the Windows 1257 encoding for Lithuanian and store 0xC8 in the file. If you then display this literal with a Windows Unicode function, it will interpret 0xC8 as "Latin E with grave accent" and print something else, as the right Unicode code point for "Č" is 0x010C!
It can be even worse: the compiler may have its own assumption about the character set encoding used and convert your literals into Unicode using false assumptions (it happened to me when I used some exotic code generation switch).
What to do?
To figure out what goes wrong, proceed by elimination:
First, for plain Windows, use the native Unicode setting. OK, it's UTF-16 and wchar_t instead of UTF-8 and thus comes with some drawbacks, but it's native and well supported.
Then use explicit Unicode coding in literals, for example TEXT("\u010C") instead of TEXT("Č"). This avoids editor and compiler mismatches.
If it's still not the right character, make sure that your font FULLY supports Unicode. The default system font, for instance, doesn't, while most others do. You can easily check with the Windows font panel (WindowKey+R, fonts, then click on "search char") to display the character table of your font.
Set fonts explicitly in your code.
For example, a very tiny experiment:
...
case WM_PAINT:
{
    hdc = BeginPaint(hWnd, &ps);
    auto hf = CreateFont(24, 0, 0, 0, 0, TRUE, 0, 0, 0, 0, 0, 0, 0, L"Times New Roman");
    auto hfOld = SelectObject(hdc, hf); // if you comment this out, € and Č won't display
    TextOut(hdc, 50, 50, L"Test with éç € \u010C special chars", 30);
    SelectObject(hdc, hfOld);
    DeleteObject(hf);
    EndPaint(hWnd, &ps);
    break;
}

How to print wide character string?

The ISO-10646 code point value of Cyrillic lowercase a is 0x430, so I tried the following:
char u8str[] = u8"Cyrillic lowercase a is: \u0430.";
cout << u8str;
and
wchar_t wstr[] = L"Cyrillic lowercase a is: \u0430.";
wcout << wstr;
The Cyrillic lowercase a is successfully printed through u8str, but not wstr.
As for u8str, I've confirmed that its storage is initialized with the utf-8 encoding values of those characters (the Cyrillic lower case a occupies 2 bytes with the value D0 B0). All seems alright. The Cyrillic got printed correctly.
As for wstr, I suppose that each wchar_t in the wstr array is initialized with the numerical value of the encoding of the character in the execution wide-character set. While I don't fully understand what the execution wide-character set is, I checked that the value of the Cyrillic lowercase a stored in the array is 0x430. Anyway, the Cyrillic letter doesn't get printed correctly. (The other characters are all OK.)
I'm a total novice with wchar_t, so my apologies if this question is too elementary. What went wrong in my attempt at printing the Cyrillic letter using a wide character string? Is it an issue of the letter's representation in the execution wide-character set (what is this character set, after all)? Or is it an issue of incorrect usage of the iostream facilities?
From: http://www.cplusplus.com/reference/iostream/cout/
A program should not mix output operations on cout with output operations on wcout (or with other wide-oriented output operations on stdout): Once an output operation has been performed on either, the standard output stream acquires an orientation (either narrow or wide) that can only be safely changed by calling freopen on stdout.
So only use one of the two.
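A sketch of the wide-only variant would then be as follows; the extra locale setup is an assumption, since the default "C" locale often cannot represent Cyrillic in wcout's internal wide-to-narrow conversion:

#include <iostream>
#include <locale>

int main()
{
    // Use only wide-oriented output on stdout; never mix it with std::cout.
    std::locale::global(std::locale(""));   // take the encoding from the environment
    std::wcout.imbue(std::locale());
    std::wcout << L"Cyrillic lowercase a is: \u0430.\n";
}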

Why a Windows console with Chinese code page set can show a UTF-16 encoded character?

Per MSDN:
"For the Microsoft C/C++ compiler, the source and execution character sets are both ASCII."
C++03
2.1 Phases of translation
"..Any source file character not in the basic source character set (2.2) is replaced by the universal-character-name that designates that character. (An implementation may use any internal encoding, so long as an actual extended character encountered in the source file, and the same extended character expressed in the source file as a universal-character-name (i.e. using the \uXXXX notation), are handled equivalently.)"
2.13.2 Character literals
"A universal-character-name is translated to the encoding, in the execution character set, of the character named. If there is no such encoding, the universal-character-name is translated to an implementation-defined encoding."
To test which execution character set is used by MSVC++, I wrote the following code:
const wchar_t *str = L"中";
const unsigned char *p = reinterpret_cast<const unsigned char*>(str);
for (int i = 0; i < sizeof(L"中"); ++i)
{
    printf("%x ", *(p + i));
}
The output is 2d 4e 0 0, and 0x4e2d is the UTF-16 encoding of this Chinese character. So I conclude that UTF-16 is used as the wide execution character set by MSVC (my version: 2012 4.5.50709).
After that, I tried to print this character to a Windows console. Since the default locale used by the console is "C", I set the locale to code page 936, which represents simplified Chinese characters.
// use the execution environment locale setting, which is 936
const wchar_t *str = L"中";
char* locale = setlocale(LC_ALL, "");
wprintf (L"%ls\n", str);
Which outputs:
中
What I'm curious about is, how can a character encoded in UTF-16 be decoded by a Windows console whose locale(decoder) is set to non-UTF-16(MS code page 936)? How can that happen?
how can a character encoded in UTF-16 be decoded by a Windows console whose locale(decoder) is set to non-UTF-16
There are two ways you can write text to the console. The byte way, using the Win32 API WriteConsoleA, gives you characters from bytes interpreted using the console's code page ("ANSI"). The Unicode way, WriteConsoleW, receives a UTF-16LE string and writes the characters to the console directly without having to worry about what code page it is using.
The stdio function printf uses WriteConsoleA when the output is an interactive console. The wprintf function, from VS 2005 on at least, calls WriteConsoleW.
I think I get it.
In Microsoft Visual C++ 2008 (probably 2005+), CRT functions such as wprintf and wcout are implemented so that, under the hood, they convert a wide string like L"中", encoded in UTF-16, to match the current locale/code page setting. So what happens here is that L"中" is converted to the bytes D6 D0 in code page 936 for simplified Chinese.
I was wrong that setlocale sets the console code page. It just sets the current program code page, which is used by CRT functions during the "conversion". To change the console code page, use the chcp command or the Win32 API SetConsoleOutputCP().
Since my console's default code page is 936, that character can be shown correctly without a problem.
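A sketch of the run-time equivalent of chcp, assuming code page 936 is installed on the machine:

#include <windows.h>
#include <locale.h>
#include <stdio.h>

int main()
{
    // Make the console decode with code page 936 and let the CRT encode to the same
    // code page, so wprintf's wide-to-multibyte conversion matches the console's decoder.
    SetConsoleOutputCP(936);
    setlocale(LC_ALL, ".936");
    wprintf(L"%ls\n", L"中");   // the source file must be saved in an encoding the compiler understands
    return 0;
}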

Printing Copyright Symbol in Visual Studio 2010

I have been struggling to print the copyright symbol in Windows using Visual Studio. I understand that 0xA9 is the ASCII code for the copyright symbol, and it works on non-Windows platforms. But on Windows I can't print the copyright symbol using the same code.
#include "iostream.h"
using namespace std;
int main(int argc, char * argv[])
{
    cout << (char)0xA9 << " Copyright symbol" << endl;
    return 0;
}
Output on Linux/HP-UX and AIX: © Copyright symbol
Output on Windows: ⌐ Copyright symbol
I am new to Windows; can someone help me out?
As Basile points out, the copyright symbol (©) is not an ASCII character. In other words, it is not one of the characters included in the 7-bit ASCII character set.
You need to switch to a Unicode encoding in order to use "special" characters like this that extend beyond the range of 7-bit ASCII. That's not difficult in Windows, it just requires that you use wide characters (wchar_t) instead of narrow characters (char). Unlike most Unix-based systems that implement Unicode support using UTF-8 (which uses the regular char data type), Windows does not have built-in support for UTF-8. It uses UTF-16 instead, which requires that you use the larger wchar_t type.
Conveniently, the C++ standard library also supports wide character strings; you just need to use the appropriate versions of the classes. The ones you want have a w prefixed to their names. So, rewriting your code to use wide (Unicode) characters on Windows, it would look like this:
#include <iostream> // (standard C++ headers should be in angle brackets)
int main(int argc, char * argv[])
{
    std::wcout << (wchar_t)0xA9 << " Copyright symbol" << std::endl;
    return 0;
}
The reason you're getting that strange ⌐ character when you try the original code on Windows is that that character is what the value 0xA9 maps to in your default Windows character set. You see, the char type supports 8-bit values, but I said above that the ASCII character set only defines 7 bits worth of characters. That extra bit is used on Windows to define some additional useful characters.
There are two different sets of extended narrow (non-Unicode) character sets, one is called the OEM character set and the other is (for historical reasons) called the ANSI character set. Generally, the Command Prompt uses the OEM character set, which fills most of the upper range with characters for drawing lines, boxes, and other simulated graphics in a text-based environment. Legacy, non-Unicode Windows applications generally use the ANSI character set, which is specific to your localized version of Windows and fills the upper range with characters needed to display all of the letters/symbols in your language.
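To see the difference concretely, here is a small sketch (Windows only) that interprets the byte 0xA9 under both the ANSI code page 1252 and the OEM code page 437; the expected values are an assumption based on those code page tables:

#include <windows.h>
#include <stdio.h>

int main()
{
    unsigned char byte = 0xA9;
    wchar_t ansi = 0, oem = 0;
    // Decode the same byte under the Windows ANSI code page 1252 and the OEM code page 437.
    MultiByteToWideChar(1252, 0, (const char*)&byte, 1, &ansi, 1);
    MultiByteToWideChar(437,  0, (const char*)&byte, 1, &oem,  1);
    // Expected: U+00A9 (the copyright sign) and U+2310 (the reversed-not sign from the question).
    printf("0xA9 in CP1252 -> U+%04X, in CP437 -> U+%04X\n", (unsigned)ansi, (unsigned)oem);
    return 0;
}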
If it sounds complicated, that's because it is. That's why everyone has forgotten all of this stuff and uses exclusively Unicode on Windows. I strongly recommend that path to you as well. :-)
Edit: Nuts, I forgot it was more complicated than this. Changing your code to output wide characters may not be sufficient. The Windows Command Prompt is backwards-compatible in all sorts of broken ways, severely hobbling its support for Unicode characters.
By default, it uses raster fonts which probably don't even provide symbols for most of the Unicode characters (the copyright symbol is likely common enough to be an exception). You need to change the font used by the Command Prompt to something else like Lucida Console or Consolas to ensure that it works correctly. Fortunately, you can set the defaults for all Command Prompt windows. Unfortunately, this is a per-user setting.
Additionally, the Command Prompt still uses the active code page, so all of that stuff I was explaining above is still relevant and you can't forget about it. You can change the particular code page that it uses with the chcp xxxx command, where xxxx is the number of the code page you wish to use. Unfortunately, this applies only to the current console session and must be reset each time. Not a good solution for an application program that needs to output Unicode characters.
More information on these problems and how to output Unicode strings on the Command Prompt is available in the answers to these questions:
What encoding/code page is cmd.exe using?
Can command prompt display unicode characters?
Notice that 0xA9 is not ASCII (which has 7-bit characters, limited to the 0 - 0x7F range). It could be ISO/IEC 8859-1. Many current systems (including most Linux terminals today) use UTF-8 these days, in which the copyright glyph is encoded as two bytes, so you would write "\302\251" or "\xc2\xa9" in your C or C++ source. So your program doesn't display a copyright sign in my Linux xfce4-terminal, which uses UTF-8.
Some Windows machines use different encoding systems.
I would set up your system (be it Linux or Windows) to use the UTF-8 character encoding on its terminal, if possible (or use UTF-16 wide chars). Read about UTF-8 everywhere.
The conventional ASCII evocation of copyright is very commonly (C), precisely because the ASCII encoding does not have any copyright glyph.
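Putting those byte escapes to use, a tiny sketch that assumes the terminal interprets output as UTF-8 (typical on modern Linux):

#include <stdio.h>

int main()
{
    // "\xc2\xa9" is the UTF-8 encoding of the copyright sign; this displays
    // correctly only on a terminal that decodes output as UTF-8.
    printf("\xc2\xa9 Copyright symbol\n");
    return 0;
}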
Taken and adapted from here:
#if defined(WIN32)
#include <windows.h>
#endif
#include <stdio.h>
#include <wchar.h>   // wcslen

void print_copyright_hint() {
    printf("Copyright ");
#if defined(WIN32)
    // Write the © character as UTF-16 directly to the console, bypassing the code page.
    auto copyright = L"©";
    auto handle = GetStdHandle(STD_OUTPUT_HANDLE);
    WriteConsoleW(handle, copyright, static_cast<DWORD>(wcslen(copyright)), nullptr, nullptr);
#else
    printf("©");
#endif
    printf(" my Company");
}
You can use Alt+0169.
Forgive me if I am wrong.