The Standard (N3797, 3.9.1 [basic.fundamental]) says:
Type wchar_t is a distinct type whose values can represent distinct
codes for all members of the largest extended character set specified
among the supported locales (22.3.1).
I can't imagine how we can use that type. Could you give an example where plain char doesn't work? I thought it might be helpful when using two different languages simultaneously. But plain char is OK even for Cyrillic and Latin together:
#include <iostream>
char cp[] = "LATINICA_КИРИЛЛИЦА";
int main()
{
    std::cout << cp; //LATINICA_КИРИЛЛИЦА
}
DEMO
In your example, you are using Unicode. Indeed, you could type not only Latin or Cyrillic but also Thai, Arabic, or Chinese, in other words any Unicode symbol. Your example with some more symbols: link
The key point is the encoding. In your example you are using char to store Unicode symbols encoded in UTF-8. See this for more details. The main advantage of UTF-8 is backward compatibility with ASCII. The main disadvantage of UTF-8 is that symbols have variable length.
There are other encodings for Unicode symbols. The most common (besides UTF-8) are UTF-16 and UTF-32. You should be aware that UTF-16 is still a variable-length encoding, but its code unit is now 16 bits. UTF-32 is a fixed-length encoding.
The type wchar_t is usually used to store symbols in UTF-16 or UTF-32 encoding depending on the system.
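For illustration, a minimal sketch (assuming the source file is saved as UTF-8 and the compiler accepts it): the number of wchar_t units matches the number of characters here, but how wide each unit is depends on the platform.
#include <iostream>
int main()
{
    wchar_t wp[] = L"LATINICA_КИРИЛЛИЦА";
    std::cout << sizeof(wchar_t) << std::endl;              // 2 on Windows, 4 on most Unix-like systems
    std::cout << sizeof(wp) / sizeof(wchar_t) << std::endl; // 19: one code unit per character plus the terminator
}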
It depends what encoding you decide to use. Any single UTF-8 value can be held in an 8-bit char (though one Unicode code-point can take several char values to represent). It's impossible to tell from your question, but I'd guess that your editor and compiler are treating your strings as UTF-8 and that's fine if that's what you want.
Other common encodings include UTF-16, UTF-32, UCS-2 and UCS-4, which have 2-byte, 4-byte, 2-byte and 4-byte values respectively. You can't store these values in an 8-bit char.
The decision of what encoding to use for any given purpose is not straightforward. The main considerations are:
What other systems does your code have to interface to and what encoding do they use?
What libraries do you want to use and what encodings do they use? (eg xerces-c uses UTF-16 throughout)
The tradeoff between complexity and storage size. UTF-32 and UCS-4 have the useful property that every possible displayed character is represented by one value, so you can tell the length of the string from how much memory it takes up without having to look at the values in it (though this assumes that you consider combining diacritic marks as separate characters). However, if all you're representing is ASCII, they take up four times as much memory as UTF-8.
I'd suggest Joel Spolsky's essay on Unicode as a good read.
wchar_t has its own problems, though. The standard didn't specify how big a wchar_t is, so, of course, different compilers have picked different sizes; VC++ uses two bytes and gcc (and most others) uses four bytes. Wide-character literals, such as L"Hello, world", are similarly confused, being UTF-16 strings in VC++ and UCS-4 in gcc.
To try to clean this up, C++11 introduced two new character types:
char16_t is a character type guaranteed to be at least 16 bits, with the literal form u"Hello, world".
char32_t is a character type guaranteed to be at least 32 bits, with the literal form U"Hello, world".
However, these have problems of their own; in particular, <iostream> doesn't provide console streams that can handle them (ie there is no u16cout or u32cerr).
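A minimal sketch of the new literal forms; since there are no u16/u32 console streams, the example only inspects sizes.
#include <iostream>
int main()
{
    char16_t s16[] = u"Hello, world";       // UTF-16 code units
    char32_t s32[] = U"Hello, world";       // UTF-32 code units
    std::cout << sizeof(s16) << std::endl;  // 26: 13 units of 2 bytes (12 characters + terminator)
    std::cout << sizeof(s32) << std::endl;  // 52: 13 units of 4 bytes
}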
To be more specific, I'll provide a normative reference related to the question. N3797, 8.5.2/1 [dcl.init.string], says:
An array of narrow character type (3.9.1), char16_t array, char32_t
array, or wchar_t array can be initialized by a narrow string literal,
char16_t string literal, char32_t string literal, or wide string
literal, respectively, or by an appropriately-typed string literal
enclosed in braces (2.14.5). Successive characters of the value of the
string literal initialize the elements of the array.
8.5.2/2:
There shall not be more initializers than there are array elements.
In the case of
#include <iostream>
char cp[] = "LATINICA_КИРИЛЛИЦА";
int main()
{
    std::cout << sizeof(cp) << std::endl; //28
}
DEMO
For some languages, like English, it's not necessary to use wchar_t, but for some languages, like Chinese, you'd better use wchar_t.
Although char is able to store such a string, like char p[] = "你好",
it may show up as garbled text when you run your program on a different computer, especially a computer using a different system language.
If you use wchar_t, you can avoid this.
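A minimal sketch of the storage difference (assuming the source file is saved as UTF-8 and the compiler accepts it): the wide string holds one wchar_t per character, while the narrow string holds the raw UTF-8 bytes.
#include <cstring>
#include <cwchar>
#include <iostream>
int main()
{
    const char    p[]  = "你好";    // UTF-8: 3 bytes per character, 6 bytes + terminator
    const wchar_t wp[] = L"你好";   // 2 wide characters + terminator
    std::cout << std::strlen(p) << std::endl;   // 6
    std::cout << std::wcslen(wp) << std::endl;  // 2
}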
Related
I attempt to initialise some diacritical Latin letters using the new char8_t type:
constexpr char8_t french_letter_A_1 = 'À';//does not function properly
However, Visual Studio 2019 gives me the following warning: "character represented by universal-character-name "\u(the name)" cannot be represented in the current code page", and the character is not displayed properly. If I try to explicitly declare the character as a u8 one, like:
constexpr char8_t french_letter_A_2 = u8'Â';//has error
it even reports an error: "a UTF-8 character literal value cannot occupy more than one code unit"; but non-diacritical letters can be successfully interpreted as UTF-8 ones:
constexpr char8_t french_letter_A_0 = u8'A';//but ASCII letters are fine
I am wondering how can I properly declare a UTF-8 character with Visual C++... or I misunderstand the concept of char8_t, and should rather use something else instead?
Edit: I have comprehended that char8_t does not support those non-ASCII characters. What character type should I use instead?
char8_t, like char, signed char, and unsigned char, has a size of 1 byte. On most platforms (but not all!), that means it is an 8-bit type only capable of holding 256 discrete values. Unicode 12.1 defines 137,994 characters. Clearly, they can't all fit in a single char8_t value!
The C and C++ "character" types are, regrettably, poorly named. If we were designing a new language with modern terminology, we would name them some variation of code_unit as that better reflects how they are actually used. char32_t is the only character type that is currently guaranteed to be able to hold a code point value for every character in its associated character set (the C and C++ standards claim that wchar_t can too, but that contradicts existing practice).
Looking at your example, À is U+00C0 {LATIN CAPITAL LETTER A WITH GRAVE} (or is it actually A U+0041 {LATIN CAPITAL LETTER A} followed by ̀ U+0300 {COMBINING GRAVE ACCENT}? Unicode is tricky that way). The UTF-8 encoding of U+00C0 is 0xC3 0x80. What value should french_letter_A_1 hold? It can't hold both code unit values. And if the value were to be the code point, then we're either in the situation that only 256 characters can be (portably) supported or, worse, that sometimes values of char8_t are code points and sometimes they are code units.
The reality is that C and C++ character literals are limited to just a few more characters than are in the basic source character set. That is sufficient if one is writing an English-only application. But for modern applications, character literals have limited use.
As Nicol already stated, working with most characters outside the basic source character set requires doing real text processing on strings. Unfortunately, the C and C++ standards do not provide much help there. That is something that SG16 is working to improve.
UTF-8 is an encoding for Unicode codepoints. In UTF-8, a codepoint is broken down into one or more "octets" (8-bit words) called UTF-8 code units. The C++20 type that represents a UTF-8 code unit is char8_t.
A single char8_t is only one UTF-8 code unit. Therefore, it can only represent a Unicode codepoint whose UTF-8 encoding only takes up 1 code unit. Visual Studio is telling you that the "character" you are trying to store in a char8_t requires more than 1 code unit and therefore cannot be stored in such a type. The only Unicode code points that UTF-8 encodes in a single code unit are the ASCII code points.
When dealing with UTF-8 (or any Unicode encoding that isn't UTF-32 for that matter), you do not deal in "characters"; you deal in strings: contiguous sequences of code units. Anytime you want to deal with UTF-8, you should be using some kind of char8_t-based string type.
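A minimal sketch (C++20) of the distinction: store the character as a char8_t string rather than a single char8_t, since its UTF-8 encoding occupies two code units.
int main()
{
    constexpr char8_t french_letter_A_1[] = u8"À";            // 0xC3 0x80 plus the terminator
    static_assert(sizeof(french_letter_A_1) == 3, "two code units + null");
    constexpr char8_t ascii_A = u8'A';                        // a single code unit, so this is fine
    (void)ascii_A;
}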
According to cppreference.com's doc on wchar_t:
wchar_t - type for wide character representation (see wide strings). Required to be large enough to represent any supported character code point (32 bits on systems that support Unicode. A notable exception is Windows, where wchar_t is 16 bits and holds UTF-16 code units) It has the same size, signedness, and alignment as one of the integer types, but is a distinct type.
The Standard says in [basic.fundamental]/5:
Type wchar_t is a distinct type whose values can represent distinct codes for all members of the largest extended character set specified among the supported locales. Type wchar_t shall have the same size, signedness, and alignment requirements as one of the other integral types, called its underlying type. Types char16_t and char32_t denote distinct types with the same size, signedness, and alignment as uint_least16_t and uint_least32_t, respectively, in <cstdint>, called the underlying types.
So, if I want to deal with unicode characters, should I use wchar_t?
Equivalently, how do I know if a specific unicode character is "supported" by wchar_t?
So, if I want to deal with unicode characters, should I use wchar_t?
First of all, note that the encoding does not force you to use any particular type to represent a certain character. You may use char to represent Unicode characters just as wchar_t can; you only have to remember that in UTF-8 up to 4 chars together form a single code point, while wchar_t holds a whole code point per unit where it is 32 bits wide (UTF-32, as on Linux) or may need 2 units working together (UTF-16 surrogate pairs, as on Windows).
Next, there is no single definitive Unicode encoding. Some Unicode encodings use a fixed width for representing code points (like UTF-32), others (such as UTF-8 and UTF-16) have variable lengths (the letter 'a', for instance, will surely use just 1 byte, but characters outside the English alphabet will use more bytes for their representation).
So you have to decide what kind of characters you want to represent and then choose your encoding accordingly. Depending on the kind of characters you want to represent, this will affect the amount of bytes your data will take. E.g. using UTF-32 to represent mostly English characters will lead to many 0-bytes. UTF-8 is a better choice for many Latin based languages, while UTF-16 is usually a better choice for Eastern Asian languages.
Once you have decided on this, you should minimize the amount of conversions and stay consistent with your decision.
In the next step, you may decide what data type is appropriate to represent the data (or what kind of conversions you may need).
If you would like to do text-manipulation/interpretation on a code-point basis, char certainly is not the way to go if you have e.g. Japanese kanji. But if you just want to communicate your data and regard it no more as a quantitative sequence of bytes, you may just go with char.
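As a minimal sketch of what code-point-level work on a UTF-8 char string involves (the helper count_code_points is just for illustration, not from the original answer, and assumes a UTF-8 source/execution character set):
#include <cstddef>
#include <iostream>
#include <string>

// UTF-8 continuation bytes look like 10xxxxxx, so counting the bytes that are
// NOT continuation bytes gives the number of code points.
std::size_t count_code_points(const std::string& utf8)
{
    std::size_t n = 0;
    for (unsigned char c : utf8)
        if ((c & 0xC0) != 0x80)
            ++n;
    return n;
}

int main()
{
    std::string s = "日本語abc";                         // 3 kanji + 3 ASCII letters
    std::cout << s.size() << std::endl;                  // 12 bytes
    std::cout << count_code_points(s) << std::endl;      // 6 code points
}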
The link to UTF-8 Everywhere was already posted as a comment, and I suggest having a look there as well. Another good read is What every programmer should know about encodings.
As of now, there is only rudimentary language support in C++ for Unicode (like the char16_t and char32_t data types, and the u8/u/U literal prefixes). So choosing a library for managing encodings (especially conversions) certainly is good advice.
wchar_t is used in Windows, which uses the UTF-16LE format. wchar_t requires the wide-char functions, for example wcslen(const wchar_t*) instead of strlen(const char*), and std::wstring instead of std::string.
Unix based machines (Linux, Mac, etc.) use UTF8. This uses char for storage, and the same C and C++ functions for ASCII, such as strlen(const char*) and std::string (see comments below about std::find_first_of)
wchar_t is 2 bytes (UTF-16) on Windows, but on other machines it is 4 bytes (UTF-32). This makes things more confusing.
For UTF32, you can use std::u32string which is the same on different systems.
You might consider converting UTF8 to UTF32, because that way each character is always 4 bytes, and you might think string operations will be easier. But that's rarely necessary.
UTF-8 is designed so that the ASCII byte values between 0 and 127 are never used to represent other Unicode code points. That includes the escape character '\', printf format specifiers, and common parsing characters like ','.
Consider the following UTF-8 string. Let's say you want to find the comma:
std::string str = u8"汉,🙂"; //3 code points represented by 8 bytes
The ASCII value for comma is 44, and str is guaranteed to contain only one byte whose value is 44. To find the comma, you can simply use any standard function in C or C++ to look for ','
To find 汉, you can search for the string u8"汉" since this code point cannot be represented as a single character.
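A minimal sketch of both searches, restating the string from above:
#include <iostream>
#include <string>
int main()
{
    std::string str = u8"汉,🙂";                 // 8 bytes, as above
    std::cout << str.find(',') << std::endl;     // 3: the comma follows the 3-byte 汉
    std::cout << str.find(u8"汉") << std::endl;  // 0: searching for the whole byte sequence
}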
Some C and C++ functions don't work smoothly with UTF8. These include
strtok
strspn
std::find_first_of
The argument for above functions is a set of characters, not an actual string.
So str.find_first_of(u8"汉") does not work. Because u8"汉" is 3 bytes, and find_first_of will look for any of those bytes. There is a chance that one of those bytes are used to represent a different code point.
On the other hand, str.find_first_of(u8",;abcd") is safe, because all the characters in the search argument are ASCII (str itself can contain any Unicode character)
In rare cases UTF32 might be required (although I can't imagine where!) You can use std::codecvt to convert UTF8 to UTF32 to run the following operations:
std::u32string u32 = U"012汉"; //4 code points, represented by 4 elements
cout << u32.find_first_of(U"汉") << endl; //outputs 3
cout << u32.find_first_of(U'汉') << endl; //outputs 3
Side note:
You should use "Unicode everywhere", not "UTF8 everywhere".
In Linux, Mac, etc. use UTF8 for Unicode.
In Windows, use UTF16 for Unicode. Windows programmers use UTF16, they don't make pointless conversions back and forth to UTF8. But there are legitimate cases for using UTF8 in Windows.
Windows programmers tend to use UTF-8 for saving files, web pages, etc. So that's less worry for non-Windows programmers in terms of compatibility.
The language itself doesn't care which Unicode format you want to use, but in terms of practicality use a format that matches the system you are working on.
So, if I want to deal with unicode characters, should I use wchar_t?
That depends on what encoding you're dealing with. In case of UTF-8 you're just fine with char and std::string.
UTF-8 means the least encoding unit is 8 bits: all Unicode code points from U+0000 to U+007F are encoded by only 1 byte.
Beginning with code point U+0080 UTF-8 uses 2 bytes for encoding, starting from U+0800 it uses 3 bytes and from U+10000 4 bytes.
To handle this variable width (1 byte - 2 byte - 3 byte - 4 byte) char fits best.
Be aware that C functions like strlen will provide byte-based results: "öö" is in fact a 2-character text, but strlen will return 4 because 'ö' is encoded as the two bytes 0xC3 0xB6.
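A minimal sketch of that byte-versus-character difference (assuming the source and execution character sets are UTF-8):
#include <cstring>
#include <iostream>
int main()
{
    const char text[] = "öö";                      // 2 characters, 4 bytes in UTF-8
    std::cout << std::strlen(text) << std::endl;   // 4: bytes, not characters
}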
UTF-16 means the least encoding unit is 16 bits: all code points from U+0000 to U+FFFF are encoded by 2 bytes; starting from U+10000, 4 bytes are used.
In case of UTF-16 you should use wchar_t and std::wstring because most of the characters you'll ever encounter will be 2-byte encoded.
When using wchar_t you can't use C-functions like strlen any more; you have to use the wide char equivalents like wcslen.
When using Visual Studio and building with configuration "Unicode" you'll get UTF-16: TCHAR and CString will be based on wchar_t instead of char.
It all depends what you mean by 'deal with', but one thing is for sure: where Unicode is concerned std::basic_string doesn't provide any real functionality at all.
In any particular program, you will need to perform X number of Unicode-aware operations, e.g. intelligent string matching, case folding, regex, locating word breaks, using a Unicode string as a path name maybe, and so on.
Supporting these operations there will almost always be some kind of library and / or native API provided by the platform, and the goal for me would be to store and manipulate my strings in such a way that these operations can be carried out without scattering knowledge of the underlying library and native API support throughout the code any more than necessary. I'd also want to future-proof myself as to the width of the characters I store in my strings in case I change my mind.
Suppose, for example, you decide to use ICU to do the heavy lifting. Immediately there is an obvious problem: an icu::UnicodeString is not related in any way to std::basic_string. What to do? Work exclusively with icu::UnicodeString throughout the code? Probably not.
Or maybe the focus of the application switches from European languages to Asian ones, so that UTF-16 becomes (perhaps) a better choice than UTF-8.
So, my choice would be to use a custom string class derived from std::basic_string, something like this:
typedef wchar_t mychar_t; // say
class MyString : public std::basic_string <mychar_t>
{
...
};
Straightaway you have flexibility in choosing the size of the code units stored in your container. But you can do much more than that. For example, with the above declaration (and after you add in boilerplate for the various constructors that you need to provide to forward them to std::basic_string), you still cannot say:
MyString s = "abcde";
Because "abcde" is a narrow string and various the constructors for std::basic_string <wchar_t> all expect a wide string. Microsoft solve this with a macro (TEXT ("...") or __T ("...")), but that is a pain. All we need to do now is to provide a suitable constructor in MyString, with signature MyString (const char *s), and the problem is solved.
In practice, this constructor would probably expect a UTF-8 string, regardless of the underlying character width used for MyString, and convert it if necessary. Someone comments here somewhere that you should store your strings as UTF-8 so that you can construct them from UTF-8 literals in your code. Well, now we have lifted that constraint. The underlying character width of our strings can be anything we like.
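As a rough sketch of such a constructor (assuming mychar_t is wchar_t and using std::wstring_convert, available since C++11 and deprecated in C++17, as the conversion mechanism; a real implementation might call ICU or a platform API instead):
#include <codecvt>
#include <locale>
#include <string>

typedef wchar_t mychar_t; // as above

class MyString : public std::basic_string<mychar_t>
{
public:
    MyString() {}

    // Construct from a UTF-8 encoded narrow string.
    MyString(const char* utf8)
    {
        // Note: on platforms where wchar_t is 16 bits, codecvt_utf8_utf16
        // would be the more appropriate facet.
        std::wstring_convert<std::codecvt_utf8<mychar_t>, mychar_t> conv;
        assign(conv.from_bytes(utf8));
    }
};

// MyString s = "abcde";  // now compiles: the narrow literal is treated as UTF-8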
Another thing that people have been talking about in this thread is that find_first_of may not work properly for UTF-8 strings (and indeed some UTF-16 ones also). Well, now you can provide an implementation that does the job properly. Should take about half an hour. If there are other 'broken' implementations in std::basic_string (and I'm sure there are), then most of them can probably be replaced with similar ease.
As for the rest, it mainly depends what level of abstraction you want to implement in your MyString class. If your application is happy to have a dependency on ICU, for example, then you can just provide a couple of methods to convert to and from an icu::UnicodeString. That's probably what most people would do.
Or if you need to pass UTF-16 strings to / from native Windows APIs then you can add methods to convert to and from const WCHAR * (which again you would implement in such a way that they work for all values of mychar_t). Or you could go further and abstract away some or all of the Unicode support provided by the platform and library you are using. The Mac, for example, has rich Unicode support but it's only available from Objective-C so you have to wrap it.
It depends on how portable you want your code to be.
So you can add in whatever functionality you like, probably on an on-going basis as work progresses, without losing the ability to carry your strings around as a std::basic_string. Of one sort or another. Just try not to write code that assumes it knows how wide it is, or that it contains no surrogate pairs.
First of all, you should check (as you point out in your question) if you are using Windows and Visual Studio C++ with wchar_t being 16 bits, because in that case, to get full Unicode support, you'll need to assume UTF-16 encoding.
The basic problem here is not the size of the wchar_t you are using, but whether the libraries you are going to use provide full Unicode support.
Java has a similar problem, as its char type is 16 bits wide, so a priori it couldn't support the full Unicode space, but it does, as it uses UTF-16 encoding and surrogate pairs to cope with the full 21-bit code point range.
It's also worth noting that Unicode uses the planes above the Basic Multilingual Plane only for rarer code points that are not normally used daily.
For Unicode support anyway, you need to use wide character sets, so wchar_t is a good beginning. If you are going to work with Visual Studio, then you have to check how its libraries deal with Unicode characters.
Another thing to note is that the standard libraries deal with character sets (and this includes Unicode) only when you add locale support (this requires the library to be initialized, e.g. with setlocale(3)), so you'll see no Unicode at all (only basic ASCII) in cases where you have not called setlocale(3).
There are wide char functions for almost any str*(3) function, as well as for any stdio.h library function, to deal with wchar_ts. A little dig into the /usr/include/wchar.h file will reveal the names of the routines. Go to the manual pages for documentation on them: fgetws(3), fputwc(3), fputws(3), fwide(3), fwprintf(3), ...
Finally, consider again that, if you are dealing with Microsoft Visual C++, you have a different implementation from the beginning. Even if it tries to be completely standard compliant, you'll have to cope with some idiosyncrasies of having a different implementation. Probably you'll have different function names for some uses.
I want strings with Unicode characters to be handled correctly in my file synchronizer application, but I don't know how this kind of encoding works.
In a Unicode string, I can see that a Unicode character has the form "\uxxxx", where the x's are hexadecimal digits. How does a normal C or C++ program interpret this kind of character? (Why is there a 'u' after the '\'? What is its effect?)
On the internet I see examples using wide strings or wchar_t??
So, what's the suitable object to handle Unicode characters? In RapidJSON (which supports Unicode, UTF-8, UTF-16, UTF-32), we can use a const char* to store a JSON document that could have "wide characters", but those characters take more than one byte to be represented... I don't understand...
This is the kind of temporary arrangement I found for the moment (unicode->utf8?ascii?, listFolder is a std::string) :
boost::replace_all(listFolder, "\\u00e0", "à");
boost::replace_all(listFolder, "\\u00e2", "â");
boost::replace_all(listFolder, "\\u00e4", "ä");
...
The suitable object to handle Unicode strings in C++ is icu::UnicodeString (check "API References, ICU4C" in the sidebar), at least if you want to really handle Unicode strings (as opposed to just passing them from one point of your application to another).
wchar_t was an early attempt at handling international character sets, which turned out to be a failure because Microsoft's definition of wchar_t as two bytes became insufficient once Unicode was extended beyond code point 0xFFFF. Linux defines wchar_t as four bytes, but the inconsistency makes it (and its derived std::wstring) rather useless for portable programming.
TCHAR is a Microsoft define that resolves to char by default and to WCHAR if UNICODE is defined, with WCHAR in turn being wchar_t behind a level of indirection... yeah.
C++11 brought us char16_t and char32_t as well as the corresponding string classes, but those are still instances of basic_string<>, and as such have their shortcomings, e.g. when trying to uppercase / lowercase characters that need to expand to more than one replacement character (e.g. the German ß becomes SS in uppercase; the standard library cannot do that).
ICU, on the other hand, goes the full way. For example, it provides normalization and decomposition, which the standard strings do not.
\uxxxx and \UXXXXXXXX are Unicode character escapes. The xxxx is a 16-bit hexadecimal number representing a UCS-2 code point, which is equivalent to a UTF-16 code unit within the Basic Multilingual Plane.
The XXXXXXXX are a 32-bit hex number, representing a UTF-32 code point, which may be any plane.
How those character escapes are handled depends on the context in which they appear (narrow / wide string, for example), making them somewhat less than perfect.
C++11 introduced "proper" Unicode literals:
u8"..." is always a const char[] in UTF-8 encoding.
u"..." is always a const uchar16_t[] in UTF-16 encoding.
U"..." is always a const uchar32_t[] in UTF-32 encoding.
If you use \uxxxx or \UXXXXXXXX within one of those three, the character literal will always be expanded to the proper code unit sequence.
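A minimal sketch of that expansion (assuming a pre-C++20 compiler, where u8 literals still yield const char[]):
int main()
{
    const char     a8[]  = u8"\u00E0";   // UTF-8:  0xC3 0xA0   -> 2 code units + terminator
    const char16_t a16[] = u"\u00E0";    // UTF-16: 0x00E0      -> 1 code unit  + terminator
    const char32_t a32[] = U"\u00E0";    // UTF-32: 0x000000E0  -> 1 code unit  + terminator
    static_assert(sizeof(a8)  == 3, "3 bytes");
    static_assert(sizeof(a16) == 4, "2 units of 2 bytes");
    static_assert(sizeof(a32) == 8, "2 units of 4 bytes");
    return 0;
}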
Note that storing UTF-8 in a std::string is possible, but hazardous. You need to be aware of many things: .length() is not the number of characters in your string. .substr() can lead to partial and invalid sequences. .find_first_of() will not work as expected. And so on.
That being said, in my opinion UTF-8 is the only sane encoding choice for any stored text. There are cases to be made for handling texts as UTF-16 in-memory (the way ICU does), but on file, don't accept anything but UTF-8. It's space-efficient, endianness-independent, and allows for semi-sane handling even by software that is blissfully unaware of Unicode matters (see caveats above).
In a unicode string, I can see that a unicode char has this form : "\uxxxx" where xs are numbers, how a normal C or C++ program interpret this kind of char ? (why there is a 'u' after '\' ? what's the effect ?)
That is a unicode character escape sequence. It will be interpreted as a unicode character. The u after the escape character is part of the syntax and it's what differentiates it from other escape sequences. Read the documentation for more information.
So, what's the suitable object to handle unicode characters ?
char for UTF-8
char16_t for UTF-16
char32_t for UTF-32
The size of wchar_t is platform dependent, so you cannot make portable assumptions of which encoding it suits.
we can use const char* to store a JSOn that could have "wide characters" but those characters take more than a byte to be represented...
If you mean that you can store multi-byte utf-8 characters in a char string, then you're correct.
This is the kind of temporary arrangement I found for the moment (unicode->utf8?ascii?, listFolder is a std::string)
What you're attempting to do there is replacing some Unicode escape sequences with characters that have a platform-defined encoding. If you have other Unicode characters besides those, you end up with a string that has mixed encoding. Also, in some cases it may accidentally replace parts of other byte sequences. I recommend using a library to convert encodings or to do any other manipulation on encoded strings.
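For illustration only, here is a rough sketch of what such a conversion involves; the helper decode_json_escapes is hypothetical, handles only \uXXXX escapes for BMP code points (no surrogate pairs) and does no error checking. In real code a JSON or Unicode library should do this for you.
#include <cstddef>
#include <string>

std::string decode_json_escapes(const std::string& in)
{
    std::string out;
    for (std::size_t i = 0; i < in.size(); ++i) {
        if (in[i] == '\\' && i + 5 < in.size() && in[i + 1] == 'u') {
            unsigned cp = std::stoul(in.substr(i + 2, 4), nullptr, 16);
            if (cp < 0x80) {                        // 1-byte UTF-8 sequence
                out += static_cast<char>(cp);
            } else if (cp < 0x800) {                // 2-byte UTF-8 sequence
                out += static_cast<char>(0xC0 | (cp >> 6));
                out += static_cast<char>(0x80 | (cp & 0x3F));
            } else {                                // 3-byte UTF-8 sequence
                out += static_cast<char>(0xE0 | (cp >> 12));
                out += static_cast<char>(0x80 | ((cp >> 6) & 0x3F));
                out += static_cast<char>(0x80 | (cp & 0x3F));
            }
            i += 5;                                 // skip "uXXXX"
        } else {
            out += in[i];
        }
    }
    return out;
}

// decode_json_escapes("\\u00e0 bient\\u00f4t") yields the UTF-8 string "à bientôt".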
From Wikipedia:
For the purpose of enhancing support for Unicode in C++ compilers, the definition of the type char has been modified to be at least the size necessary to store an eight-bit coding of UTF-8.
I'm wondering what exactly this means for writing portable applications. Is there any difference between writing this
const char[] str = "Test String";
or this?
const char[] str = u8"Test String";
Is there any reason not to use the latter for every string literal in your code?
What happens when there are non-ASCII characters inside the test string?
The encoding of "Test String" is the implementation-defined system encoding (the narrow, possibly multibyte one).
The encoding of u8"Test String" is always UTF-8.
The examples aren't terribly telling. If you included some Unicode literals (such as \U0010FFFF) into the string, then you would always get those (encoded as UTF-8), but whether they could be expressed in the system-encoded string, and if yes what their value would be, is implementation-defined.
If it helps, imagine you're authoring the source code on an EBCDIC machine. Then the literal "Test String" is always EBCDIC-encoded in the source file itself, but the u8-initialized array contains UTF-8 encoded values, whereas the first array contains EBCDIC-encoded values.
You quote Wikipedia:
For the purpose of enhancing support for Unicode in C++ compilers, the definition of the type char has been modified to be at least the size necessary to store an eight-bit coding of UTF-8.
Well, the “For the purpose of” is not true. char has always been guaranteed to be at least 8 bits, that is, CHAR_BIT has always been required to be ≥8, due to the range required for char in the C standard. Which is (quote C++11 §17.5.1.5/1) “incorporated” into the C++ standard.
If I should guess about the purpose of that change of wording, it would be to just clarify things for those readers unaware of the dependency on the C standard.
Regarding the effect of the u8 literal prefix: it affects the encoding of the string in the executable, but unfortunately it does not affect the type.
Thus, in both cases "tørrfisk" and u8"tørrfisk" you get a char const[n]. But in the former literal the encoding is whatever is selected for the compiler, e.g. with Latin-1 (or Windows ANSI Western) that would be 8 bytes for the characters plus a null byte, for array size 9. In the latter literal the encoding is guaranteed to be UTF-8, where the "ø" is encoded with 2 bytes, for a slightly larger array size of 10.
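A minimal sketch of that size difference (assumptions: a Latin-1-like execution character set for the plain literal, and a pre-C++20 compiler where u8 literals are still char-based; with a UTF-8 execution charset both arrays would be the same size):
#include <iostream>
int main()
{
    const char plain[] = "tørrfisk";    // execution-charset encoded; 9 bytes with Latin-1
    const char utf8[]  = u8"tørrfisk";  // UTF-8 encoded; "ø" takes 2 bytes, so 10 bytes
    std::cout << sizeof(plain) << std::endl;   // implementation-defined (9 here)
    std::cout << sizeof(utf8)  << std::endl;   // 10
}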
If the execution character set of the compiler is set to UTF-8, it makes no difference whether u8 is used or not, since the compiler converts the characters to UTF-8 in both cases.
However, if the compiler's execution character set is the system's non-UTF-8 code page (the default for e.g. Visual C++), then non-ASCII characters might not be handled properly when u8 is omitted. For example, the conversion to wide strings will crash, e.g. in VS15:
std::string narrowJapanese("スタークラフト");
std::wstring_convert<std::codecvt_utf8_utf16<wchar_t>, wchar_t> convertWindows;
std::wstring wide = convertWindows.from_bytes(narrowJapanese); // Unhandled C++ exception in xlocbuf.
The compiler chooses a native encoding natural to the platform. On typical POSIX systems it will probably choose ASCII and something possibly depending on environment's setting for character values outside the ASCII range. On mainframes it will probably choose EBCDIC. Comparing strings received, e.g., from files or the command line will probably work best with the native character set. When processing files explicitly encoded using UTF-8 you are, however, probably best off using u8"..." strings.
That said, with the recent changes relating to character encodings a fundamental assumption of string processing in C and C++ got broken: each internal character object (char, wchar_t, etc.) used to represent one character. This is clearly not true anymore for a UTF-8 string where each character object just represents a byte of some character. As a result all the string manipulation, character classification, etc. functions won't necessarily work on these strings. We don't have any good library lined up to deal with such strings for inclusion into the standard.
I've recently tried to get the full picture about what steps it takes to create platform-independent C++ applications that support Unicode. A thing that is confusing to me is that most howtos and such conflate the character encoding (i.e. ANSI or Unicode) and the character type (char or wchar_t). As I've learned so far, these are different things: there may exist a character sequence encoded in Unicode but represented by a std::string, as well as a character sequence encoded in ANSI but represented as a std::wstring, right?
So the question that comes to my mind is whether the C++ standard gives any guarantee about the encoding of string literals starting with L or does it just say it's of type wchar_t with implementation specific character encoding?
If there is no such guaranty, does that mean I need some sort of external resource system to provide non ASCII string literals for my application in a platform independent way?
What is the prefered way for this? Resource system or proper encoding of source files plus proper compiler options?
The L symbol in front of a string literal simply means that each character in the string will be stored as a wchar_t. But this doesn't necessarily imply Unicode. For example, you could use a wide character string to encode GB 18030, a character set used in China which is similar to Unicode. The C++03 standard doesn't have anything to say about Unicode, (however C++11 defines Unicode char types and string literals) so it's up to you to properly represent Unicode strings in C++03.
Regarding string literals, Chapter 2 (Lexical Conventions) of the C++ standard mentions a "basic source character set", which is basically equivalent to ASCII. So this essentially guarantees that "abc" will be represented as a 3-byte string (not counting the null), and L"abc" will be represented as a 3 * sizeof(wchar_t)-byte string of wide-characters.
The standard also mentions "universal-character-names" which allow you to refer to non-ASCII characters using the \uXXXX hexadecimal notation. These "universal-character-names" usually map directly to Unicode values, but the standard doesn't guarantee that they have to. However, you can at least guarantee that your string will be represented as a certain sequence of bytes by using universal-character-names. This will guarantee Unicode output provided the runtime environment supports Unicode, has the appropriate fonts installed, etc.
As for string literals in C++03 source files, again there is no guarantee. If you have a Unicode string literal in your code which contains characters outside of the ASCII range, it is up to your compiler to decide how to interpret these characters. If you want to explicitly guarantee that the compiler will "do the right thing", you'd need to use \uXXXX notation in your string literals.
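A minimal sketch of that approach: the \u00E9 spelling keeps the source file pure ASCII, so the compiler cannot misinterpret it, whatever source encoding it assumes.
#include <cwchar>
#include <iostream>
int main()
{
    const wchar_t cafe[] = L"caf\u00E9";           // é given as a universal-character-name
    std::cout << std::wcslen(cafe) << std::endl;   // 4 wide characters
}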
The C++03 standard does not mention Unicode (the future C++0x does). Currently you have to either use external libraries (ICU, UTF-CPP, etc.) or build your own solution using platform-specific code. As others have mentioned, wchar_t encoding (or even size) is not specified. Consequently, string literal encoding is implementation-specific. However, you can give Unicode code points in string literals by using \x \u \U escapes.
Typically unicode apps in Windows use wchar_t (with UTF-16 encoding) as internal character format, because it makes using Windows APIs easier as Windows itself uses UTF-16. Unix/Linux unicode apps in turn usually use char (with UTF-8 encoding) internally. If you want to exchange data between different platforms, UTF-8 is usual choice for data transfer encoding.
The standard makes no mention of encoding formats for strings.
Take a look at ICU from IBM (it's free). http://site.icu-project.org/