Printing universal characters - c++

Can anyone explain why universal character literals (e.g. "\u00b1") are encoded into char strings as UTF-8? Why does the following print the plus/minus symbol?
#include <iostream>
#include <cstring>

int main()
{
    std::cout << "\u00b1" << std::endl;
    return 0;
}
Is this related to my current locale?

2.13.2. [...]
5/ A universal-character-name is translated to the encoding, in
the execution character set, of the character named. If there is no
such encoding, the universal-character-name is translated to an
implementation defined encoding. [Note: in translation phase 1, a
universal-character-name is introduced whenever an actual extended
character is encountered in the source text. Therefore, all extended
characters are described in terms of universal-character-names.
However, the actual compiler implementation may use its own native
character set, so long as the same results are obtained. ]
and
2.2. [...] The values of the members of the execution character sets
are implementation-defined, and any additional members are
locale-specific.
In short, the answer to your question is in your compiler documentation. However:
2.2. 2/ The character designated by the universal-character-name
\UNNNNNNNN is that character whose character short name in ISO/IEC
10646 is NNNNNNNN; the character designated by the
universal-character-name \uNNNN is that character whose character
short name in ISO/IEC 10646 is 0000NNNN. If the hexadecimal value for
a universal character name is less than 0x20 or in the range 0x7F-0x9F
(inclusive), or if the universal character name designates a character
in the basic source character set, then the program is ill-formed.
so you are guaranteed that the character you name is translated into an implementation-defined encoding, possibly locale-specific.

\u00b1 is the ± symbol, as that is the correct Unicode representation regardless of locale.
Your code at ideone, see here.

String literals, e.g. "abcdef", are simple byte arrays (of type const char[]). The compiler encodes non-ASCII characters in them into something implementation-defined. Rumor has it that Visual C++ uses the current Windows ANSI codepage and GCC uses UTF-8, so you're probably on GCC :)
So \uABCD is interpreted by the compiler at compile time and converted into the corresponding value in that encoding, i.e. it can put one or more bytes into the byte array:
sizeof("\uFE58z") == 3 // visual C++ 2010
sizeof("\uFE58z") == 5 // gcc 4.4 mingw
How cout will then print the byte array depends on the locale settings. You can change a stream's locale via a std::ios_base::imbue() call.
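For illustration, here is a minimal sketch that inspects the bytes the compiler actually stored for the literal; the byte values in the comment assume a UTF-8 narrow execution character set (as with GCC), and other compilers may store something else:
#include <cstdio>
#include <cstring>

int main()
{
    const char s[] = "\u00b1";
    // With a UTF-8 execution character set this prints: 2 bytes: c2 b1
    std::printf("%zu bytes:", std::strlen(s));
    for (const char* p = s; *p; ++p)
        std::printf(" %02x", static_cast<unsigned char>(*p));
    std::printf("\n");
    return 0;
}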

C++ Character Sets
With the standardization of C++, it's useful to review some of the mechanisms included in the language for dealing with character sets. This might seem like a very simple issue, but there are some complexities to contend with.
The first idea to consider is the notion of a "basic source character set" in C++. This is defined to be:
all ASCII printing characters 041 - 0177, save for @ $ ` DEL
space
horizontal tab
vertical tab
form feed
newline
or 96 characters in all. These are the characters used to compose a C++ source program.
Some national character sets, such as the European ISO-646 one, use some of these character positions for other letters. The ASCII characters so affected are:
[ ] { } | \
To get around this problem, C++ defines trigraph sequences that can be used to represent these characters:
[ ??(
] ??)
{ ??<
} ??>
| ??!
\ ??/
# ??=
^ ??'
~ ??-
Trigraph sequences are mapped to the corresponding basic source character early in the compilation process.
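For instance, with trigraph support enabled (e.g. g++ -trigraphs, since GCC leaves them off by default and C++17 removed them entirely), the following sketch compiles and behaves exactly as if the replaced characters had been written directly:
??=include <iostream>

int main()
??<
    std::cout << "trigraphs enabled" << std::endl;
    return 0;
??>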
C++ also has the notion of "alternative tokens" that can be used in place of certain tokens. The list of tokens and their alternatives is as follows (a short example appears after the list):
{ <%
} %>
[ <:
] :>
# %:
## %:%:
&& and
| bitor
|| or
^ xor
~ compl
& bitand
&= and_eq
|= or_eq
^= xor_eq
! not
!= not_eq
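For illustration, here is a minimal program using a few of these alternative tokens; they are part of standard C++ and need no extra header (though MSVC may require /permissive- or <ciso646> to accept them):
#include <iostream>

int main()
{
    int a = 3, b = 4;
    if (a not_eq b and a < b)   // equivalent to: if (a != b && a < b)
        std::cout << "alternative tokens accepted" << std::endl;
    return 0;
}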
Another idea is the "basic execution character set". This includes all of the basic source character set, plus control characters for alert, backspace, carriage return, and null. The "execution character set" is the basic execution character set plus additional implementation-defined characters. The idea is that a source character set is used to define a C++ program itself, while an execution character set is used when a C++ application is executing.
Given this notion, it's possible to manipulate additional characters in a running program, for example characters from Cyrillic or Greek. Character constants can be expressed using any of:
\137 octal
\xabcd hexadecimal
\U12345678 universal character name (ISO/IEC 10646)
\u1234 -> \U00001234
This notation uses the source character set to define execution set characters. Universal character names can be used in identifiers (if letters) and in character literals:
'\u1234'
L'\u2345'
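A small example combining these notations (the specific values chosen here are arbitrary, and the comments assume an ASCII-based execution character set):
#include <iostream>

int main()
{
    char    c1 = '\137';    // octal escape: '_' in ASCII
    char    c2 = '\x41';    // hexadecimal escape: 'A' in ASCII
    wchar_t c3 = L'\u00f4'; // universal character name U+00F4
    std::cout << c1 << c2 << static_cast<long>(c3) << std::endl;
    return 0;
}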
The above features may not yet exist in your local C++ compiler. They are important to consider when developing internationalized applications.

Related

Why are string literals parsed for trigraph sequences in Gnu gcc/g++?

Consider this innocuous C++ program:
#include <iostream>

int main() {
    std::cout << "(Is this a trigraph??)" << std::endl;
    return 0;
}
When I compile it using g++ version 5.4.0, I get the following diagnostic:
me@my-laptop:~/code/C++$ g++ -c test_trigraph.cpp
test_trigraph.cpp:4:36: warning: trigraph ??) ignored, use -trigraphs to enable [-Wtrigraphs]
std::cout << "(Is this a trigraph??)" << std::endl;
^
The program runs, and its output is as expected:
(Is this a trigraph??)
Why are string literals parsed for trigraphs at all?
Do other compilers do this, too?
Trigraphs were handled in translation phase 1 (they are removed in C++17, however). String literal related processing happens in subsequent phases. As the C++14 standard specifies (n4140) [lex.phases]/1.1:
The precedence among the syntax rules of translation is specified by
the following phases.
Physical source file characters are mapped, in an implementation-defined manner, to the basic source character set
(introducing new-line characters for end-of-line indicators) if
necessary. The set of physical source file characters accepted is
implementation-defined. Trigraph sequences ([lex.trigraph]) are
replaced by corresponding single-character internal representations.
Any source file character not in the basic source character set
([lex.charset]) is replaced by the universal-character-name that
designates that character. (An implementation may use any internal
encoding, so long as an actual extended character encountered in the
source file, and the same extended character expressed in the source
file as a universal-character-name (i.e., using the \uXXXX notation),
are handled equivalently except where this replacement is reverted in
a raw string literal.)
This happens first because, as you were told in the comments, the characters that the trigraphs stand for needed to be printable as well.
This behaviour is inherited from C compilers and the old days when serial terminals used only 7 bits (the 8th being a parity bit). To allow non-English languages with special characters (for example the accented àéèêîïôù in French or ñ in Spanish), the ISO/IEC 646 code pages reused some ASCII (7-bit) codes to represent them. In particular, the codes 0x23, 0x24 (#$ in ASCII), 0x40 (@), 0x5B to 0x5E ([\]^), 0x60 (`) and 0x7B to 0x7E ({|}~) could be replaced by national variants1.
As those characters have special meaning in C, they could be replaced in source code with trigraphs using only the invariant part of ISO 646.
For compatibility reasons, this was kept up to C++14, when only dinosaurs still remember the (not so good) days of ISO 646 and 7-bit-only code pages.
1 For example, the French variant used: 0x23 £, 0x40 à, 0x5B-0x5D °ç§, 0x60 µ, 0x7B-0x7E éùè¨
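As a practical aside, if one wants to keep such a "??" sequence in a string literal without the warning (or without the replacement on compilers that still perform it), the usual idiom is to escape the second question mark, as in this minimal sketch:
#include <iostream>

int main() {
    // "?\?" is simply two question marks, so no trigraph is recognized
    std::cout << "(Is this a trigraph?\?)" << std::endl;
    return 0;
}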

What are the different character sets used for?

The C++ standard mentions multiple different character sets. In particular, it mentions the following character sets:
In 2.2 [lex.phases] bullet 1 physical source file characters and their mapping to the basic source character set is mentioned.
In 2.2 [lex.phases] bullet 2 execution character set is mentioned.
In 2.3 [lex.charset] paragraph 3 a basic execution character set and a basic execution wide-character set are mentioned.
The same section 2.3 [lex.charset] 3 also mentions an execution character set and an execution wide-character set.
When reading or writing files, yet other character sets come into play.
What are all these different character sets used for, how are conversions between them done, and which of these values are locale-dependent? In particular, how are string literals represented?
Here is a breakdown of the different character sets used by the compiler itself (all references to the standard are to C++14):
The physical source file characters are those used in the C++ source. Most likely these are now encoded using some Unicode encoding, e.g., UTF-8 or UTF-16. If you are from a European or American background you may be using ASCII, whose characters are conveniently encoded identically in UTF-8 (every ASCII file is a UTF-8 file but not the other way around). The physical source file characters may also be something unusual like EBCDIC.
The basic source character set is what the compiler, at least conceptually, consumes. It is produced from the physical source file characters by mapping each of them either to its respective basic character or to a sequence of basic characters representing the physical source character as a universal-character-name (see 2.2 [lex.phases] paragraph 1). The basic source character set is just a set of 96 characters (2.3 [lex.charset] paragraph 1):
a b c d e f g h i j k l m n o p q r s t u v w x y z A B C D E F G H I J K L M N O P Q R S T U V W X Y Z 0 1 2 3 4 5 6 7 8 9 _ { } [ ] # ( ) < > % : ; . ? * + - / ^ & | ~ ! = , \ " '
and the 5 special characters space (' '), horizontal tab (\t), vertical tab (\v), form feed (\f), and newline (\n)
The mapping between the physical source character set and the basic character set is implementation defined.
The basic execution character set and the basic execution wide-character set are character sets capable of representing the basic source character set, expanded by a few special characters:
alert ('\a'), backspace ('\b'), carriage return ('\r'), and a null character ('\0')
The difference between the non-wide and the wide version is whether the characters are represented using char or wchar_t.
The execution character set and the execution wide-character set are implementation defined extensions of the basic character set and the basic wide-character set. In 2.3 [lex.charset] paragraph 3 it is stated that the additional members and the values of the additional members of execution character set are locale specific. It isn't clear which locale is referred to but I suspect the locale used during compilation is meant. In any case, the execution character sets are implementation defined (also according to 2.3 [lex.charset] paragraph 3).
Character and string literals are originally represented using the basic source character set, with some characters possibly written as universal character names. All of these are converted at compile time into the execution character set. According to 2.14.3 [lex.ccon], character literals representable as one char in the execution character set just work. If multiple chars are needed, the character literal may be conditionally supported (and it would have type int). For string literals the conversion is described in 2.14.5 [lex.string]. Paragraph 9 states that UTF-8 string literals (e.g. u8"hello") result in a sequence of values corresponding to the code units of the UTF-8 string. Otherwise the translation of characters and universal character names is the same as that for character literals (in particular, it is implementation-defined), although characters resulting in multi-byte sequences for narrow strings just result in multiple characters (this case isn't necessarily supported for character literals).
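A small sketch of the u8 behaviour described above; the result for the plain narrow literal assumes a UTF-8 narrow execution character set (the GCC and Clang default), while e.g. MSVC using codepage 1252 would give 2:
#include <iostream>

int main()
{
    std::cout << sizeof(u8"\u00f4") << '\n'; // 3: two UTF-8 code units plus '\0' (guaranteed)
    std::cout << sizeof("\u00f4")   << '\n'; // implementation-defined: 3 with a UTF-8
                                             // execution character set, 2 with e.g. CP1252
    return 0;
}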
So far, only the result of compilation has been considered. Any character which isn't part of a character or string literal is used to specify what the code does. The interesting question is what happens to the literals. The literals are all basically translated into an implementation-defined representation. Implementation-defined means that it is documented somewhere what is supposed to happen, but it can differ between different implementations.
How does that help when dealing with characters or strings coming from somewhere? Well, any character or string which is read is converted to the corresponding execution character set. In particular, when a file is read, all characters are transformed to this common representation. Of course, for this transformation to work, the locale used for reading the file needs to be set up according to the encoding of that file. If the locale isn't explicitly specified, the global locale is used, which is initially determined by the system. The initial global locale is probably set somehow based on user preferences, e.g., based on environment variables. If a file is read which uses a different encoding than this global locale, a corresponding different locale matching the encoding of the file needs to be used.
Correspondingly, when writing characters using one of the execution character sets, these are converted according to the encoding specified by the current locale. Again, it may be necessary to replace the locale if a specific encoding is needed.
All this effectively means that internally to a program all string and character processing happens using the implementation-defined execution character set. All characters being read by a program need to be converted to this character set, and all characters written start out as characters in this execution character set and need to be converted appropriately to the external encoding. Of course, in an ideal setup the conversion between the execution character set and the external representation is the identity, e.g., because the execution character set uses UTF-8 and the external representation also uses UTF-8. Correspondingly for the execution wide-character set, except that in this case UTF-16 would be used (one of the two variations, as UTF-16 can use either a big-endian or a little-endian representation).
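As a minimal sketch of reading a file whose encoding differs from the global locale, one can imbue the stream with a locale whose codecvt facet matches the file's encoding (the file name here is hypothetical, and the locale name "en_US.UTF-8" must actually be installed on the system):
#include <fstream>
#include <iostream>
#include <locale>

int main()
{
    std::wifstream in;
    in.imbue(std::locale("en_US.UTF-8")); // codecvt facet matching the file's UTF-8 encoding
    in.open("input.txt");                 // hypothetical UTF-8 encoded file
    wchar_t c;
    while (in.get(c))
        std::wcout << c;                  // c is now in the execution wide-character set
    return 0;
}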

identifier character set (clang)

I never use clang.
And I accidentally discovered that this piece of code:
#include <iostream>

void функция(int переменная)
{
    std::cout << переменная << std::endl;
}

int main()
{
    int русская_переменная = 0;
    функция(русская_переменная);
}
compiles fine: http://rextester.com/NFXBL38644 (clang 3.4 (clang++ -Wall -std=c++11 -O2)).
Is it a clang extension? And why?
Thanks.
UPD: I'm mainly asking why clang made this decision. I never found any discussion where someone asked for more identifier characters than the C++ standard already allows (2.3, rev. 3691).
It's not so much an extension as it is Clang's interpretation of the Multibyte characters part of the standard. Clang supports UTF-8 source code files.
As to why, I guess "why not?" is the only real answer; it seems useful and reasonable to me to support a larger character set.
Here are the relevant parts of the standard (C11 draft):
5.2.1 Character sets
1 Two sets of characters and their associated collating sequences shall be defined: the set in which source files are written (the source character set), and the set interpreted in the execution environment (the execution character set). Each set is further divided into a basic character set, whose contents are given by this subclause, and a set of zero or more locale-specific members (which are not members of the basic character set) called extended characters. The combined set is also called the extended character set. The values of the members of the execution character set are implementation-defined.
2 In a character constant or string literal, members of the execution character set shall be represented by corresponding members of the source character set or by escape sequences consisting of the backslash \ followed by one or more characters. A byte with all bits set to 0, called the null character, shall exist in the basic execution character set; it is used to terminate a character string.
3 Both the basic source and basic execution character sets shall have the following members: the 26 uppercase letters of the Latin alphabet
A B C D E F G H I J K L M
N O P Q R S T U V W X Y Z
the 26 lowercase letters of the Latin alphabet
a b c d e f g h i j k l m
n o p q r s t u v w x y z
the 10 decimal digits
0 1 2 3 4 5 6 7 8 9
the following 29 graphic characters
! " # % & ' ( ) * + , - . / :
; < = > ? [ \ ] ^ _ { | } ~
the space character, and control characters representing horizontal tab, vertical tab, and form feed. The representation of each member of the source and execution basic character sets shall fit in a byte. In both the source and execution basic character sets, the value of each character after 0 in the above list of decimal digits shall be one greater than the value of the previous. In source files, there shall be some way of indicating the end of each line of text; this International Standard treats such an end-of-line indicator as if it were a single new-line character. In the basic execution character set, there shall be control characters representing alert, backspace, carriage return, and new line. If any other characters are encountered in a source file (except in an identifier, a character constant, a string literal, a header name, a comment, or a preprocessing token that is never
converted to a token), the behavior is undefined.
4 A letter is an uppercase letter or a lowercase letter as defined above; in this International Standard the term does not include other characters that are letters in other alphabets.
5 The universal character name construct provides a way to name other characters.
And also:
5.2.1.2 Multibyte characters
1 The source character set may contain multibyte characters, used to represent members of the extended character set. The execution character set may also contain multibyte characters, which need not have the same encoding as for the source character set. For both character sets, the following shall hold:
— The basic character set shall be present and each character shall be encoded as a single byte.
— The presence, meaning, and representation of any additional members is locale- specific.
— A multibyte character set may have a state-dependent encoding, wherein each sequence of multibyte characters begins in an initial shift state and enters other locale-specific shift states when specific multibyte characters are encountered in the sequence. While in the initial shift state, all single-byte characters retain their usual interpretation and do not alter the shift state. The interpretation for subsequent bytes in the sequence is a function of the current shift state.
— A byte with all bits zero shall be interpreted as a null character independent of shift state. Such a byte shall not occur as part of any other multibyte character.
2 For source files, the following shall hold:
— An identifier, comment, string literal, character constant, or header name shall begin and end in the initial shift state.
— An identifier, comment, string literal, character constant, or header name shall consist of a sequence of valid multibyte characters.
Given clang's usage of UTF-8 as the source encoding, this behavior is mandated by the standard:
C++ defines an identifier as the following:
identifier:
identifier-nondigit
identifier identifier-nondigit
identifier digit
identifier-nondigit:
nondigit
universal-character-name
other implementation-defined characters
The important part here is that identifiers can include universal-character-names. The specification also lists the allowed UCNs:
Annex E (normative)
Universal character names for identifier characters [charname]
E.1 Ranges of characters allowed [charname.allowed]
00A8, 00AA, 00AD, 00AF, 00B2-00B5, 00B7-00BA, 00BC-00BE, 00C0-00D6, 00D8-00F6, 00F8-00FF
0100-167F, 1681-180D, 180F-1FFF
200B-200D, 202A-202E, 203F-2040, 2054, 2060-206F
2070-218F, 2460-24FF, 2776-2793, 2C00-2DFF, 2E80-2FFF
3004-3007, 3021-302F, 3031-303F
3040-D7FF
F900-FD3D, FD40-FDCF, FDF0-FE44, FE47-FFFD
10000-1FFFD, 20000-2FFFD, 30000-3FFFD, 40000-4FFFD, 50000-5FFFD,
60000-6FFFD, 70000-7FFFD, 80000-8FFFD, 90000-9FFFD, A0000-AFFFD,
B0000-BFFFD, C0000-CFFFD, D0000-DFFFD, E0000-EFFFD
The Cyrillic characters in your identifier are in the range 0100-167F.
The C++ specification further mandates that characters encoded in the source encoding be handled identically to UCNs:
Any source file character not in the basic source character set (2.3) is replaced by the universal-character-name that designates that character. (An implementation may use any internal encoding, so long as an actual extended character encountered in the source file, and the same extended character expressed in the source file as a universal-character-name (i.e., using the \uXXXX notation), are handled equivalently
— n3337 §2.2 Phases of translation [lex.phases]/1
So given clang's choice of UTF-8 as the source encoding, the spec mandates that these characters be converted to UCNs (or that clang's behavior be indistinguishable from performing such a conversion), and these UCNs are permitted by the spec to appear in identifiers.
It goes even further. Emoji characters happen to be in the ranges allowed by the C++ spec, so if you've seen some of those examples of Swift code with emoji identifiers and were surprised by such capability you might be even more surprised to know that C++ has exactly the same capability:
http://rextester.com/EPYJ41676
http://imgur.com/EN6uanB
Another fact that may be surprising is that this behavior isn't new with C++11; C++ has mandated it since C++98. It's just that compilers ignored it for a long time: Clang implemented this feature in version 3.3, in 2013. According to this documentation, Microsoft supports it in Visual Studio 2015.
Even today, GCC 6.1 only supports UCNs in identifiers when they are written literally, and does not obey the mandate that any character in its extended source character set must be treated identically to the corresponding universal-character-name. E.g. gcc allows int \u043a\u043e\u0448\u043a\u0430 = 10; but will not allow int кошка = 10; even with -finput-charset=utf-8.
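For reference, here is a form GCC does accept: the same Cyrillic identifier кошка spelled with explicit UCNs (a minimal sketch based on the claim above):
#include <iostream>

int main()
{
    int \u043a\u043e\u0448\u043a\u0430 = 10; // identifier written with explicit UCNs
    std::cout << \u043a\u043e\u0448\u043a\u0430 << std::endl;
    return 0;
}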

C++ Compilation. Phase of translation #1. Universal character name

I can't understand what this passage in the C++ standard means:
Any source file character not in the basic source character set (2.3)
is replaced by the universal-character-name that designates that
character. (An implementation may use any internal encoding, so long
as an actual extended character encountered in the source file, and
the same extended character expressed in the source file as a
universal-character-name (i.e., using the \uXXXX notation), are
handled equivalently except where this replacement is reverted in a
raw string literal.)
As I understand it, if the compiler sees a character not in the basic source character set, it just replaces it with a sequence of characters in the format \uNNNN or \UNNNNNNNN. But I don't get how to obtain this NNNN or NNNNNNNN.
So this is my question: how is this conversion done?
Note the preceding sentence which states:
Physical source file characters are mapped, in an implementation-defined manner, to the basic source character set (introducing new-line characters for end-of-line indicators) if necessary.
That is, it's entirely up to the compiler how it actually interprets the characters or bytes that make up your file. In doing this interpretation, it must decide which of the physical characters belong to the basic source character set and which don't. If a character does not belong, then it is replaced with the universal character name (or at least, the effect is as if it had done).
The point of this is to reduce the source file down to a very small set of characters - there are only 96 characters in the basic source character set. Any character not in the basic source character set has been replaced by \, u or U, and some hexadecimal digits (0-F).
A universal character name is one of:
\uNNNN
\UNNNNNNNN
Where each N is a hexadecimal digit. The meaning of these digits is given in §2.3:
The character designated by the universal-character-name \UNNNNNNNN is that character whose character short name in ISO/IEC 10646 is NNNNNNNN; the character designated by the universal-character-name \uNNNN is that character whose character short name in ISO/IEC 10646 is 0000NNNN. If the hexadecimal value for a universal-character-name corresponds to a surrogate code point (in the range 0xD800–0xDFFF, inclusive), the program is ill-formed.
The ISO/IEC 10646 standard originated before Unicode and defined the Universal Character Set (UCS). It assigned code points to characters and specified how those code points should be encoded. The Unicode Consortium and the ISO group then joined forces to work on Unicode. The Unicode standard specifies much more than ISO/IEC 10646 does (algorithms, functional character specifications, etc.) but both standards are now kept in sync.
So you can think of the NNNN or NNNNNNNN as the Unicode code point for that character.
As an example, consider a line in your source file containing this:
const char* str = "Hellô";
Since ô is not in the basic source character set, that line is internally translated to:
const char* str = "Hell\u00F4";
This will give the same result.
There are only certain parts of your code where a universal-character-name is permitted (a small example follows the list):
In string literals
In character literals
In identifiers (however, this is not very well supported)
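For illustration, a minimal sketch using all three; the identifier name is made up, and, as noted, identifier support varies by compiler:
#include <iostream>

int main()
{
    const char* s = "\u00F4";    // UCN in a string literal
    char32_t    c = U'\u00F4';   // UCN in a character literal
    int  valu\u00F4 = 42;        // UCN in an identifier (support varies)
    std::cout << s << ' ' << valu\u00F4 << ' '
              << static_cast<unsigned long>(c) << '\n';
    return 0;
}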
But I don't get how to obtain this NNNN or NNNNNNNN. So this is my question: how is this conversion done?
The mapping is implementation-defined (e.g. §2.3 footnote 14). For instance if I save the following file as Latin-1:
#include <iostream>

int main() {
    std::cout << "Hellö\n";
}
And compile it with g++ on OS X, I get the following output after running it:
Hell�
… but if I had saved it as UTF-8 I would have gotten this:
Hellö
Because GCC assumes UTF-8 as the input encoding on my system.
Other compilers may perform different mappings.
So, if your file is called Hello°¶.c and the compiler uses that name internally, e.g. if we do:
cout << __FILE__ << endl;
the compiler would translate Hello°¶.c to Hello\u00b0\u00b6.c.
However, when I just tried this with g++ it doesn't do that...
But the assembler output contains:
.string "Hello\302\260\302\266.c"

Character sets - Not clear

The standard defines
basic source character set
basic execution character set and its wide char counterpart
It also defines 'execution character set' and its wide char counterpart as follows
$2.2/3- "The execution character set
and the execution wide-character set
are supersets of the basic execution
character set and the basic execution
wide-character set, respectively. The
values of the members of the execution
character sets are
implementation-defined, and any
additional members are
locale-specific."
Q1. I don't think I understand this completely, particularly the last statement. Any pointers on this aspect?
Further,
$3.9.1 - "Objects declared as
characters (char) shall be large
enough to store any member of the
implementation’s basic character set."
Q2. In 3.9.1, does the phrase 'basic character set' mean 'basic execution character set'?
You need to distinguish between the source character set, the execution character set, the execution wide-character set, and their basic versions:
The basic source character set:
§2.1.1: The basic source character set consists of 96 characters […]
This character set has exactly 96 characters. They fit into 7 bits. Characters like @ are not included.
Let's pick some example binary representations for a few basic source characters. They can be completely arbitrary, and there is no need for them to correspond to ASCII values.
A -> 0000000
B -> 0100100
C -> 0011101
The basic execution character set …
§2.1.3: The basic execution character set and the basic execution wide-character set shall each contain all the members of the basic source character set, plus control characters representing alert, backspace, and carriage return, plus a null character (respectively, null wide character), whose representation has all zero bits.
As stated, the basic execution character set contains all members of the basic source character set. It still doesn't include any other characters like @. The basic execution character set can use a different binary representation.
As stated, the basic execution character set also contains representations for carriage return, a null character, and other control characters.
A -> 10110101010
B -> 00001000101 <- basic source character set
C -> 10101011111
----------------------------------------------------------
null -> 00000000000
Backspace -> 11111100011
If the basic execution character set is 11 bits long (like in this example) the char data type shall be large enough to store 11 bits but it may be longer.
… and The basic execution wide character set:
The basic execution wide-character set is used for wide characters (wchar_t). It is basically the same as the basic execution character set but can have different binary representations as well.
A -> 1011010101010110101010
B -> 0000100010110101011111 <- basic source character set
C -> 1010100101101000011011
---------------------------------------------------------------------
null -> 0000000000000000000000
Backspace -> 1111110001100000000001
The only fixed member is the null character which needs to be a sequence of 0 bits.
Converting between basic character sets:
§2.1.1.5: Each source character set member, escape sequence, or universal-character-name in character literals and string literals is converted to a member of the execution character set (2.13.2, 2.13.4).
When a C++ source file is compiled, each character of the source character set is converted into the execution (wide) character set.
Example:
const char* string0 = "BA\bC";
const wchar_t* string1 = L"BA\bC";
Since string0 is a narrow character string, it will be converted to the basic execution character set, and string1 will be converted to the basic execution wide-character set.
string0 -> 00001000101 10110101010 11111100011 10101011111
string1 -> 0000100010110101011111 1011010101010110101010 // continued
1111110001100000000001 1010100101101000011011
Something about file encodings:
There are several kinds of file encodings. For example, ASCII, which uses 7 bits per character, or Windows-1252, which uses 8 bits (known as ANSI).
ASCII doesn't contain non-English characters. ANSI contains some European characters like ä Ö ä Õ ø.
Newer file encodings like UTF-8 or UTF-32 can contain characters of any language. UTF-8 characters are variable in length; UTF-32 characters are 32 bits long.
File encoding requirements:
Most compilers offer a command line switch to specify the file encoding of the source file.
A C++ source file needs to be encoded in a file encoding which has a representation of the basic source character set. For example, the file encoding of the source file needs to have a representation of the ; character.
If you cannot represent the character ; within the encoding chosen for the source file, that encoding is not suitable as a C++ source file encoding.
Non-basic character sets:
Characters not included in the basic source character set belong to the source character set. The source character set is equivalent to the file encoding.
For example, the @ character is not included in the basic source character set, but it may be included in the source character set. The chosen file encoding of the input source file might contain a representation of @. If it doesn't contain a representation for @, you can't use the character @ within strings.
Characters not included in the basic execution (wide) character set belong to the execution (wide) character set.
Remember that the compiler converts characters from the source character set to the execution character set and the execution wide-character set. Therefore there needs to be a way for these characters to be converted.
For example, if you specify Windows-1252 as the encoding of the source character set and ASCII as the execution wide-character set, there is no way to convert this string:
const char* string0 = "string with European characters ö, Ä, ô, Ð.";
These characters cannot be represented in ASCII.
Specifying character sets:
Here are some examples of how to specify the character sets using GCC. The default values are included.
-finput-charset=UTF-8 <- source character set
-fexec-charset=UTF-8 <- execution character set
-fwide-exec-charset=UTF-32 <- execution wide character set
With UTF-8 and UTF-32 as the default encodings, C++ source files can contain strings with characters of any language. UTF-8 characters can be converted both ways without problems.
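For example, a full invocation might look like this (the source file name is made up; the flags are as documented by GCC):
g++ -finput-charset=UTF-8 -fexec-charset=UTF-8 -fwide-exec-charset=UTF-32 -o demo demo.cpp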
The extended character set:
§1.1.3: multibyte character, a sequence of one or more bytes representing a member of the extended character set of either the source or the execution environment. The extended character set is a superset of the basic character set (2.2).
Multibyte characters are longer than a single ordinary character. They contain an escape (shift) sequence marking them as multibyte characters.
Multibyte characters are processed according to the locale set in the user's runtime environment. They are converted at runtime to the encoding set in the user's environment.
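A minimal sketch of such a runtime conversion using the standard multibyte conversion functions; the string is assumed to be encoded in a UTF-8 execution character set, and the empty locale name selects the user's environment locale:
#include <clocale>
#include <cstdio>
#include <cstdlib>

int main()
{
    std::setlocale(LC_ALL, "");                  // use the locale from the user's environment

    const char* mb = "\u00e4\u00f6\u00fc";       // multibyte string in the execution character set
    wchar_t wide[16];
    std::size_t n = std::mbstowcs(wide, mb, 16); // convert multibyte -> wide at runtime
    if (n != static_cast<std::size_t>(-1))
        std::printf("converted %zu wide characters\n", n);
    return 0;
}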