Writing a Unicode(?) character directly from source code to WriteConsoleOutput - C++

I'm trying to use WriteConsoleOutput from the WinApi to write characters to the command prompt window buffer. The thing is, I'd really like to be able to write characters such as ☺ directly into the source code, as-is, instead of using some kind of encoding/notation like '\uFFFF' or '0xFF', since I don't understand them too well (differences between codepages/character sets/etc.)
The code below showcases the simplest form of my problem. Running this code does not print ☺ into the command prompt window, but a question mark (?) instead.
#include <Windows.h>

int main()
{
    HANDLE h = GetStdHandle(STD_OUTPUT_HANDLE);
    CHAR_INFO c[1] = {0};
    COORD cS = {1, 1};
    COORD cH = {0, 0};
    SMALL_RECT sr = {0, 0, 0, 0};
    c[0].Attributes = FOREGROUND_INTENSITY;
    c[0].Char.UnicodeChar = '☺';
    WriteConsoleOutput(h, c, cS, cH, &sr);
    Sleep(5000);
    return 0;
}
It is vital for my code to display output identically between all Windows versions, regardless of the languages installed/used. So to my knowledge (which admittedly is absolutely minimal), I'd need to set a specific codepage (one which would hopefully be supported by the command prompt in any language version of Windows).
I've tried:
• Changing from using the CHAR_INFO.UnicodeChar to CHAR_INFO.AsciiChar
• Fiddling around with SetConsoleCP and SetConsoleOutputCP functions, but I haven't got a clue on how to utilize them to help me with this problem.
• Changing the Visual Studio -> Project -> Project properties.. -> Character Set setting to every possible value.
• Using specifically either WriteConsoleOutputA or WriteConsoleOutputW in addition to the aforementioned settings
• Changing the source code file encoding to UTF-8 with(/out) signature.
In my project I'm programmatically setting the command prompt font to 8x8 Terminal, which to my knowledge does not support actual unicode characters. The available characters are displayed here. Those characters do include '☺', so I'm not entirely sure my question is about unicode. I have no idea anymore. Please help.

C source has to be ASCII only. If you embed non-ASCII characters in a C source file, an IDE might show them in what appears to be the correct form, but the compiler quite likely treats them differently, and the function you eventually pass them to can treat them differently still. It's just not portable or reliable. You can, however, use the \x escape sequence to embed arbitrary bytes in C strings.
UTF-8 is good for internal use, but the Windows APIs don't yet support it, so you need to convert to Windows 16-bit characters (UTF-16, nearly but not quite) to display extended characters. You also have to ensure that you are calling the wide-character version of the Windows API. Most Windows API functions that take strings come in an A and a W version (ANSI and wide) for binary backwards compatibility. If you query the identifier in the IDE (Go To Definition, etc.) you should see which version you have.
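For example, a minimal sketch of the question's program rewritten along those lines, calling the wide (W) API explicitly and using the universal-character-name escape \u263A for ☺. Whether the glyph actually renders still depends on the console font:

#include <Windows.h>

int main()
{
    HANDLE h = GetStdHandle(STD_OUTPUT_HANDLE);
    CHAR_INFO c[1] = { 0 };
    COORD bufferSize = { 1, 1 };
    COORD bufferCoord = { 0, 0 };
    SMALL_RECT writeRegion = { 0, 0, 0, 0 };

    c[0].Attributes = FOREGROUND_INTENSITY;
    c[0].Char.UnicodeChar = L'\u263A';   // U+263A WHITE SMILING FACE, as a wide literal

    // Call the wide version explicitly so UnicodeChar is interpreted as UTF-16.
    WriteConsoleOutputW(h, c, bufferSize, bufferCoord, &writeRegion);
    Sleep(5000);
    return 0;
}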

Related

Can't display 'ä' in GLFW window title

glfwSetWindowTitle(win, "Nämen");
Becomes "N?men", where '?' is shown in a little black, twisted square, indicating that the character could not be displayed.
How do I display 'ä'?
If you want to use non-ASCII letters in the window title, then the string has to be utf-8 encoded.
GLFW: Window title:
The window title is a regular C string using the UTF-8 encoding. This means for example that, as long as your source file is encoded as UTF-8, you can use any Unicode characters.
If you see a little black, twisted square then this indicates that the ä is encoded with some ISO encoding that is not UTF-8, maybe something like Latin-1. To fix this you need to open the file in an editor in which you can change its encoding, change it to UTF-8 (without BOM) and fix the ä in the title.
It seems like the GLFW implementation does not work according to the specification in this case. Probably the function still uses Latin-1 instead of UTF-8.
I had the same problem on GLFW 3.3 Windows 64 bit precompiled binaries and fixed it like this:
SetWindowTextA(glfwGetWin32Window(win), "Nämen");
The issue does not lie within GLFW but within the compiler. Encodings are handled by the major compilers as follows:
Good guy clang assumes that every file is encoded in UTF-8
Trusty gcc checks the system's settings1 and falls back on UTF-8 when it fails to determine one.
MSVC checks for a BOM and uses the detected encoding; otherwise it assumes the file is encoded using the user's current code page2.
You can determine your current code page by simply running chcp in your (Windows) console or PowerShell. For me, on a fresh install of Windows 11, it yields "850" (language: English; keyboard: German), which stands for Code Page 850.
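If you prefer to check from code rather than from the shell, here is a small sketch (not part of the original answer) using the Win32 calls GetACP and GetConsoleOutputCP, which print the ANSI code page MSVC assumes for BOM-less sources and the code page chcp reports for the console:

#include <Windows.h>
#include <cstdio>

int main()
{
    // GetACP(): the "ANSI" code page MSVC assumes for BOM-less source files (e.g. 1252).
    // GetConsoleOutputCP(): what `chcp` reports for the current console window (e.g. 850).
    std::printf("ANSI code page:    %u\n", GetACP());
    std::printf("Console code page: %u\n", GetConsoleOutputCP());
    return 0;
}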
To fix this issue you have several solutions:
Change your system's code page. Arguably the worst solution.
Prefix your strings with u8, escape all Unicode literals and convert the string to wide char before passing it to Win32 functions; e.g.:
const char* title = u8"\u0421\u043b\u0430\u0432\u0430\u0020\u0423\u043a\u0440\u0430\u0457\u043d\u0456\u0021";
// This conversion is actually performed by GLFW; see footnote ^3
const int l = MultiByteToWideChar(CP_UTF8, 0, title, -1, NULL, 0);
wchar_t* buf = static_cast<wchar_t*>(_malloca(l * sizeof(wchar_t)));
MultiByteToWideChar(CP_UTF8, 0, title, -1, buf, l);
SetWindowTextW(hWnd, buf);
_freea(buf);
Save your source files with UTF-8 encoding WITH BOM. This allows you to write your strings without having to escape them. You'll still need to convert the string to a wide char string using the method seen above.
Specify the /utf-8 flag when compiling; this has the same effect as the previous solution, but you don't need the BOM anymore.
The solutions stated above still require you to convert your good and nice string to a big chunky wide string.
Another option would be to provide a manifest4 with the activeCodePage set to UTF-8. This way all Win32 functions with the A-suffix (e.g. SetWindowTextA) accept and properly handle UTF-8 strings, provided the running system is Windows version 1903 or newer.
TL;DR
Compile your application with the /utf-8 flag active.
IMPORTANT: This works for the Win32 APIs. This doesn't let you magically write Unicode emojis to the console like a hipster JS developer.
1 I suppose it reads the LC_ALL setting on Linux. In the last 6 years I have never seen a distribution that does NOT specify UTF-8. However, take this information with a grain of salt; I might be entirely wrong about how gcc handles this now.
2 If no byte-order mark is found, it assumes that the source file is encoded in the current user code page [...].
3 GLFW performs the conversion as seen here.
4 More about Win32 ANSI-APIs, Manifest and UTF-8 can be found here.

C++ Lithuanian language, how to get more than ASCII

I am trying to use Lithuanian in my C++ application, but every attempt has been unsuccessful.
Multi-byte character set is used. I have tried everything I could think of; I am new to C++ and have never tried to do anything in Lithuanian before.
Tried every variant of setlocale(LC_ALL, "en_US.utf8"); setlocale(LC_ALL, "Lithuanian"); ...
Researched for 2 hours and didn't find proper examples or a solution.
I have an average-sized project which needs Lithuanian translation from a database, and it can't understand most of "ĄČĘĖĮŠŲŪąčęėįšųū".
Compiler - "Visual studio 2013"
Database - sqlite3.
I can't even get simple strings (defined by myself) to work and output as Lithuanian in a Win32 application.
In Windows use wide character strings (UTF-16 encoding1, wchar_t type) for internal text handling, and preferably UTF-8 for external text files and networking.
Note that Visual C++ will translate narrow text literals from the source encoding to Windows ANSI, which is a platform-dependent usually single-byte encoding (you can check which one via the GetACP API function), i.e., Visual C++ has the platform-specific Windows ANSI as its narrow C++ execution character set.
But also do note that for an app restricted to non-Windows platforms, i.e. Unix-land, it makes practical sense to do everything in UTF-8, based on char type.
For the database communication you may need to translate to and from the program's internal text representation.
This depends on what the database interface requires, which is not stated.
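For instance, since SQLite stores text as UTF-8 by default, a pair of conversion helpers along these lines is often all that's needed. This is a sketch, assuming the database hands you UTF-8 and the program uses UTF-16 wstrings internally:

#include <Windows.h>
#include <string>

// Convert UTF-8 (e.g. text coming out of sqlite3) to the UTF-16 wstring used internally.
std::wstring Utf8ToWide(const std::string& utf8)
{
    if (utf8.empty()) return std::wstring();
    const int len = MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                                        static_cast<int>(utf8.size()), nullptr, 0);
    std::wstring wide(len, L'\0');
    MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                        static_cast<int>(utf8.size()), &wide[0], len);
    return wide;
}

// And back again, for text going into the database.
std::string WideToUtf8(const std::wstring& wide)
{
    if (wide.empty()) return std::string();
    const int len = WideCharToMultiByte(CP_UTF8, 0, wide.data(),
                                        static_cast<int>(wide.size()),
                                        nullptr, 0, nullptr, nullptr);
    std::string utf8(len, '\0');
    WideCharToMultiByte(CP_UTF8, 0, wide.data(),
                        static_cast<int>(wide.size()),
                        &utf8[0], len, nullptr, nullptr);
    return utf8;
}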
Example for console output in Windows:
#include <iostream>
#include <stdio.h>   // _fileno
#include <fcntl.h>   // _O_WTEXT
#include <io.h>      // _setmode

auto main() -> int
{
    _setmode( _fileno( stdout ), _O_WTEXT );
    using namespace std;
    wcout << L"ĄČĘĖĮŠŲŪąčęėįšųū" << endl;
}
To make this compile by default with g++, the source code encoding needs to be UTF-8. Then, to make it produce correct results with Visual C++, the source code encoding needs to be UTF-8 with BOM, which happily is also accepted by modern versions of g++; otherwise the Visual C++ compiler will assume the Windows ANSI encoding and produce an incorrect UTF-16 string.
Not coincidentally this is the default meaning of UTF-8 in Windows, e.g. in the Notepad editor, namely UTF-8 with BOM.
But note that while in Windows the problem is that the main system compiler requires a BOM for UTF-8, in Unix-land the problem is the opposite: many old tools can't handle the BOM (for example, even MinGW g++ 4.9.1 isn't entirely up to speed: it sometimes includes the BOM bytes, incorrectly interpreted, in its error messages).
1) On other platforms wide character text can be encoded in other ways, e.g. with UTF-32. In fact the Windows convention is in direct conflict with the C and C++ standards which require that a single wchar_t should be able to encode any character in the extended character set. However, this requirement was, AFAIK, imposed after Windows adopted UTF-16, so the fault probably lies with the politics of the C and C++ standardization process, not yet another Microsoft'ism.
Complexity of internationalisation
There are several related but distinct topics that can cause mismatches between them, making a trial-and-error approach very tedious:
Type used for storing strings and chars: Windows uses wchar_t by default, but for most APIs you also have char-based equivalent functions.
Character set encoding: this defines how the chars stored in the type are to be understood, for example Unicode (UTF-8, UTF-16, UTF-32), 7-bit ASCII, or an 8-bit ANSI code page. In Windows, by default it is UTF-16 for wchar_t and ANSI/Windows for char.
Locale: defines, among other things, the character set assumptions used when processing strings. This permits language-independent functions like isalpha(i, loc), islower(i, loc), ispunct(i, loc) to find out whether a given character is alphabetic, a lower-case letter, or punctuation, for example to break user text down into words. C++ offers portable functions here (see the sketch after this list).
Output code page or font: used to show a character to the user. This assumes that the font shows the characters using the same character set that the code uses internally.
Source code encoding: for example, your editor could assume an ANSI encoding with the Windows-1252 character set.
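A small sketch of those locale-aware classification functions. The locale name "lt-LT" is an assumption; the exact spelling depends on the platform (e.g. "lt_LT.UTF-8" on Linux), and constructing an uninstalled locale will throw:

#include <locale>
#include <iostream>

int main()
{
    // Locale name is platform-specific; this spelling is accepted by recent MSVC runtimes.
    std::locale loc("lt-LT");

    wchar_t ch = L'Č';
    std::wcout << std::boolalpha
               << std::isalpha(ch, loc) << L'\n'   // true: Č is a letter in this locale
               << std::islower(ch, loc) << L'\n'   // false: it is upper case
               << std::ispunct(ch, loc) << L'\n';  // false: not punctuation
}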
Most typical errors
Problem no. 1 is Win32 console output, as Unicode is not well supported by the console. But this is not your problem here.
Another cause of mismatch is the encoding used by your text editor. It might not be Unicode but a Windows code page. In this case, you type "Č" and the editor displays it as such, but the editor might use the Windows-1257 encoding for Lithuanian and store 0xC8 in the file. If you then display this literal with a Windows Unicode function, it will interpret 0xC8 as "Latin capital E with grave" and print something else, as the right Unicode code point for "Č" is 0x010C!
It can be even worse: the compiler may have its own assumptions about the character set encoding used, and convert your literals into Unicode using false assumptions (it happened to me when I used some exotic code generation switch).
What to do?
To figure out what goes wrong, proceed by elimination:
First, for plain Windows, use the native Unicode setting. OK, it's UTF-16 and wchar_t instead of UTF-8, and as such comes with some drawbacks, but it's native and well supported.
Then use explicit Unicode coding in literals, for example TEXT("\u010C") instead of TEXT("Č"). This avoids editor and compiler mismatches.
If it's still not the right character, make sure that your font FULLY supports Unicode. The default system font, for instance, doesn't, while most others do. You can easily check with the Windows font panel (Windows key+R, fonts, then click on "search char") to display the character table of your font.
Set fonts explicitly in your code.
For example, a very tiny experiment:
...
case WM_PAINT:
{
    hdc = BeginPaint(hWnd, &ps);
    auto hf = CreateFont(24, 0, 0, 0, 0, TRUE, 0, 0, 0, 0, 0, 0, 0, L"Times New Roman");
    auto hfOld = SelectObject(hdc, hf); // if you comment this out, € and Č won't display
    TextOut(hdc, 50, 50, L"Test with éç € \u010C special chars", 30);
    SelectObject(hdc, hfOld);
    DeleteObject(hf);
    EndPaint(hWnd, &ps);
    break;
}

Printing out Korean in console C++

I am having trouble with printing out Korean.
I have tried various methods, to no avail.
I have tried
1.
cout << "한글" << endl;
2.
wcout << "한글" << endl;
3.
wprintf(L"한글\n");
4.
setlocale(LC_ALL, "korean");
wprintf("한글");
and more. But all of those print "한글".
I am using MinGW compiler, and my OS is windows 7.
P.S. Strangely, Java prints out Korean fine:
String kor = "한글";
System.out.println(kor);
works.
Set the console code page to UTF-8 before printing the text:
::SetConsoleOutputCP(65001);
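A minimal sketch of that approach, assuming the narrow string literal actually ends up as UTF-8 bytes in the executable (e.g. the source is saved as UTF-8 and built with /utf-8 on MSVC) and that the console font can render Hangul:

#include <Windows.h>
#include <iostream>

int main()
{
    // Tell the console to interpret the bytes we write as UTF-8.
    SetConsoleOutputCP(65001);  // 65001 == CP_UTF8

    // This literal must really be UTF-8 in the executable, which is why the
    // source encoding / compiler flag matters.
    std::cout << "한글" << std::endl;
    return 0;
}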
Since you are using Windows 7 you can use WriteConsoleW, which is part of the Windows API. #include <windows.h> and try the following code:
DWORD numCharsToWrite = str.length();
LPDWORD numCharsWritten = NULL;
WriteConsoleW(GetStdHandle(STD_OUTPUT_HANDLE), str.c_str(), numCharsToWrite, numCharsWritten, NULL);
where str is a std::wstring.
More on WriteConsoleW: https://msdn.microsoft.com/en-us/library/windows/desktop/ms687401%28v=vs.85%29.aspx
After having tried other methods this worked for me.
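For reference, a self-contained version of that snippet (a sketch; the wide literal requires the source file to be saved in an encoding the compiler understands, e.g. UTF-8 with BOM for MSVC):

#include <Windows.h>
#include <string>

int main()
{
    std::wstring str = L"한글\n";

    DWORD numCharsWritten = 0;
    WriteConsoleW(GetStdHandle(STD_OUTPUT_HANDLE),
                  str.c_str(),
                  static_cast<DWORD>(str.length()),
                  &numCharsWritten,
                  NULL);
    return 0;
}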
The problem is that there are a lot of places where this could be broken.
Here is an answer I posted some time ago (it covers Korean). The answer is for MSVC, but the same applies to MinGW (the compiler switches are different, and the locale name may be different).
Here are 5 traps which make this hard:
Source code encoding. The source has to use an encoding which supports all required characters. Nowadays UTF-8 is recommended. It is best to make sure your editor (IDE) is properly configured to enforce the source encoding.
You have to inform the compiler what the encoding of the source file is. For gcc it is -finput-charset=utf-8 (which is the default).
Encoding used by the executable. You have to define what encoding string literals should be encoded in within the final executable. This encoding should also cover the required characters, so UTF-8 is again the best choice. The gcc option is -fexec-charset=utf-8.
When you run the application you have to inform the standard library what encoding your string literals are defined in, i.e. what encoding the program logic uses. So somewhere in your code, at the beginning of execution, you need something like this (here UTF-8 is enforced):
std::locale::global(std::locale{".utf-8"});
and finally you have to instruct the streams what encoding they should use. So for std::cout and std::cin you should set the locale which is the default for the system:
auto streamLocale = std::locale{""};
// this impacts date/time/floating point formats, so you may want to tweak it to use just a specific encoding and the C locale for formatting
std::cout.imbue(streamLocale);
std::cin.imbue(streamLocale);
After this everything should work as desired without code which explicitly does conversions.
Since there are 5 places to make a mistake, this is the reason people have trouble with it and the internet is full of "hack" solutions.
Note that if the system is not configured to support all needed characters (for example, the wrong code page is set), then with this configuration the characters which could not be converted will be replaced with question marks.
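Putting steps 4 and 5 together, a minimal sketch. The locale names are assumptions: ".utf-8" is accepted by recent MSVC runtimes, and MinGW may need a different spelling:

#include <iostream>
#include <locale>

int main()
{
    // Step 4: tell the standard library that the program's narrow encoding is UTF-8.
    std::locale::global(std::locale{".utf-8"});

    // Step 5: imbue the standard streams with the system's default locale.
    auto streamLocale = std::locale{""};
    std::cout.imbue(streamLocale);
    std::cin.imbue(streamLocale);

    // Steps 1-3 (source encoding, -finput-charset, -fexec-charset) happen at build time.
    std::cout << "한글\n";
    return 0;
}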

Swedish characters don't compare correctly

For some reason if/else statements aren't working correctly for me in C++.
The problem is that when a variable is equal to the word for right (höger), the program won't take the if branch; instead it goes on to the else branch. If I replace the letter 'ö' with, say, 'o', so it becomes 'hoger', then the if statement works. So whenever I type the word 'höger' it won't go to the if statement, it goes to the else statement. However, if I make the variable equal to 'hoger' and then type 'hoger', it works. How can I make it possible to type 'höger' so that the if statement recognizes it? It's as if Swedish letters don't work.
My code look like this:
#include <iostream>
#include <string>
#include <clocale>   // setlocale
using namespace std;

int main() {
    setlocale(LC_ALL, "");
    string test;                            // Define variable
    cout << " Höger elle vänster" << endl;  // Right or left
    cin >> test;
    if (test == "höger") {                  // If right, then output this.
        cout << "Du valde höger" << endl;
    }
    else if (test == "vänster") {           // If left, then output this
        cout << "Du valde vänster" << endl;
    } else {
        // Do this
    }
}
The problem is almost certainly to do with encodings.
The C/C++ language specs do not automatically handle anything other than 7 bit ASCII. The o-umlaut character is outside that range, and the exact behaviour depends on the encoding of your source code file.
The most likely possibilities are ISO 8859-1, Windows ANSI-1252, UTF-8 or Windows OEM 850. The first two encode this character the same, but in each of the others it is different.
With a bit more information about the encoding and tool set you are using it may be possible to provide more specific diagnosis and advice.
[And by the way, if/else statements in C/C++ work just fine, thank you.]
If we assume for the moment that this is Windows and Visual C++, then this is what you're dealing with.
Source code written inside Visual Studio: code page 1252. Code point for the o-umlaut character is 0xf6.
Keyboard input read from the console: code page 850. Code point for the o-umlaut character is 0x94.
Obviously not a good match. However, Visual Studio can also quite happily edit source code files in many encodings including UTF-8 (with byte order mark), UTF-16 (wide characters) and code page 850. So:
Source code written inside Visual Studio: code page 850. Code point for the o-umlaut character is 0x94. Now it works.
You can also change the code page for your console using the CHCP command.
Change the console code page with CHCP 1252 and it works.
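If you'd rather not rely on the user running CHCP, here is a sketch of doing the same thing programmatically at startup, assuming the executable's narrow strings are in code page 1252 (the Visual Studio default) and noting that the console font caveat discussed further down still applies:

#include <Windows.h>
#include <iostream>
#include <string>

int main()
{
    // Make the console interpret our output and the keyboard input as code page 1252,
    // matching the encoding MSVC used for the narrow string literals.
    SetConsoleOutputCP(1252);
    SetConsoleCP(1252);

    std::string test;
    std::cout << "Höger eller vänster" << std::endl;
    std::cin >> test;
    if (test == "höger")
        std::cout << "Du valde höger" << std::endl;
    return 0;
}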
The behaviour of the compiler when reading source code is obliged by the standard to be consistent with the execution character set. See n3797 S2.2.5:
Each source character set member in a character literal or a string literal, as well as each escape
sequence and universal-character-name in a character literal or a non-raw string literal, is converted to the corresponding member of the execution character set
S2.3/3:
The basic execution character set and the basic execution wide-character set shall each contain all the members of the basic source character set, plus control characters representing alert, backspace, and carriage return, plus a null character (respectively, null wide character), whose representation has all zero bits. For each basic execution character set, the values of the members shall be non-negative and distinct from one another. In both the source and execution basic character sets, the value of each character after 0 in the above list of decimal digits shall be one greater than the value of the previous. The execution character set and the execution wide-character set are implementation-defined supersets of the basic execution character
set and the basic execution wide-character set, respectively. The values of the members of the execution character sets and the sets of additional members are locale-specific.
n3797 S2.14.3/1:
A character literal that does not begin with u, U, or L is an ordinary character literal, also referred to as a narrow-character literal. An ordinary character literal that contains a single c-char representable in the execution character set has type char, with value equal to the numerical value of the encoding of the c-char in the execution character set.
n3797 S2.14.5/6:
a string literal that does not begin with an encoding-prefix is an ordinary string
literal, and is initialized with the given characters.
The execution character set is implementation-defined. Microsoft's statement reqarding implementation-defined behaviour for the C compiler is here: http://msdn.microsoft.com/en-us/library/hx3yt8af.aspx. [I can't find a separate one for C++, so I assume this applies to both.]
The source character set is the set of legal characters that can appear in source files. For Microsoft C, the source character set is the standard ASCII character set.
Sorry about the language-lawyer stuff, but what this says is that the MSVC compiler is independent of locale/encoding and implements 8-bit ASCII, code page unspecified. Obviously the standard library functions may need to know the encoding for various purposes, but that is a whole other story.
As a final point, the Microsoft C compiler dates back around 30 years, to before Windows. It has always been possible to write source code in code page 850 and have it run correctly on the console, subject to careful handling of extended (8-bit) characters. Many people still do. The problem here is source code written in Windows ANSI or Unicode and keyboard input from an OEM (CP850) console. Change either one to get it to work correctly.
In practice this problem will only manifest itself in Windows, so I'll assume Windows.
Then the problem is that the C++ narrow extended execution character set(1) (encoding) does not match the encoding used by the console window. "Narrow" refers to the char type. "Execution character set" is a formal term employed by the C++ standard, and refers to the encoding that is assumed for text stored in the executable. The compiler translates source code literals to this encoding. It's also assumed for translation to/from any external encoding, such as translation to/from a console's encoding.
With Visual C++ the narrow encoding is always Windows ANSI(2), regardless of source code encoding, unless you trick the compiler. And assuming you're using Visual C++, this is then one encoding that you know.
The encoding in the console window is by default the one used for original IBM PC, in your case probably codepage 850 (a Western European variant of the original IBM PC English codepage 437). Run the Windows command interpreter cmd (Windows-key+R, type cmd, OK). Type chcp to check the current codepage. Type chcp 1252 to switch to Windows ANSI Western, which presumably is the Windows ANSI codepage on your machine. Run your program [.exe] file, e.g. by typing its full path, or by going to its directory and typing just its name, e.g.
[H:\dev\test\0046]
> cl /nologo /EHsc /GR encoding.cpp /Fe:b.exe
encoding.cpp
[H:\dev\test\0046]
> chcp & b
Active code page: 850
Höger elle vänster
höger
← No output here, didn't compare as equal.
[H:\dev\test\0046]
> chcp 1252
Active code page: 1252
[H:\dev\test\0046]
> b
Höger elle vänster
höger
Du valde höger
[H:\dev\test\0046]
> _
… where cl (short for original “Lattice C”) is the Visual C++ compiler.
You can change the console codepage more permanently by running regedit, going to this registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Nls\CodePage
and in the list in the right pane double-click the value named OEMCP (short for Original Equipment Manufacturer Code Page, referring to the IBM PC), change it to 1252, or more generally to the same value as the ACP value, and reboot the machine.
Oh, it's also necessary to change the console window font to a TrueType font such as Lucida Console, because the default is (an emulation of) a bitmapped font that only works correctly with the original console codepage. You can right click the console window title to get a menu, choose [Defaults], and configure the default font, size, colors etc. The changes won't affect the current console window, but they will apply to future console windows, except for those that have been configured individually(3).
An alternative to such console window configuration is to use the Console2 program. If you do, then in Windows 7 and later be sure to use the 64-bit version. Otherwise some things, such as invoking links to 64-bit programs, won't work.
Summing up, you can either
run the program from the command interpreter (using chcp to change the codepage), or
change the console codepage more permanently, as discussed above.
In either case it's a Good Idea™ to change the console window font to a TrueType font – and yes, this affects the functionality, not just the looks.
Note on additional Microsoft absurdity: in Windows 7 and later the "System" font used by default in console windows is actually, behind the scenes, a TrueType font with umpteen thousand glyphs, but it's used to emulate the old 16-bit Windows bitmapped fonts, with the same silly restrictions, so that you still have to change to some other TrueType font…
(1) See the C++11 standard §2.3/3.
(2) “Windows ANSI” depends on the Windows configuration and is always the codepage specified by the GetACP API function. In practice this function gets its value from the registry key/value referenced above. However, that's largely undocumented.
(3) In Windows XP Windows would ask if you wanted to save an individual console window configuration. Starting with Windows Vista it's saved with no question asked and no information that it's been saved. There is no user interface for removing such saved configurations, but they can be removed by programmatically altering shortcut files, and/or by registry editing, which however is both an impractical and brittle solution.
The only change I made to your code was the following:
// setlocale(LC_ALL, "");
char *l = setlocale(LC_ALL, NULL);
cout << "Current Locale: " << l << endl;
Because I don't have an “ISO” keyboard layout, I used Alt codes to type the character I needed. The following are the key combinations I used for the different code pages.
First run I had to type in Alt+246 for Code page 437
Second run, Alt+148 for Windows-1252
Below is the output when I change the code page between executions.
It seems the problem is the encoding of your source file when your IDE compiles it. If you are using Visual Studio you can change the file's encoding setting in the IDE.

Why does the Non-Unicode apps system locale make Unicode fonts with the symbol charset display incorrectly?

I'm trying to display Unicode chars from the Wingdings font (it's a Unicode TrueType font supporting the symbol charset only).
It's displayed correctly on my Win7/64 system using corresponding regional OS settings:
Formats: Russian
Location: Russia
System locale (AKA Language for Non-Unicode applications): English
But if I switch System locale to Russian, Unicode characters with codes > 127 are displayed incorrectly (replaced with boxes).
My application is created as using Unicode Charset in Visual Studio, it calls only Unicode Windows API functions.
Also I noted that several Windows apps also display such chars incorrectly with symbol fonts (Symbol, Wingdings, Webdings etc), e.g. Notepad, Beyond Compare 3. But WordPad and MS Office apps aren't affected.
Here is minimal code snippet (resources cleanup skipped for brevity):
LOGFONTW lf = { 0 };
lf.lfCharSet = SYMBOL_CHARSET;
lf.lfHeight = 50;
wcscpy_s(lf.lfFaceName, L"Wingdings");
HFONT f = CreateFontIndirectW(&lf);
SelectObject(hdc, f);
// First two chars displayed OK, 3rd and 4th aren't (replaced with boxes) if
// Non-Unicode apps language is NOT English.
TextOutW(hdc, 10, 10, L"\x7d\x7e\x81\xfc");
So the question is: why the hell does the Non-Unicode apps language setting affect Unicode apps?
And what is the correct (and most simple) way to display SYMBOL_CHARSET fonts without dependency to OS system locale?
The root cause of the problem is that the Wingdings font is actually a non-Unicode font. It supports Unicode partially, so some symbols are still displayed correctly. See @Adrian McCarthy's answer for details about how it probably works under the hood.
Also see more info here: http://www.fileformat.info/info/unicode/font/wingdings
and here: http://www.alanwood.net/demos/wingdings.html
So what can we do to avoid such problems? I found several ways:
1. Quick & dirty
Fall back to the ANSI version of the API, as @user1793036 suggested:
TextOutA(hdc, 10, 10, "\x7d\x7e\x81\xfc"); // Displayed correctly!
2. Quick & clean
Use the special Unicode F0xx range (Private Use Area) instead of ASCII character codes. It's supported by Wingdings:
TextOutW(hdc, 10, 10, L"\xf07d\xf07e\xf081\xf0fc"); // Displayed correctly!
To explore which Unicode symbols are actually supported by a font, a font viewer can be used, e.g. dp4 Font Viewer.
3. Slow & clean, but generic
But what to do if you don't know which characters you have to display and which font will actually be used? Here is the most universal solution: draw the text by glyph indices to avoid any undesired translations:
void TextOutByGlyphs(HDC hdc, int x, int y, const CStringW& text)
{
    CStringW glyphs;
    GCP_RESULTSW gcpRes = {0};
    gcpRes.lStructSize = sizeof(GCP_RESULTS);
    gcpRes.lpGlyphs = glyphs.GetBuffer(text.GetLength());
    gcpRes.nGlyphs = text.GetLength();
    const DWORD flags = GetFontLanguageInfo(hdc) & FLI_MASK;

    GetCharacterPlacementW(hdc, text.GetString(), text.GetLength(), 0,
                           &gcpRes, flags);
    glyphs.ReleaseBuffer(gcpRes.nGlyphs);

    ExtTextOutW(hdc, x, y, ETO_GLYPH_INDEX, NULL, glyphs.GetString(),
                glyphs.GetLength(), NULL);
}
TextOutByGlyphs(hdc, 10, 10, L"\x7d\x7e\x81\xfc"); // Displayed correctly!
Note the GetCharacterPlacementW() function usage. For some unknown reason the similar function GetGlyphIndicesW() would not work, returning 'unsupported' dummy values for chars > 127.
Here's what I think is happening:
The Wingdings font doesn't have Unicode mappings (a cmap table?). (You can see this by using charmap.exe: the Character set drop down control is grayed out.)
For fonts without Unicode mappings, I think Windows assumes that it depends on the "Language for Non-Unicode applications" setting.
When that's English, Windows (probably) uses code page 1252, and all the values map to themselves.
When that's Russian, Windows (probably) uses code page 1251, and then tries to remap them.
The '\x81' value in code page 1251 maps to U+0403, which obviously doesn't exist in the font, so you get a box. Similarly, '\xFC' maps to U+044C.
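A small sketch (my addition) to check that remapping theory: feeding the same byte values through MultiByteToWideChar with code page 1251 shows where they end up:

#include <Windows.h>
#include <cstdio>

int main()
{
    const char bytes[] = "\x7d\x7e\x81\xfc";
    wchar_t wide[8] = { 0 };

    // Interpret the bytes as code page 1251 (Cyrillic), the ANSI code page used when
    // "Language for Non-Unicode applications" is set to Russian.
    MultiByteToWideChar(1251, 0, bytes, 4, wide, 8);

    for (int i = 0; i < 4; ++i)
        std::printf("0x%02X -> U+%04X\n",
                    static_cast<unsigned char>(bytes[i]),
                    static_cast<unsigned>(wide[i]));
    // Expected: 0x81 -> U+0403 and 0xFC -> U+044C, matching the boxes described above.
    return 0;
}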
I assumed that if you used ExtTextOutW with the ETO_GLYPH_INDEX flag, Windows wouldn't try to interpret the values at all and just treat them as glyph indexes into the font. But that assumption is wrong.
However, there is another flag called ETO_IGNORELANGUAGE, which is reserved, but, empirically, it seems to solve the problem.
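Based on that observation, a hedged sketch of the workaround. ETO_IGNORELANGUAGE is documented as reserved, so this relies on observed behaviour rather than any contract:

#include <Windows.h>
#include <wchar.h>

void DrawWingdingsSample(HDC hdc)
{
    LOGFONTW lf = { 0 };
    lf.lfCharSet = SYMBOL_CHARSET;
    lf.lfHeight = 50;
    wcscpy_s(lf.lfFaceName, L"Wingdings");

    HFONT font = CreateFontIndirectW(&lf);
    HGDIOBJ old = SelectObject(hdc, font);

    // ETO_IGNORELANGUAGE appears to stop GDI from re-mapping the values through the
    // "Language for Non-Unicode applications" code page.
    const wchar_t text[] = L"\x7d\x7e\x81\xfc";
    ExtTextOutW(hdc, 10, 10, ETO_IGNORELANGUAGE, NULL,
                text, static_cast<UINT>(wcslen(text)), NULL);

    SelectObject(hdc, old);
    DeleteObject(font);
}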