Which character set to choose when compiling a C++ DLL?

Could someone give some info regarding the different character set options in Visual Studio's project properties?
The options are:
None
Unicode
Multi byte
I would like to make an informed decision as to which to choose.
Thanks.

All new software should be Unicode enabled. For Windows apps that means the UTF-16 character set, and for pretty much everyone else UTF-8 is often the best choice. The other character set choices in Windows programming should only be used for compatibility with older apps. They do not support the same range of characters as Unicode.

Multi-byte uses one or two bytes per character depending on the code page, None uses exactly one, and Unicode (UTF-16 on Windows) uses two or four.
None is not a good choice, as it doesn't support non-Latin characters. It's very frustrating when a non-English user tries to type their name into an edit box and it comes out mangled. Do not use None.
If you do not compute string lengths by hand, then from the programmer's point of view Multi-byte and Unicode barely differ, as long as you use the TEXT macro to wrap your string constants.
Some libraries explicitly require a certain encoding (DirectShow, etc.); just use what they want.
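As a minimal illustration of the TEXT/TCHAR point above (a sketch only; the caption and message text are placeholders), the same call builds under both the Multi-Byte and Unicode settings:

    #include <windows.h>
    #include <tchar.h>

    void show_greeting()
    {
        // TCHAR is wchar_t when the project character set is "Unicode" and char under
        // "Multi-Byte"; TEXT()/_T() add or drop the L prefix on the literal to match,
        // and MessageBox resolves to MessageBoxW or MessageBoxA accordingly.
        const TCHAR* caption = TEXT("Greeting");
        MessageBox(NULL, _T("Hello"), caption, MB_OK);
    }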

As Mr. Shiny recommended, Unicode is the right thing.
If you want to understand a bit more on what are the implications of that decision, take a look here: http://www.mihai-nita.net/article.php?artID=20050306b

Related

How to properly localize a cross platform program?

I am currently making a game engine that is eventually going to support all platforms. Currently I am working on Windows support with the Win32 API. The documentation suggests that I use wide strings/chars and the Unicode versions of the API functions so that my application can be localized. But if I use wide versions of everything (wcout, wstring, wchar_t, etc.), I will have to make my entire game engine use those wide types. That also means that when working with other platforms, I will either have to keep using wide types or convert between them.
My idea is that maybe my code could be compiled with wide string types on Windows and with normal string types on other platforms, perhaps with macro definitions. Is that the best way to do this? And how would I go about it?
I also don't really understand how Unicode works in C++. If I set my system locale to English, I get a compiler warning from MSVC if I store any Chinese characters in a normal string type. However, when I set my system locale to Chinese with UTF-8 encoding enabled, I get no compiler warnings when storing Unicode characters in normal strings. I also have no idea how Unicode works on other platforms. Can somebody explain this for me?
Thanks in advance.
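One way to sketch the macro idea described in the question (the NativeString/NATIVE_TEXT names below are hypothetical, not an established convention) is a small platform alias; the conversions between the two representations still have to happen somewhere, typically right before OS calls:

    #include <string>

    // Hypothetical aliases: wide strings on Windows, UTF-8 narrow strings elsewhere.
    #if defined(_WIN32)
        using NativeChar   = wchar_t;
        using NativeString = std::wstring;
        #define NATIVE_TEXT(s) L##s
    #else
        using NativeChar   = char;
        using NativeString = std::string;
        #define NATIVE_TEXT(s) s
    #endif

    NativeString window_title() { return NATIVE_TEXT("My Game"); }

This is essentially what TCHAR and _T() do inside <tchar.h>; the alternative several of the answers further down prefer is to keep UTF-8 in plain std::string everywhere and convert to UTF-16 only at the Win32 boundary.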

The proper way to handle Unicode with C++ in 2018?

I have tried searching stackoverflow to find an answer to this but the questions and answers I've found are around 10 years old and I can't seem to find consensus on the subject due to changes and possible progress.
There are several libraries that I know of outside of the STL that are supposed to handle Unicode:
http://userguide.icu-project.org/
https://github.com/nemtrif/utfcpp
https://github.com/CaptainCrowbar/unicorn-lib
There are a few features of the STL (wstring, codecvt_utf8) that were included, but people seem to be ambivalent about using them because they deal with UTF-16, which this site (UTF-8 Everywhere) says shouldn't be used, and many people online seem to agree with that premise.
The only thing I'm looking for is the ability to do four things with Unicode strings:
Read a string into memory
Search the string with a regex using Unicode or ASCII, and concatenate or do text replacement/formatting with it using either ASCII+Unicode numbers or characters.
Convert to ASCII + the Unicode number format for characters that don't fit in the ASCII range.
Write a string to disk or send wherever.
From what I can tell, ICU handles this and more. What I would like to know is if there is a standard way of handling this on Linux, Windows, and MacOS.
Thank you for your time.
I will try to throw some ideas here:
most C++ programs/programmers just assume that text is an almost opaque sequence of bytes. UTF-8 is probably responsible for that, and it is no surprise that many comments boil down to: don't worry about Unicode, just process UTF-8-encoded strings
files only contain bytes. At some point, if you try to internally process true Unicode code points, you will have to serialize that to bytes -> here again UTF-8 wins the point
as soon as you go outside the Basic Multilingual Plane (16-bit code points), things become more and more complex. Emoji are especially awful to process: an emoji can be followed by a variation selector (U+FE0E VARIATION SELECTOR-15 (VS15) for text style or U+FE0F VARIATION SELECTOR-16 (VS16) for emoji style) to alter its display, more or less like the old i, backspace, ^ overstrike sequence used with 1970s ASCII when one wanted to print î. That's not all: the characters U+1F3FB to U+1F3FF are used to provide a skin color for 102 human emoji spread across six blocks: Dingbats, Emoticons, Miscellaneous Symbols, Miscellaneous Symbols and Pictographs, Supplemental Symbols and Pictographs, and Transport and Map Symbols.
That simply means that up to 3 consecutive Unicode code points can represent one single glyph... so the idea that one character is one char32_t is still an approximation
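To make that concrete, a small count of bytes versus code points for a thumbs-up emoji followed by a skin-tone modifier; the two code points render as a single glyph:

    #include <cstdio>
    #include <string>

    int main()
    {
        // U+1F44D THUMBS UP SIGN followed by U+1F3FD MEDIUM SKIN TONE modifier:
        // two code points, eight UTF-8 bytes, one rendered glyph.
        std::string utf8 = "\xF0\x9F\x91\x8D\xF0\x9F\x8F\xBD";

        std::size_t code_points = 0;
        for (unsigned char c : utf8)
            if ((c & 0xC0) != 0x80)   // count every byte that is not a continuation byte
                ++code_points;

        std::printf("bytes: %zu, code points: %zu\n", utf8.size(), code_points);
        // prints: bytes: 8, code points: 2
    }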
My conclusion is that Unicode is a complex thing, and really requires a dedicated library like ICU. You can try to use simple tools like the converters of the standard library when you only deal with the BMP, but full support is far beyond that.
BTW: even other languages like Python that claim to have native Unicode support (which is, IMHO, far better than the current C++ situation) often fall short in places:
the tkinter GUI library cannot display any code point outside the BMP, even though it is what the standard IDLE tool is built on
several modules of the standard library are dedicated to Unicode in addition to the core language support (codecs and unicodedata), and other modules are available on the Python Package Index, like emoji support, because the standard library does not meet all needs
So Unicode support has been poor for more than 10 years, and I do not really expect things to get much better in the next 10...

What are the disadvantages to not using Unicode in Windows?

What are the disadvantages to not using Unicode on Windows?
By Unicode, I mean WCHAR and the wide API functions. (CreateWindowW, MessageBoxW, and so on)
What problems could I run into by not using this?
Your code won't be able to deal correctly with characters outside the currently selected codepage when dealing with system APIs.
Typical problems include unsupported characters being translated to question marks, inability to process text with special characters, in particular files with "strange characters" in their names/paths.
Also, several newer APIs are present only in the "wide" version.
Finally, each API call involving text will be marginally slower, since the "A" versions of the APIs are normally just thin wrappers around the "W" APIs that convert the parameters to UTF-16 on the fly - so you incur some overhead compared to a "plain" W call.
Nothing stops you from working in a narrow-character Unicode encoding (i.e. UTF-8) inside your application, but the Windows "A" APIs don't speak UTF-8, so you'd have to convert to UTF-16 and call the W versions anyway.
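A rough sketch of that conversion step, assuming the application keeps its text as UTF-8 in std::string (widen and show_message are just illustrative names):

    #include <windows.h>
    #include <string>

    // Convert a UTF-8 std::string to UTF-16 for the "W" APIs.
    std::wstring widen(const std::string& utf8)
    {
        if (utf8.empty()) return std::wstring();
        int len = MultiByteToWideChar(CP_UTF8, 0, utf8.data(), (int)utf8.size(), nullptr, 0);
        std::wstring wide(len, L'\0');
        MultiByteToWideChar(CP_UTF8, 0, utf8.data(), (int)utf8.size(), &wide[0], len);
        return wide;
    }

    void show_message(const std::string& utf8_text)
    {
        MessageBoxW(nullptr, widen(utf8_text).c_str(), L"Info", MB_OK);
    }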
I believe the gist of the original question was "should I compile all my Windows apps with #define _UNICODE, and what's the downside if I don't?"
My original reply was "Yeah, you should. We've moved past 8-bit ASCII, and _UNICODE is a reasonable default for any modern Windows code."
For Windows, I still believe that's reasonably good advice. But I've deleted my original reply, because I didn't realize until I re-read my own links how much "UTF-16 is quite a sad state of affairs" (as Matteo Italia eloquently put it).
For example:
http://utf8everywhere.org/
Microsoft has ... mistakenly used ‘Unicode’ and ‘widechar’ as synonyms for ‘UCS-2’ and ‘UTF-16’. Furthermore, since UTF-8 cannot be set as the encoding for narrow string WinAPI, one must compile her code with _UNICODE rather than _MBCS. Windows C++ programmers are educated that Unicode must be done with ‘widechars’. As a result of this mess, they are now among the most confused ones about what is the right thing to do about text.
I heartily recommend these three links:
The Absolute Minimum Every Software Developer Should Know about Unicode
Should UTF-16 Be Considered Harmful?
UTF-8 Everywhere
IMHO...

Is TCHAR still relevant?

I'm new to Windows programming and after reading the Petzold book I wonder:
is it still good practice to use the TCHAR type and the _T() function to declare strings or if I should just use the wchar_t and L"" strings in new code?
I will target only Windows 2000 and up and my code will be i18n from the start up.
The short answer: NO.
Like all the others already wrote, a lot of programmers still use TCHARs and the corresponding functions. In my humble opinion the whole concept was a bad idea. UTF-16 string processing is a lot different from simple ASCII/MBCS string processing. If you use the same algorithms/functions with both of them (this is what the TCHAR idea is based on!), you get very bad performance on the UTF-16 version if you are doing anything more than simple string concatenation (like parsing, etc.). The main reason is surrogate pairs.
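A short illustration of the surrogate issue: a single code point outside the BMP occupies two UTF-16 code units, so any algorithm that assumes one element per character gets it wrong:

    #include <cstdio>
    #include <string>

    int main()
    {
        // U+1F600 GRINNING FACE is stored as the surrogate pair D83D DE00 in UTF-16.
        std::u16string s = u"\U0001F600";

        std::printf("UTF-16 code units: %zu\n", s.size());            // 2, not 1
        std::printf("first unit: %04X second unit: %04X\n",
                    (unsigned)s[0], (unsigned)s[1]);                   // D83D DE00
    }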
With the sole exception of when you really have to compile your application for a system which doesn't support Unicode, I see no reason to use this baggage from the past in a new application.
I have to agree with Sascha. The underlying premise of TCHAR / _T() / etc. is that you can write an "ANSI"-based application and then magically give it Unicode support by defining a macro. But this is based on several bad assumptions:
That you actively build both MBCS and Unicode versions of your software
Otherwise, you will slip up and use ordinary char* strings in many places.
That you don't use non-ASCII backslash escapes in _T("...") literals
Unless your "ANSI" encoding happens to be ISO-8859-1, the resulting char* and wchar_t* literals won't represent the same characters.
That UTF-16 strings are used just like "ANSI" strings
They're not. Unicode introduces several concepts that don't exist in most legacy character encodings. Surrogates. Combining characters. Normalization. Conditional and language-sensitive casing rules.
And perhaps most importantly, the fact that UTF-16 is rarely saved on disk or sent over the Internet: UTF-8 tends to be preferred for external representation.
That your application doesn't use the Internet
(Now, this may be a valid assumption for your software, but...)
The web runs on UTF-8 and a plethora of rarer encodings. The TCHAR concept only recognizes two: "ANSI" (which can't be UTF-8) and "Unicode" (UTF-16). It may be useful for making your Windows API calls Unicode-aware, but it's damned useless for making your web and e-mail apps Unicode-aware.
That you use no non-Microsoft libraries
Nobody else uses TCHAR. Poco uses std::string and UTF-8. SQLite has UTF-8 and UTF-16 versions of its API, but no TCHAR. TCHAR isn't even in the standard library, so no std::tcout unless you want to define it yourself.
What I recommend instead of TCHAR
Forget that "ANSI" encodings exist, except for when you need to read a file that isn't valid UTF-8. Forget about TCHAR too. Always call the "W" version of Windows API functions. #define _UNICODE just to make sure you don't accidentally call an "A" function.
Always use UTF encodings for strings: UTF-8 for char strings and UTF-16 (on Windows) or UTF-32 (on Unix-like systems) for wchar_t strings. typedef UTF16 and UTF32 character types to avoid platform differences.
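Since C++11 those typedefs no longer need to be hand-rolled from integer types; char16_t and char32_t already have fixed widths on every platform, so a minimal version (the alias names here are just examples) is:

    #include <string>

    // Fixed-width code unit types, regardless of how wide wchar_t happens to be.
    typedef char16_t UTF16;   // UTF-16 code unit (wchar_t on Windows is also 16-bit)
    typedef char32_t UTF32;   // UTF-32 code unit (wchar_t on most Unix-like systems is 32-bit)

    typedef std::u16string Utf16String;
    typedef std::u32string Utf32String;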
If you're wondering if it's still in practice, then yes - it is still used quite a bit. No one will look at your code funny if it uses TCHAR and _T(""). The project I'm working on now is converting from ANSI to unicode - and we're going the portable (TCHAR) route.
However...
My vote would be to forget all the ANSI/UNICODE portable macros (TCHAR, _T(""), and all the _tXXXXXX calls, etc...) and just assume Unicode everywhere. I really don't see the point of being portable if you'll never need an ANSI version. I would use all the wide character functions and types directly. Prepend all string literals with an L.
I would still use the TCHAR syntax if I was doing a new project today. There's not much practical difference between using it and the WCHAR syntax, and I prefer code which is explicit in what the character type is. Since most API functions and helper objects take/use TCHAR types (e.g.: CString), it just makes sense to use it. Plus it gives you flexibility if you decide to use the code in an ASCII app at some point, or if Windows ever evolves to Unicode32, etc.
If you decide to go the WCHAR route, I would be explicit about it. That is, use CStringW instead of CString, and use the casting macros when converting to TCHAR (e.g. CW2CT).
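For reference, a sketch of what that looks like with the ATL conversion helpers (requires ATL; set_title is only an example function):

    #include <windows.h>
    #include <atlstr.h>   // CString, CStringW
    #include <atlconv.h>  // CW2CT and the other ATL conversion classes

    void set_title(HWND hwnd, const wchar_t* wide_title)
    {
        CStringW explicitWide(wide_title);   // explicitly wide, regardless of _UNICODE

        // CW2CT yields a const TCHAR* view of the wide string, usable wherever
        // LPCTSTR is expected (here, the TCHAR-based SetWindowText macro).
        ::SetWindowText(hwnd, CW2CT(explicitWide));
    }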
That's my opinion, anyway.
I would like to suggest a different approach (neither of the two).
To summarize, use char* and std::string, assuming UTF-8 encoding, and do the conversions to UTF-16 only when wrapping API functions.
More information and justification for this approach in Windows programs can be found in http://www.utf8everywhere.org.
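A sketch of the boundary conversion this answer describes, here for text coming back out of a "W" API into the internal UTF-8 representation (narrow and window_title_utf8 are illustrative names):

    #include <windows.h>
    #include <string>

    // Convert UTF-16 coming back from a "W" API into the internal UTF-8 representation.
    std::string narrow(const std::wstring& utf16)
    {
        if (utf16.empty()) return std::string();
        int len = WideCharToMultiByte(CP_UTF8, 0, utf16.data(), (int)utf16.size(),
                                      nullptr, 0, nullptr, nullptr);
        std::string utf8(len, '\0');
        WideCharToMultiByte(CP_UTF8, 0, utf16.data(), (int)utf16.size(),
                            &utf8[0], len, nullptr, nullptr);
        return utf8;
    }

    std::string window_title_utf8(HWND hwnd)
    {
        wchar_t buffer[256] = {};
        GetWindowTextW(hwnd, buffer, 256);
        return narrow(buffer);
    }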
The Introduction to Windows Programming article on MSDN says
New applications should always call the Unicode versions (of the API).
The TEXT and TCHAR macros are less useful today, because all applications should use Unicode.
I would stick to wchar_t and L"".
TCHAR/WCHAR might be enough for some legacy projects. But for new applications, I would say NO.
All this TCHAR/WCHAR stuff is there for historical reasons. TCHAR provides a seemingly neat way (a disguise) to switch between ANSI text encoding (MBCS) and Unicode text encoding (UTF-16). In the past, people did not have a good grasp of the number of characters in all the languages of the world. They assumed 2 bytes were enough to represent all characters, and thus used a fixed-length character encoding scheme with WCHAR. However, this has no longer been true since the release of Unicode 2.0 in 1996.
That is to say:
No matter which you use in CHAR/WCHAR/TCHAR, the text processing part in your program should be able to handle variable length characters for internationalization.
So you actually need to do more than choosing one from CHAR/WCHAR/TCHAR for programming in Windows:
If your application is small and does not involve text processing (i.e. it just passes text strings around as arguments), then stick with WCHAR, since that makes it easier to work with the Unicode-aware WinAPI.
Otherwise, I would suggest using UTF-8 as the internal encoding and storing texts in char strings or std::string, and converting them to UTF-16 when calling WinAPI. UTF-8 is now the dominant encoding and there are lots of handy libraries and tools for processing UTF-8 strings.
Check out this wonderful website for more in-depth reading:
http://utf8everywhere.org/
Yes, absolutely; at least for the _T macro. I'm not so sure about the wide-character stuff, though.
The reason is to better support Windows CE or other non-standard Windows platforms. If you're 100% certain that your code will remain on NT, then you can probably just use regular C-string declarations. However, it's best to tend towards the more flexible approach, as it's much easier to #define that macro away on a non-Windows platform than to go through thousands of lines of code and add it everywhere in case you need to port some library to Windows Mobile.
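The "#define that macro away" part can be as small as this when the code has to build on a platform without <tchar.h> (a sketch only):

    #include <stddef.h>
    #include <string.h>
    #include <wchar.h>

    #if defined(_WIN32)
        #include <tchar.h>     // supplies TCHAR, _T() and the _tcs* mappings
    #else
        typedef char TCHAR;    // fall back to plain char off Windows
        #define _T(s)   s
        #define _tcslen strlen
    #endif

    size_t title_length(const TCHAR* title) { return _tcslen(title); }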
IMHO, if there's TCHARs in your code, you're working at the wrong level of abstraction.
Use whatever string type is most convenient for you when dealing with text processing - this will hopefully be something supporting unicode, but that's up to you. Do conversion at OS API boundaries as necessary.
When dealing with file paths, whip up your own custom type instead of using strings. This will give you OS-independent path separators, will give you an easier interface to code against than manual string concatenation and splitting, and will be a lot easier to adapt to different OSes (ANSI, UCS-2, UTF-8, whatever).
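For what it's worth, std::filesystem::path (C++17) is essentially this idea shipped in the standard library: it stores the native encoding, knows the platform's preferred separator, and converts at the edges. A brief sketch (save_file is just an example function):

    #include <filesystem>
    #include <string>

    namespace fs = std::filesystem;

    fs::path save_file(const fs::path& save_dir, const std::string& slot_name)
    {
        // operator/ inserts the platform's preferred separator;
        // u8string()/wstring() handle the encoding conversion at the boundary.
        return save_dir / (slot_name + ".sav");
    }

    // Usage: save_file("saves", "slot1") yields "saves/slot1.sav" on POSIX,
    // and "saves\slot1.sav" (as a wide native string) on Windows.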
The only reasons I see to use anything other than the explicit WCHAR are portability and efficiency.
If you want to make your final executable as small as possible use char.
If you don't care about RAM usage and want internationalization to be as easy as simple translation, use WCHAR.
If you want to make your code flexible, use TCHAR.
If you only plan on using the Latin characters, you might as well use the ASCII/MBCS strings so that your user does not need as much RAM.
For people who are "i18n from the start up", save yourself the source code space and simply use all of the Unicode functions.
TCHAR is not relevant anymore, since now we have UNICODE. You should use UTF-16 wchar_t* strings instead.
Windows APIs take wchar_t* strings, and they are UTF-16.
Just adding to an old question:
NO
Go start a new CLR C++ project in VS2010. Microsoft themselves use L"Hello World", 'nuff said.
TCHAR has taken on a new meaning: it can also be used to port from WCHAR back to CHAR.
https://learn.microsoft.com/en-us/windows/uwp/design/globalizing/use-utf8-code-page
Recent releases of Windows 10 have used the ANSI code page and -A APIs as a means to introduce UTF-8 support to apps. If the ANSI code page is configured for UTF-8, -A APIs operate in UTF-8.
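Under that setting, a quick runtime check of whether the -A family is actually operating in UTF-8 might look like this (the manifest/registry setup itself is described on the linked page):

    #include <windows.h>
    #include <cstdio>

    int main()
    {
        // With the process ANSI code page set to UTF-8 (65001), the -A APIs accept UTF-8.
        if (GetACP() == CP_UTF8)
            MessageBoxA(nullptr, "UTF-8 \xC3\xBE (thorn) passes through the -A API intact",
                        "Active code page", MB_OK);
        else
            std::printf("ANSI code page is %u, not UTF-8\n", GetACP());
    }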

How Do You Write Code That Is Safe for UTF-8?

We have a set of applications that were developed for the ASCII character set. Now, we're trying to install it in Iceland, and are running into problems where the Icelandic characters are getting screwed up.
We are working through our issues, but I was wondering: Is there a good "guide" out there for writing C++ code that is designed for 8-bit characters and which will work properly when UTF-8 data is given to it?
I can't expect everyone to read the whole Unicode standard, but if there is something more digestible available, I'd like to share it with the team so we don't run into these issues again.
Re-writing all the applications to use wchar_t or some other string representation is not feasible at this time. I'll also note that these applications communicate over networks with servers and devices that use 8-bit characters, so even if we did Unicode internally, we'd still have issues with translation at the boundaries. For the most part, these applications just pass data around; they don't "process" the text in any way other than copying it from place to place.
The operating systems used are Windows and Linux. We use std::string and plain-old C strings. (And don't ask me to defend any of the design decisions. I'm just trying to help fix the mess.)
Here is a list of what has been suggested:
The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
UTF-8 and Unicode FAQ for Unix/Linux
The Unicode HOWTO
Just be 8-bit clean, for the most part. However, you will have to be aware that any non-ASCII character splits across multiple bytes, so you must take account of this if line-breaking or truncating text for display.
UTF-8 has the advantage that you can always tell where you are in a multi-byte character: if bit 7 is set and bit 6 reset (byte is 0x80-0xBF) this is a trailing byte, while if bits 7 and 6 are set and 5 is reset (0xC0-0xDF) it is a lead byte with one trailing byte; if 7, 6 and 5 are set and 4 is reset (0xE0-0xEF) it is a lead byte with two trailing bytes, and so on. The number of consecutive set bits starting from the most-significant bit of a lead byte is the total number of bytes making up the character. That is:
110x xxxx = two-byte character
1110 xxxx = three-byte character
1111 0xxx = four-byte character
etc
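A small helper applying those bit patterns, for example to truncate a UTF-8 buffer without cutting a character in half (a sketch, not production code):

    #include <cstddef>
    #include <string>

    // A byte of the form 10xxxxxx is a continuation byte, never the start of a character.
    inline bool is_continuation(unsigned char b) { return (b & 0xC0) == 0x80; }

    // Truncate a UTF-8 string to at most max_bytes without splitting a multi-byte character.
    std::string truncate_utf8(const std::string& s, std::size_t max_bytes)
    {
        if (s.size() <= max_bytes) return s;
        std::size_t cut = max_bytes;
        while (cut > 0 && is_continuation((unsigned char)s[cut]))
            --cut;                      // back up to the lead byte, then cut before it
        return s.substr(0, cut);
    }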
The Icelandic alphabet is all contained in ISO 8859-1 and hence Windows-1252. If this is a console-mode application, be aware that the console uses IBM codepages, so (depending on the system locale) it might display in 437, 850, or 861. Windows has no native display support for UTF-8; you must transform to UTF-16 and use Unicode APIs.
Calling SetConsoleCP and SetConsoleOutputCP, specifying codepage 1252, will help with your problem, if it is a console-mode application. Unfortunately the console font selected has to be a font that supports the codepage, and I can't see a way to set the font. The standard bitmap fonts only support the system default OEM codepage.
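The calls mentioned are just two lines at startup; 1252 follows the suggestion above, though any installed code page can be passed:

    #include <windows.h>

    void configure_console_codepage()
    {
        // Make the console interpret input and output as Windows-1252,
        // so the Icelandic Latin-1 characters round-trip correctly.
        SetConsoleCP(1252);        // input code page
        SetConsoleOutputCP(1252);  // output code page
    }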
This looks like a comprehensive quick guide:
http://www.cl.cam.ac.uk/~mgk25/unicode.html
Be aware that full Unicode doesn't fit in 16-bit characters; so either use 32-bit chars, or a variable-width encoding (UTF-8 is the most popular).
UTF-8 was designed exactly with your problems in mind. One thing I would be careful about is that ASCII is really a 7-bit encoding, so if any part of your infrastructure is using the 8th bit for other purposes, that may be tricky.
You might want to check out ICU. They might have functions available that would make working with UTF-8 strings easier.
Icelandic uses ISO Latin 1, so eight bits should be enough. We need more details to figure out what's happening.
Icelandic, like French, German, and most other languages of Western Europe, can be supported using an 8-bit character set (CP1252 on Windows, ISO 8859-1 aka Latin1 on *x). This was the standard approach before Unicode was invented, and is still quite common. As you say you have a constraint that you can't rewrite your app to use wchar, and you don't need to.
You shouldn't be surprised that UTF-8 is causing problems; UTF-8 encodes the non-ASCII characters (e.g. the accented Latin characters, thorn, eth, etc) as TWO BYTES each.
The only general advice that can be given is quite simple (in theory):
(1) decide what character set you are going to support (Unicode, Latin1, CP1252, ...) in your system
(2) if you are being supplied data encoded in some other fashion (e.g. UTF-8) then transcode it to your standard (e.g. CP1252) at the system border
(3) if you need to supply data encoded in some other fashion, ...
You may want to use wide characters (wchar_t instead of char, and std::wstring instead of std::string). This doesn't automatically solve 100% of your problems, but it is a good first step.
Also, use string functions which are Unicode-aware (refer to the documentation). If a function manipulates wide chars or wide strings, it is generally aware that they are wide.