How to arrange correct processing of Unicode strings using pure C++?
What I mean is: when you put a Unicode string into a std::string and ask for its length, you sometimes get something like 10 characters for a string that is only 5 characters long.
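For example (assuming the source file and the execution character set are both UTF-8):

    #include <iostream>
    #include <string>

    int main() {
        // Five Cyrillic characters, each two bytes in UTF-8.
        std::string s = "абвгд";
        // Prints 10: std::string::length() counts char units (bytes),
        // not characters.
        std::cout << s.length() << '\n';
    }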
How do they do it in serious open-source programs? How do they do it in a cross-platform manner? How do you tie it to file i/o and stdin/stdout streams?
Thanks.
There's Boost.Locale, which is written in C++, wraps the ICU library, and provides a nice, non-alien interface to it.
For Unicode work, my first choice would be Boost.Locale, followed by ICU directly (if there is something that Boost.Locale doesn't wrap yet).
std::[w]string, contrary to popular belief, has no Unicode support whatsoever; both variants operate only on [w]char[_t] units, in an encoding-agnostic way.
If you only need basic Unicode support in the form of length and conversions and encoding verification, there is utfcpp, which provides a beautiful C++ interface for these operations.
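To give a flavor of that interface, here is a small sketch using utfcpp's utf8::distance, utf8::is_valid, and utf8::utf8to16 (check the library's documentation for the exact current API):

    #include <iterator>
    #include <string>
    #include <vector>
    #include "utf8.h"  // utfcpp single header

    int main() {
        // "abc" plus "д" (two bytes in UTF-8): 5 bytes, 4 code points.
        std::string s = "abc\xD0\xB4";

        // Count code points rather than bytes.
        std::ptrdiff_t code_points = utf8::distance(s.begin(), s.end());

        // Verify the encoding, then convert to UTF-16.
        std::vector<unsigned short> utf16;
        if (utf8::is_valid(s.begin(), s.end()))
            utf8::utf8to16(s.begin(), s.end(), std::back_inserter(utf16));

        return code_points == 4 ? 0 : 1;
    }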
Application frameworks like Qt and wxWidgets provide their own string classes, which offer better Unicode support, but they often tie you to using the whole framework throughout your code.
Aside from that, there is ICU, which is the standard Unicode implementation around today.
A work in progress by one of the C++ masters on this website is ogonek. You can contact the author through the Lounge&lt;C++&gt; Stack Overflow chat room to ask for details on his progress.
This is how: http://www.utf8everywhere.org
Have you checked http://site.icu-project.org already?
ICU is currently the Unicode library. If you want cross-platform Unicode support, ICU is basically the only place to get it.
If only its interface wasn't more unfriendly than the wrong end of an automatic shotgun.
I've used wxWidgets to do this. It makes for easy conversion from std::string to their string type wxString. It's not ideal, but it works well, is simple and portable.
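Roughly like this (a sketch; wxString::FromUTF8 and utf8_str() are the conversion points, assuming your std::strings hold UTF-8):

    #include <string>
    #include <wx/string.h>

    void Convert() {
        std::string utf8 = "gr\xC3\xBC\xC3\x9F" "e";     // "grüße" as UTF-8 bytes
        wxString ws = wxString::FromUTF8(utf8.c_str());  // std::string -> wxString
        std::string back(ws.utf8_str());                 // wxString -> std::string
    }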
Related
Does Standard ML support Unicode?
I believe it does not but cannot find any authoritative documentation for SML stating such.
A yes or no is all that is needed, but you must know for a fact. No guessing or I believe answers. An authoritative link would be better.
Not really. All there is in the standard for the time being is the ability to use \uXXXX escapes in character and string literals, and the fact that it at least allows Unicode as the underlying character encoding for char or the optional WideChar.char. But the Standard Basis Library does not prescribe any additional Unicode-aware functionality.
Particular implementations may have additional support, and you may perhaps find some third-party Unicode libraries, but that's about it (unfortunately, I have no pointers at hand).
It depends a lot what you mean by "Unicode", which is a collection of many standards for many things. I've not seen any language or system that supports Unicode fully, and I don't even know what that would mean in all details.
You can certainly work with UTF-8 in SML: that encoding was invented to make it easy for ASCII applications to support Unicode. This can result in a better and more efficient representation of Unicode than, e.g., the UTF-16 seen in Java, which officially does "support Unicode", but then there are many practical problems with it (like surrogate characters).
With UTF-8 in SML strings, one question is how to work with string literals. Systems like Poly/ML allow you to redefine the ML toplevel pretty-printer for type string, and it is also feasible to wrap up the compiler to process string literals in a Unicode-friendly way. Both of these are done in Isabelle/ML, which is based on Poly/ML. So if you take that big theorem-proving environment as your ML development platform, you get some kind of Unicode support built in (via so-called "Isabelle symbols").
I want to use Unicode strings in C++ with some library that implements most of the related routines. I want to work with the Boost libraries, and I found the Locale library there. But I have not found many people who use it. Do they? What can you say from your experience about this library? Are there any other Boost libraries that implement Unicode string routines?
UPDATE:
There is a problem with using other libraries in some of my modules: I don't want to tie them to a lot of different libraries (Boost is OK), but I do need Unicode string routines (maybe a class). Why Unicode? Some characters in the strings may be Japanese symbols, or symbols from another language, and they must be treated the same way as English characters.
Please excuse the self promotion here, but you might be interested in the answer I wrote here: What are the tradeoffs between boost::locale and std::locale?, comparing boost::locale to std::locale.
Depending on what you need to do with your text, boost::locale is probably the best approach to adding Unicode support to your C++ code. This is especially true if you need cross-platform support, or if you want to use UTF-8 on Windows.
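A minimal sketch of the kind of code this enables (assuming Boost.Locale built against the ICU backend and linked with -lboost_locale):

    #include <iostream>
    #include <boost/locale.hpp>

    int main() {
        // Build a locale object; with the ICU backend this gets
        // full Unicode-aware behavior.
        boost::locale::generator gen;
        std::locale loc = gen("en_US.UTF-8");

        std::string s = "gr\xC3\xBC\xC3\x9F" "e";  // "grüße" in UTF-8

        // Full case mapping: "ß" becomes "SS", which a per-character
        // std::toupper can never do.
        std::cout << boost::locale::to_upper(s, loc) << "\n";
    }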
I'm targeting Windows but I don't see any reason why some API code I'm writing cannot use basic C++ types. What I want to do is expose methods that return strings and ints. In the C# world I'd just use string, and have a unicode string, but in VC++ I've got the option of using std::string, std::wstring, or MFC/ATL CStrings.
Should I just use std::wstring exclusively to support unicode, or can I use std::string which would be compiled to unicode based on my build settings? I'm leaning toward the latter. I'd prefer to provide Get[Item]AsCString() methods on my objects for other string types.
Also, should I be using size_t instead of int?
The API is going to be used by me, and perhaps a future developer working on the C++ GUI. It is a way to separate concerns. My preferences:
Intuitiveness for other developers.
Forward compatibility with VC++
Compatibility with other C++ compilers
Performance (this is a lesser concern for me, but startup time matters for the rest of my app)
Any guides would be appreciated.
You should probably stick to the STL string type. The MFC CString class is built on top of that nowadays anyway.
As has been noted before, using wstring is not a magic bullet for Unicode issues, since there are many Unicode characters that still require multiple wchar_t units to encode.
Using UTF-8 instead has potential benefits (you don't have to worry about endianness, for example).
On Windows, all modern kernels are wchar_t (UTF-16) based, so there is a (minimal) performance overhead involved if you use the 8-bit char versions of the APIs.
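If you keep UTF-8 in std::string internally, the conversion at the API boundary is a few lines with Win32's MultiByteToWideChar (a sketch, without full error handling):

    #include <string>
    #include <windows.h>

    // Convert a UTF-8 std::string to UTF-16 for the wide ("W") Win32 APIs.
    std::wstring Utf8ToWide(const std::string& utf8) {
        if (utf8.empty()) return std::wstring();
        int len = MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                                      static_cast<int>(utf8.size()), nullptr, 0);
        std::wstring wide(len, L'\0');
        MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                            static_cast<int>(utf8.size()), &wide[0], len);
        return wide;
    }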
In your situation it would take me a few hours or days to develop an opinion and decide. First of all, I much prefer a C API to a C++ API, even for C++ code. Then the answer would be char*, wchar_t*, or TCHAR*. Now, try to guess whether you REALLY expect the need for Unicode. The great majority of my projects (including those with GUIs) had no need for Unicode; the simplicity and familiarity of plain C arrays is often hard to beat.
In short, try to predict what will be your needs, do not try to look too far into future (2 years is a good mark), then come up with the simplest solution to meet the needs.
Last: to answer your question more directly, I would start with std::string as my first choice to evaluate. Unless I found some big advantage in favor of the other choices, I would stay with it.
Using std::wstring/string instead of the MFC CString will allow you to port your code to other frameworks (e.g. Qt for Windows).
Even when using std::string you can encode the strings in UTF-8, so your API will still be able to return Unicode strings.
Keep in mind that on Windows wstring is really UTF-16, not full 32-bit Unicode code points (while on some other operating systems wchar_t is 32 bits and wstring is UTF-32).
I'm looking for a portable and easy-to-use string library for C/C++ that helps me work with Unicode input/output. In the best case it will store its strings in memory as UTF-8, and allow me to convert strings from ASCII to UTF-8/UTF-16 and back. I don't need much more besides that (OK, a liberal license won't hurt). I have seen that C++ comes with a <locale> header, but this seems to work on wchar_t only, which may or may not be UTF-16 encoded, and I'm not sure how good it actually is.
Use cases are, for example: on Windows the Unicode APIs expect UTF-16 strings, and I need to convert ASCII or UTF-8 strings to pass them on to the API. The same goes for XML parsing, which may come in UTF-16 while I actually only want to process UTF-8 internally (or, for that matter, if I switch to UTF-16 internally, I'll need a conversion to that anyway).
So far, I've taken a look at ICU, which is quite huge. Moreover, it wants to be built using its own project files, while I'd prefer a library for which there is either a CMake project or which is easy to build ("compile all these .c files, link, and good to go"), instead of shipping something as large as ICU with my application.
Do you know such a library, which is also being maintained? After all, this seems to be a pretty basic problem.
UTF8-CPP seems to be exactly what you want.
I'd recommend that you look at the GNU iconv library.
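A sketch of the usual iconv pattern for the UTF-8 to UTF-16 case from the question (note that the type of iconv's second parameter varies slightly between platforms, and you may need to link with -liconv):

    #include <iconv.h>
    #include <stdexcept>
    #include <string>

    // Convert UTF-8 to UTF-16LE with POSIX iconv.
    std::string Utf8ToUtf16le(const std::string& in) {
        iconv_t cd = iconv_open("UTF-16LE", "UTF-8");
        if (cd == (iconv_t)-1) throw std::runtime_error("iconv_open failed");

        std::string out(in.size() * 4 + 4, '\0');  // generous output buffer
        char* inbuf = const_cast<char*>(in.data());
        size_t inleft = in.size();
        char* outbuf = &out[0];
        size_t outleft = out.size();

        size_t rc = iconv(cd, &inbuf, &inleft, &outbuf, &outleft);
        iconv_close(cd);
        if (rc == (size_t)-1) throw std::runtime_error("conversion failed");

        out.resize(out.size() - outleft);          // trim unused space
        return out;
    }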
There is another portable C library for string conversion between UTF-8, UTF-16, UTF-32, and wchar_t: the mdz_unicode library.
What is the best practice of Unicode processing in C++?
Use ICU for dealing with your data (or a similar library)
In your own data store, make sure everything is stored in the same encoding
Make sure you are always using your Unicode library for mundane tasks like string length and capitalization status. Never use standard library builtins like isalpha unless that is really the definition you want.
I can't say it enough: never iterate over the indices of a string if you care about correctness; always use your Unicode library for this (see the sketch below).
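Here is what that means in practice with ICU (a sketch; assumes a UTF-8 source encoding and linking against the ICU common library):

    #include <iostream>
    #include <memory>
    #include <unicode/brkiter.h>
    #include <unicode/unistr.h>

    int main() {
        // "a" + "é" + an emoji (assumes this file is saved as UTF-8).
        icu::UnicodeString us = icu::UnicodeString::fromUTF8("aé😀");

        // 3 code points, although length() reports 4 UTF-16 units
        // (the emoji needs a surrogate pair).
        std::cout << us.countChar32() << '\n';

        // Iterate over user-perceived characters (grapheme clusters).
        UErrorCode status = U_ZERO_ERROR;
        std::unique_ptr<icu::BreakIterator> it(
            icu::BreakIterator::createCharacterInstance(
                icu::Locale::getDefault(), status));
        if (U_FAILURE(status)) return 1;
        it->setText(us);
        for (int32_t start = it->first(), end = it->next();
             end != icu::BreakIterator::DONE; start = end, end = it->next()) {
            icu::UnicodeString grapheme(us, start, end - start);
            // ... process one grapheme ...
        }
    }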
If you don't care about backwards compatibility with previous C++ standards, the current C++11 standard has built-in Unicode support: http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2011/n3242.pdf
So the truly best practice for Unicode processing in C++ would be to use the built in facilities for it. That isn't always a possibility with older code bases though, with the standard being so new at present.
EDIT: To clarify, C++11 is Unicode aware in that it now has support for Unicode literals and Unicode strings. However, the standard library has only limited support for Unicode processing and conversion. For your current needs this may be enough. However, if you need to do a large amount of heavy lifting right now then you may still need to use something like ICU for more in-depth processing. There are some proposals currently in the works to include more robust support for text conversion between different encodings. My guess (and hope) is that this will be part of the next technical report.
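For instance, the C++11 pieces look like this (a sketch; note that std::wstring_convert and the codecvt facets were later deprecated in C++17):

    #include <codecvt>
    #include <locale>
    #include <string>

    int main() {
        // C++11 Unicode literals and string types.
        std::u16string s16 = u"caf\u00E9";    // UTF-16, char16_t
        std::u32string s32 = U"caf\u00E9";    // UTF-32, char32_t
        const char*    s8  = u8"caf\u00E9";   // UTF-8 encoded narrow literal
        (void)s32;

        // The limited built-in conversion support.
        std::wstring_convert<std::codecvt_utf8_utf16<char16_t>, char16_t> conv;
        std::string utf8 = conv.to_bytes(s16);       // UTF-16 -> UTF-8
        std::u16string rt = conv.from_bytes(utf8);   // UTF-8  -> UTF-16
        return (rt == s16 && utf8 == s8) ? 0 : 1;
    }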
Our company (and others) use the open source International Components for Unicode (ICU) library originally developed by Taligent.
It handles strings, locales, conversions, dates/times, collation, transformations, and more.
Start with the ICU User Guide.
Here is a checklist for Windows programming:
All strings enclosed in _T("my string")
strlen() etc. functions replaced with _tcslen() etc.
Use LPTSTR and LPCTSTR instead of char * and const char *
When starting new projects in Dev Studio, religiously make sure the Unicode option is selected in your project properties.
For C++ strings, use std::wstring instead of std::string
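A minimal sketch tying the checklist together (with the Unicode option selected, TCHAR is wchar_t, _T("...") becomes L"...", and _tcslen maps to wcslen):

    #include <tchar.h>
    #include <windows.h>

    int _tmain(int argc, TCHAR* argv[]) {
        LPCTSTR msg = _T("my string");
        size_t len = _tcslen(msg);  // counts characters, not bytes
        MessageBox(nullptr, msg, _T("demo"), MB_OK);
        return static_cast<int>(len);
    }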
Look at
Case insensitive string comparison in C++
That question has a link to the Microsoft documentation on Unicode: http://msdn.microsoft.com/en-us/library/cc194799.aspx
If you look on the left-hand navigation side on MSDN next to that article, you should find a lot of information pertaining to Unicode functions. It is part of a chapter on "Encoding Characters" (http://msdn.microsoft.com/en-us/library/cc194786.aspx)
It has the following subsections:
The Code-Page Model
Double-Byte Character Sets in Windows
Unicode
Compatibility Issues in Mixed Environments
Unicode Data Conversion
Migrating Windows-Based Programs to Unicode
Summary
Although this may not be best practice for everyone, you can write your own C++ Unicode routines if you want!
I just finished doing it over a weekend. I learned a lot, though I don't guarantee it's 100% bug free, I did a lot of testing and it seems to work correctly.
My code is under the New BSD license and can be found here:
http://code.google.com/p/netwidecc/downloads/list
It is called WSUCONV and comes with a sample main() program that converts between UTF-8, UTF-16, and standard ASCII. If you throw away the main code, you've got a nice library for reading/writing Unicode.
As has been said above, a library is the best bet in a large system. However, sometimes you want to handle things yourself (maybe because the library would use too many resources, as on a microcontroller). In that case you want a simple library from which you can copy out the parts you actually need.
Willow Schlanger's example code seems like a good one (see his answer for more details).
I also found another one that has smaller code, but it lacks full error checking and only handles UTF-8; it was simpler to take parts out of, though.
Here's a list of the embedded libraries that seem decent:
http://code.google.com/p/netwidecc/downloads/list (UTF8, UTF16LE, UTF16BE, UTF32)
http://www.cprogramming.com/tutorial/unicode.html (UTF8)
http://utfcpp.sourceforge.net/ (Simple UTF8 library)
Use IBM's International Components for Unicode
Have a look at the recommendations of UTF-8 Everywhere