Is it possible to convert a UTF-8 string held in a std::string to a std::wstring and vice versa in a platform-independent manner? In a Windows application I would use MultiByteToWideChar and WideCharToMultiByte. However, the code is compiled for multiple OSes and I'm limited to the standard C++ library.
I asked this question 5 years ago. This thread was very helpful for me back then; I came to a conclusion, then I moved on with my project. It is funny that I needed something similar recently, totally unrelated to that project from the past. As I was researching possible solutions, I stumbled upon my own question :)
The solution I chose now is based on C++11. The boost libraries that Constantin mentions in his answer are now part of the standard. If we replace std::wstring with the new string type std::u16string, then the conversions will look like this:
UTF-8 to UTF-16
std::string source;
...
std::wstring_convert<std::codecvt_utf8_utf16<char16_t>,char16_t> convert;
std::u16string dest = convert.from_bytes(source);
UTF-16 to UTF-8
std::u16string source;
...
std::wstring_convert<std::codecvt_utf8_utf16<char16_t>,char16_t> convert;
std::string dest = convert.to_bytes(source);
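Putting the two snippets together, here is a minimal self-contained round-trip sketch (the sample text is only an illustration):
#include <codecvt>
#include <locale>
#include <string>
#include <cassert>

int main()
{
    std::wstring_convert<std::codecvt_utf8_utf16<char16_t>, char16_t> convert;

    std::string utf8 = "z\xC3\x9F\xE6\xB0\xB4";         // "zß水" encoded as UTF-8
    std::u16string utf16 = convert.from_bytes(utf8);     // UTF-8 -> UTF-16
    std::string back = convert.to_bytes(utf16);          // UTF-16 -> UTF-8

    assert(back == utf8);
}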
As seen from the other answers, there are multiple approaches to the problem. That's why I refrain from picking an accepted answer.
The problem definition explicitly states that the 8-bit character encoding is UTF-8. That makes this a trivial problem; all it requires is a little bit-twiddling to convert from one UTF spec to another.
Just look at the encodings on these Wikipedia pages for UTF-8, UTF-16, and UTF-32.
The principle is simple - go through the input and assemble a 32-bit Unicode code point according to one UTF spec, then emit the code point according to the other spec. The individual code points need no translation, as would be required with any other character encoding; that's what makes this a simple problem.
Here's a quick implementation of wchar_t to UTF-8 conversion and vice versa. It assumes that the input is already properly encoded - the old saying "Garbage in, garbage out" applies here. I believe that verifying the encoding is best done as a separate step.
std::string wchar_to_UTF8(const wchar_t * in)
{
    std::string out;
    unsigned int codepoint = 0;
    for (; *in != 0; ++in)
    {
        if (*in >= 0xd800 && *in <= 0xdbff)
            codepoint = ((*in - 0xd800) << 10) + 0x10000;   // high surrogate: top 10 bits
        else
        {
            if (*in >= 0xdc00 && *in <= 0xdfff)
                codepoint |= *in - 0xdc00;                   // low surrogate: bottom 10 bits
            else
                codepoint = *in;                             // not a surrogate

            if (codepoint <= 0x7f)
                out.append(1, static_cast<char>(codepoint));
            else if (codepoint <= 0x7ff)
            {
                // two-byte sequence
                out.append(1, static_cast<char>(0xc0 | ((codepoint >> 6) & 0x1f)));
                out.append(1, static_cast<char>(0x80 | (codepoint & 0x3f)));
            }
            else if (codepoint <= 0xffff)
            {
                // three-byte sequence
                out.append(1, static_cast<char>(0xe0 | ((codepoint >> 12) & 0x0f)));
                out.append(1, static_cast<char>(0x80 | ((codepoint >> 6) & 0x3f)));
                out.append(1, static_cast<char>(0x80 | (codepoint & 0x3f)));
            }
            else
            {
                // four-byte sequence
                out.append(1, static_cast<char>(0xf0 | ((codepoint >> 18) & 0x07)));
                out.append(1, static_cast<char>(0x80 | ((codepoint >> 12) & 0x3f)));
                out.append(1, static_cast<char>(0x80 | ((codepoint >> 6) & 0x3f)));
                out.append(1, static_cast<char>(0x80 | (codepoint & 0x3f)));
            }
            codepoint = 0;
        }
    }
    return out;
}
The above code works for both UTF-16 and UTF-32 input, simply because the range d800 through dfff contains no valid code points; values in that range indicate that you're decoding UTF-16 surrogates. If you know that wchar_t is 32 bits then you could remove some code to optimize the function.
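For illustration, here is a sketch of that simplification, assuming wchar_t units are full 32-bit code points so the surrogate branches can be dropped; the function name is mine:
#include <string>

std::string wchar32_to_UTF8(const wchar_t * in)
{
    // Sketch only: assumes sizeof(wchar_t) == 4 and each unit is one code point.
    std::string out;
    for (; *in != 0; ++in)
    {
        unsigned int codepoint = static_cast<unsigned int>(*in);
        if (codepoint <= 0x7f)
            out.append(1, static_cast<char>(codepoint));
        else if (codepoint <= 0x7ff)
        {
            out.append(1, static_cast<char>(0xc0 | ((codepoint >> 6) & 0x1f)));
            out.append(1, static_cast<char>(0x80 | (codepoint & 0x3f)));
        }
        else if (codepoint <= 0xffff)
        {
            out.append(1, static_cast<char>(0xe0 | ((codepoint >> 12) & 0x0f)));
            out.append(1, static_cast<char>(0x80 | ((codepoint >> 6) & 0x3f)));
            out.append(1, static_cast<char>(0x80 | (codepoint & 0x3f)));
        }
        else
        {
            out.append(1, static_cast<char>(0xf0 | ((codepoint >> 18) & 0x07)));
            out.append(1, static_cast<char>(0x80 | ((codepoint >> 12) & 0x3f)));
            out.append(1, static_cast<char>(0x80 | ((codepoint >> 6) & 0x3f)));
            out.append(1, static_cast<char>(0x80 | (codepoint & 0x3f)));
        }
    }
    return out;
}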
std::wstring UTF8_to_wchar(const char * in)
{
    std::wstring out;
    unsigned int codepoint;
    while (*in != 0)
    {
        unsigned char ch = static_cast<unsigned char>(*in);
        if (ch <= 0x7f)
            codepoint = ch;
        else if (ch <= 0xbf)
            codepoint = (codepoint << 6) | (ch & 0x3f);   // continuation byte: shift in 6 more bits
        else if (ch <= 0xdf)
            codepoint = ch & 0x1f;                        // lead byte of a two-byte sequence
        else if (ch <= 0xef)
            codepoint = ch & 0x0f;                        // lead byte of a three-byte sequence
        else
            codepoint = ch & 0x07;                        // lead byte of a four-byte sequence
        ++in;
        // emit once the next byte is not a continuation byte
        if (((*in & 0xc0) != 0x80) && (codepoint <= 0x10ffff))
        {
            if (sizeof(wchar_t) > 2)
                out.append(1, static_cast<wchar_t>(codepoint));
            else if (codepoint > 0xffff)
            {
                // needs a surrogate pair in UTF-16
                out.append(1, static_cast<wchar_t>(0xd800 + ((codepoint - 0x10000) >> 10)));
                out.append(1, static_cast<wchar_t>(0xdc00 + (codepoint & 0x03ff)));
            }
            else if (codepoint < 0xd800 || codepoint >= 0xe000)
                out.append(1, static_cast<wchar_t>(codepoint));
        }
    }
    return out;
}
Again if you know that wchar_t is 32 bits you could remove some code from this function, but in this case it shouldn't make any difference. The expression sizeof(wchar_t) > 2 is known at compile time, so any decent compiler will recognize dead code and remove it.
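A small usage sketch for the two functions above; the sample text and the assert are only illustrative:
#include <cassert>
#include <string>

void roundtrip_demo()
{
    const wchar_t* original = L"caf\u00e9 \u4e2d\u6587";   // "café 中文"
    std::string utf8 = wchar_to_UTF8(original);
    std::wstring back = UTF8_to_wchar(utf8.c_str());
    assert(back == original);
}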
UTF8-CPP: UTF-8 with C++ in a Portable Way
You can extract utf8_codecvt_facet from Boost serialization library.
Their usage example:
typedef wchar_t ucs4_t;
std::locale old_locale;
std::locale utf8_locale(old_locale,new utf8_codecvt_facet<ucs4_t>);
// Set a New global locale
std::locale::global(utf8_locale);
// Send the UCS-4 data out, converting to UTF-8
{
    std::wofstream ofs("data.ucd");
    ofs.imbue(utf8_locale);
    std::copy(ucs4_data.begin(), ucs4_data.end(),
              std::ostream_iterator<ucs4_t, ucs4_t>(ofs));
}
// Read the UTF-8 data back in, converting to UCS-4 on the way in
std::vector<ucs4_t> from_file;
{
    std::wifstream ifs("data.ucd");
    ifs.imbue(utf8_locale);
    ucs4_t item = 0;
    while (ifs >> item) from_file.push_back(item);
}
Look for utf8_codecvt_facet.hpp and utf8_codecvt_facet.cpp files in boost sources.
There are several ways to do this, but the results depend on what the character encodings are in the string and wstring variables.
If you know the string is ASCII, you can simply use wstring's iterator constructor:
string s = "This is surely ASCII.";
wstring w(s.begin(), s.end());
If your string has some other encoding, however, you'll get very bad results. If the encoding is Unicode, you could take a look at the ICU project, which provides a cross-platform set of libraries that convert to and from all sorts of Unicode encodings.
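For example, with ICU4C the UTF-8 side of such a conversion can look roughly like this (a sketch, assuming ICU is installed and linked; the function name is mine):
#include <unicode/unistr.h>   // icu::UnicodeString
#include <string>

// Round-trip a UTF-8 std::string through ICU's internal UTF-16 representation.
std::string icu_roundtrip(const std::string& utf8_in)
{
    icu::UnicodeString utf16 = icu::UnicodeString::fromUTF8(utf8_in);
    std::string utf8_out;
    utf16.toUTF8String(utf8_out);   // appends the UTF-8 form to utf8_out
    return utf8_out;
}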
If your string contains characters in a code page, then may $DEITY have mercy on your soul.
You can use the codecvt locale facet. There's a specific specialisation defined, codecvt<wchar_t, char, mbstate_t> that may be of use to you, although, the behaviour of that is system-specific, and does not guarantee conversion to UTF-8 in any way.
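As a sketch of that approach: the facet is usually picked up from a named locale imbued into a wide stream, and whether the bytes are actually interpreted as UTF-8 depends entirely on that locale (the locale name below is an assumption and may not exist everywhere):
#include <fstream>
#include <locale>
#include <string>

// Read a narrow-encoded file into a std::wstring through the locale's
// codecvt<wchar_t, char, mbstate_t> facet.
std::wstring read_with_locale(const char* path)
{
    std::wifstream in(path);
    in.imbue(std::locale("en_US.UTF-8"));   // system-specific locale name
    std::wstring content((std::istreambuf_iterator<wchar_t>(in)),
                          std::istreambuf_iterator<wchar_t>());
    return content;
}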
I created my own library for UTF-8 to UTF-16/UTF-32 conversion - or rather, I made it as a fork of an existing project for that purpose.
https://github.com/tapika/cutf
(Originated from https://github.com/noct/cutf )
API works with plain C as well as with C++.
The function prototypes look like this (for the full list see https://github.com/tapika/cutf/blob/master/cutf.h ):
//
// Converts utf-8 string to wide version.
//
// returns target string length.
//
size_t utf8towchar(const char* s, size_t inSize, wchar_t* out, size_t bufSize);
//
// Converts wide string to utf-8 string.
//
// returns filled buffer length (not string length)
//
size_t wchartoutf8(const wchar_t* s, size_t inSize, char* out, size_t outsize);
#ifdef __cplusplus
std::wstring utf8towide(const char* s);
std::wstring utf8towide(const std::string& s);
std::string widetoutf8(const wchar_t* ws);
std::string widetoutf8(const std::wstring& ws);
#endif
Sample usage / simple test application for utf conversion testing:
#include "cutf.h"
#define ok(statement) \
if( !(statement) ) \
{ \
printf("Failed statement: %s\n", #statement); \
r = 1; \
}
int simpleStringTest()
{
const wchar_t* chineseText = L"主体";
auto s = widetoutf8(chineseText);
size_t r = 0;
printf("simple string test: ");
ok( s.length() == 6 );
uint8_t utf8_array[] = { 0xE4, 0xB8, 0xBB, 0xE4, 0xBD, 0x93 };
for(int i = 0; i < 6; i++)
ok(((uint8_t)s[i]) == utf8_array[i]);
auto ws = utf8towide(s);
ok(ws.length() == 2);
ok(ws == chineseText);
if( r == 0 )
printf("ok.\n");
return (int)r;
}
And if this library does not satisfy your needs, feel free to open the following link:
http://utf8everywhere.org/
and scroll down to the end of the page, where you can pick any heavier library you like.
I don't think there's a portable way of doing this. C++ doesn't know the encoding of its multibyte characters.
As Chris suggested, your best bet is to play with codecvt.
Related
I have this bit of code below that I've written that uses utfcpp to convert from a UTF-16 encoded file to a UTF-8 string.
I think I must be using it improperly, because the result isn't changing. The utf8content variable comes out with null characters (\0) in every other position, exactly like the UTF-16 that I put into it.
//get file content
string utf8content;
std::ifstream ifs(path);
vector<unsigned short> utf16line((std::istreambuf_iterator<char>(ifs)), std::istreambuf_iterator<char>());
//convert
if(!utf8::is_valid(utf16line.begin(), utf16line.end())){
    utf8::utf16to8(utf16line.begin(), utf16line.end(), back_inserter(utf8content));
}
I found the location in the library that is doing the append, it treats everything in the first octet the same, whereas my thought is that it should handle 0's differently.
From checked.h here is the append method (line 106). This is called by utf16to8 (line 202). Notice that I added first part of the if, so that it skips the null chars in an attempt to fix the problem.
template <typename octet_iterator>
octet_iterator append(uint32_t cp, octet_iterator result)
{
    if (!utf8::internal::is_code_point_valid(cp))
        throw invalid_code_point(cp);

    if (cp < 0x01)                          //<===I added this line and..
        *(result++);                        //<===I added this line
    else if (cp < 0x80)                     // one octet
        *(result++) = static_cast<uint8_t>(cp);
    else if (cp < 0x800) {                  // two octets
        *(result++) = static_cast<uint8_t>((cp >> 6) | 0xc0);
        *(result++) = static_cast<uint8_t>((cp & 0x3f) | 0x80);
    }
    else if (cp < 0x10000) {                // three octets
        *(result++) = static_cast<uint8_t>((cp >> 12) | 0xe0);
        *(result++) = static_cast<uint8_t>(((cp >> 6) & 0x3f) | 0x80);
        *(result++) = static_cast<uint8_t>((cp & 0x3f) | 0x80);
    }
    else {                                  // four octets
        *(result++) = static_cast<uint8_t>((cp >> 18) | 0xf0);
        *(result++) = static_cast<uint8_t>(((cp >> 12) & 0x3f) | 0x80);
        *(result++) = static_cast<uint8_t>(((cp >> 6) & 0x3f) | 0x80);
        *(result++) = static_cast<uint8_t>((cp & 0x3f) | 0x80);
    }
    return result;
}
I can't imagine that simply removing the null chars from the string is the real solution, however, and why wouldn't the library have caught this itself? So clearly I'm doing something wrong.
So, my question is, what is wrong with the way that I'm using utfcpp in the first bit of code? Is there some type conversion that I've done wrong?
My content is a UTF16 encoded xml file. It seems to truncate the results at the first null character.
std::ifstream reads the file in 8bit char units. UTF-16 uses 16bit units instead. So if you want to read the file and fill your vector with proper UTF-16 units, then use std::wifstream instead (or std::basic_ifstream<char16_t> or equivalent if wchar_t is not 16-bit on your platform).
And do not call utf8::is_valid() here. It expects UTF-8 input, but you have UTF-16 input instead.
If sizeof(wchar_t) is 2:
std::wifstream ifs(path);
std::istreambuf_iterator<wchar_t> ifs_begin(ifs), ifs_end;
std::wstring utf16content(ifs_begin, ifs_end);

std::string utf8content;
try {
    utf8::utf16to8(utf16content.begin(), utf16content.end(), std::back_inserter(utf8content));
}
catch (const utf8::invalid_utf16 &) {
    // bad UTF-16 data!
}
Otherwise:
// if char16_t is not available, use uint16_t or unsigned short instead
std::basic_ifstream<char16_t> ifs(path);
std::istreambuf_iterator<char16_t> ifs_begin(ifs), ifs_end;
std::basic_string<char16_t> utf16content(ifs_begin, ifs_end);

std::string utf8content;
try {
    utf8::utf16to8(utf16content.begin(), utf16content.end(), std::back_inserter(utf8content));
}
catch (const utf8::invalid_utf16 &) {
    // bad UTF-16 data!
}
The problem is where you're reading the file:
vector<unsigned short> utf16line((std::istreambuf_iterator<char>(ifs)), std::istreambuf_iterator<char>());
This line is taking a char iterator and using it to fill a vector one byte at a time. You're essentially casting each byte instead of reading two bytes at a time.
This is breaking each UTF-16 entity into two pieces, and for much of your input one of those two pieces will be a null byte.
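If you do want to stay with byte-oriented reading, you have to combine each pair of bytes into one 16-bit unit yourself. A sketch, assuming the file is little-endian UTF-16 with no BOM:
#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

// Read raw bytes and assemble UTF-16LE code units manually.
std::vector<unsigned short> read_utf16le(const char* path)
{
    std::ifstream ifs(path, std::ios::binary);
    std::string bytes((std::istreambuf_iterator<char>(ifs)),
                       std::istreambuf_iterator<char>());

    std::vector<unsigned short> units;
    for (std::size_t i = 0; i + 1 < bytes.size(); i += 2)
    {
        unsigned char lo = static_cast<unsigned char>(bytes[i]);
        unsigned char hi = static_cast<unsigned char>(bytes[i + 1]);
        units.push_back(static_cast<unsigned short>(lo | (hi << 8)));
    }
    return units;
}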
My company use some code like this:
std::string(CT2CA(some_CString)).c_str()
which I believe converts a Unicode string (whose type is CString) into ANSI encoding, and this string is used for an email's subject. However, the header of the email (which includes the subject) indicates that the mail client should decode it as Unicode (this is what the original code does). Thus, some German chars like "ä ö ü" will not be displayed properly in the title.
Is there any way that I can put this header back into UTF-8 and store it in a std::string or const char*?
I know there are a lot of smarter ways to do this, but I need to keep the code sticking to its original form (i.e. send the header as std::string or const char*).
Thanks in advance.
Be careful: it's '|' and not '&'!
*buffer++ = 0xC0 | (c >> 6);
*buffer++ = 0x80 | (c & 0x3F);
This sounds like a plain conversion from one encoding to another encoding: you can use std::codecvt<char, char, mbstate_t> for this. Whether your implementation ships with a suitable conversion, I don't know, however. From the sounds of it you are just trying to convert ISO-Latin-1 into Unicode. That should be pretty much trivial: the first 128 characters (0 to 127) map identically to UTF-8, and the second half conveniently maps to the corresponding Unicode code points, i.e., you just need to encode the corresponding value into UTF-8. Each such character will be replaced by two characters. That is, I think the conversion is something like this:
// Takes the next position and the end of a buffer as first two arguments and the
// character to convert from ISO-Latin-1 as third argument.
// Returns a pointer to end of the produced sequence.
char* iso_latin_1_to_utf8(char* buffer, char* end, unsigned char c) {
    if (c < 128) {
        if (buffer == end) { throw std::runtime_error("out of space"); }
        *buffer++ = c;
    }
    else {
        if (end - buffer < 2) { throw std::runtime_error("out of space"); }
        *buffer++ = 0xC0 | (c >> 6);
        *buffer++ = 0x80 | (c & 0x3f);
    }
    return buffer;
}
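A hypothetical caller could drive it over a whole string like this (the helper name and the fixed buffer size are mine, purely for illustration):
#include <stdexcept>
#include <string>

// Convert an entire ISO-Latin-1 string using iso_latin_1_to_utf8() from above.
std::string latin1_string_to_utf8(const std::string& latin1)
{
    char buffer[256];                       // illustrative fixed-size buffer
    char* pos = buffer;
    char* end = buffer + sizeof(buffer);
    for (std::string::size_type i = 0; i < latin1.size(); ++i)
        pos = iso_latin_1_to_utf8(pos, end, static_cast<unsigned char>(latin1[i]));
    return std::string(buffer, pos);
}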
Does the C++ Standard Template Library (STL) provide any method to convert a UTF8 encoded byte buffer into a wstring?
For example:
const unsigned char* szBuf = (const unsigned char*) "d\xC3\xA9j\xC3\xA0 vu";
std::wstring str = method(szBuf); // Should assign "déjà vu" to str
I want to avoid having to implement my own UTF8 conversion code, like this:
const unsigned char* pch = szBuf;
while (*pch != 0)
{
if ((*pch & 0x80) == 0)
{
str += *pch++;
}
else if ((*pch & 0xE0) == 0xC0 && (pch[1] & 0xC0) == 0x80)
{
wchar_t ch = (((*pch & 0x1F) >> 2) << 8) +
((*pch & 0x03) << 6) +
(pch[1] & 0x3F);
str += ch;
pch += 2;
}
else if (...)
{
// other cases omitted
}
}
EDIT: Thanks for your comments and the answer. This code fragment performs the desired conversion:
std::wstring_convert<std::codecvt_utf8<wchar_t>,wchar_t> convert;
str = convert.from_bytes((const char*)szBuf);
In C++11 you can use std::codecvt_utf8. If you don't have that, you may be able to persuade iconv to do what you want; unfortunately, that's not ubiquitous either, not all implementations that have it support UTF-8, and I'm not aware of any way to find out the appropriate thing to pass to iconv_open to do a conversion from wchar_t.
If you don't have either of those things, your best bet is a third-party library such as ICU. Surprisingly, Boost does not appear to have anything for the purpose, although I could have missed it.
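For what it's worth, a rough sketch of the iconv route might look like the following. This assumes a POSIX iconv that accepts these encoding names; on some systems the input-pointer parameter of iconv() is declared const char**, and the error handling here is deliberately minimal:
#include <iconv.h>
#include <cstddef>
#include <stdexcept>
#include <string>

std::u16string utf8_to_utf16_iconv(const std::string& in)
{
    iconv_t cd = iconv_open("UTF-16LE", "UTF-8");
    if (cd == (iconv_t)-1)
        throw std::runtime_error("iconv_open failed");

    std::u16string out(in.size() + 1, u'\0');              // worst case: one unit per input byte
    char* inptr = const_cast<char*>(in.data());
    size_t inleft = in.size();
    char* outptr = reinterpret_cast<char*>(&out[0]);
    size_t outleft = out.size() * sizeof(char16_t);

    size_t rc = iconv(cd, &inptr, &inleft, &outptr, &outleft);
    iconv_close(cd);
    if (rc == (size_t)-1)
        throw std::runtime_error("iconv failed");

    out.resize(out.size() - outleft / sizeof(char16_t));   // trim the unused tail
    return out;
}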
I want to read a Unicode (UTF-8) file character by character, but I don't know how to read from a file one character at a time.
Can anyone tell me how to do that?
First, look at how UTF-8 encodes characters: http://en.wikipedia.org/wiki/UTF-8#Description
Each Unicode character is encoded as one or more UTF-8 bytes. After you read the next byte from the file, then according to that table:
(Row 1) If the most significant bit is 0 ((ch & 0x80) == 0), you have your character.
(Row 2) If the three most significant bits are 110 ((ch & 0xE0) == 0xC0), you have to read another byte. The bits 4, 3, 2 of the first UTF-8 byte (110YYYyy) make the first byte of the Unicode character (00000YYY), and its two least significant bits together with the 6 least significant bits of the next byte (10xxxxxx) make the second byte of the Unicode character (yyxxxxxx). You can do the bit arithmetic using shifts and logical operators of C/C++ easily:
UnicodeByte1 = (UTF8Byte1 >> 2) & 0x07;
UnicodeByte2 = ( (UTF8Byte1 << 6) & 0xC0 ) | (UTF8Byte2 & 0x3F);
And so on...
Sounds a bit complicated, but it's not difficult if you know how to modify the bits to put them in proper place to decode a UTF-8 string.
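Putting that together, a sketch of a routine that reads one code point at a time from a stream might look like this (minimal error handling; it assumes the input is well-formed UTF-8):
#include <cstdint>
#include <istream>
#include <string>

// Read one Unicode code point from a UTF-8 stream; returns -1 on end of file.
long read_code_point(std::istream& in)
{
    int first = in.get();
    if (first == std::char_traits<char>::eof())
        return -1;

    std::uint32_t cp;
    int extra;                                        // number of continuation bytes
    if      ((first & 0x80) == 0)    { cp = first;        extra = 0; }
    else if ((first & 0xE0) == 0xC0) { cp = first & 0x1F; extra = 1; }
    else if ((first & 0xF0) == 0xE0) { cp = first & 0x0F; extra = 2; }
    else                             { cp = first & 0x07; extra = 3; }

    for (int i = 0; i < extra; ++i)
    {
        int next = in.get();
        if (next == std::char_traits<char>::eof())
            return -1;
        cp = (cp << 6) | (next & 0x3F);               // shift in 6 bits per continuation byte
    }
    return static_cast<long>(cp);
}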
UTF-8 is ASCII compatible, so you can read a UTF-8 file like you would an ASCII file. The C++ way to read a whole file into a string is:
#include <iostream>
#include <string>
#include <fstream>
std::ifstream fs("my_file.txt");
std::string content((std::istreambuf_iterator<char>(fs)), std::istreambuf_iterator<char>());
The resultant string has characters corresponding to UTF-8 bytes. You could loop through it like so:
for (std::string::iterator i = content.begin(); i != content.end(); ++i) {
    char nextChar = *i;
    // do stuff here.
}
Alternatively, you could open the file in binary mode, and then move through each byte that way:
std::ifstream fs("my_file.txt", std::ifstream::binary);
if (fs.is_open()) {
char nextChar;
while (fs.good()) {
fs >> nextChar;
// do stuff here.
}
}
If you want to do more complicated things, I suggest you take a peek at Qt. I've found it rather useful for this sort of stuff. At least, less painful than ICU, for doing largely practical things.
QFile file("my_file.text");
if (file.open(QIODevice::ReadOnly | QIODevice::Text)) {
    QTextStream in(&file);
    in.setCodec("UTF-8");
    QString contents = in.readAll();
    return;
}
In theory stdlib.h has a function mblen which should return the length of a multibyte symbol. But in my case it returned -1 for the first byte of a multibyte symbol, and it kept returning -1 after that. So I wrote the following:
// The signature below is inferred from the body; the original snippet had none.
int utf8_symbol_length(const char* i_ch)
{
    if(i_ch == nullptr) return -1;

    // Count the leading 1-bits of the first byte.
    int l = 0;
    char ch = *i_ch;
    int mask = 0x80;
    while(ch & mask) {
        l++;
        mask = (mask >> 1);
    }

    if (l == 0) return 1;            // ASCII byte: a one-byte symbol
    if (l == 1 || l > 4) return -1;  // continuation byte or invalid lead byte
    return l;                        // 2-, 3- or 4-byte symbol
}
It took less time than researching how to use mblen properly.
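For reference, mblen() interprets bytes according to the LC_CTYPE category of the current C locale, which is the usual reason it keeps returning -1 under the default "C" locale. A sketch of the setup (the locale name is an assumption and varies per system):
#include <clocale>
#include <cstdlib>
#include <cstdio>

int main()
{
    // Without this, the default "C" locale typically rejects UTF-8 lead bytes.
    if (std::setlocale(LC_ALL, "en_US.UTF-8") == nullptr)
        std::printf("UTF-8 locale not available\n");

    const char* s = "\xE4\xB8\xAD";           // U+4E2D in UTF-8
    std::mblen(nullptr, 0);                   // reset the internal shift state
    int len = std::mblen(s, MB_CUR_MAX);
    std::printf("mblen returned %d\n", len);  // expect 3 with a UTF-8 locale
    return 0;
}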
Try this: get the file and then loop through the text based on its length.
Pseudocode:
String s = file.toString();
int len = s.length();

for(int i = 0; i < len; i++)
{
    String the_character = s[i];
    // TODO : Do your thing :o)
}
I know that to get a unicode character in C++ I can do:
std::wstring str = L"\u4FF0";
However, what if I want to get all the characters in the range 4FF0 to 5FF0? Is it possible to dynamically build a unicode character? What I have in mind is something like this pseudo-code:
for (int i = 20464; i < 24560; i++) { // From 4FF0 to 5FF0
    std::wstring str = L"\u" + hexa(i); // build the unicode character
    // do something with str
}
How would I do that in C++?
The wchar_t type held within a wstring is an integer type, so you can use it directly:
for (wchar_t c = 0x4ff0; c <= 0x5ff0; ++c) {
    std::wstring str(1, c);
    // do something with str
}
Be careful trying to do this with characters above 0xffff, since depending on the platform (e.g. Windows) they will not fit into a wchar_t.
If for example you wanted to see the Emoticon block in a string, you can create surrogate pairs:
std::wstring str;
for (int c = 0x1f600; c <= 0x1f64f; ++c) {
    if (c <= 0xffff || sizeof(wchar_t) > 2)
        str.append(1, (wchar_t)c);
    else {
        str.append(1, (wchar_t)(0xd800 | ((c - 0x10000) >> 10)));
        str.append(1, (wchar_t)(0xdc00 | ((c - 0x10000) & 0x3ff)));
    }
}
You cannot iterate over Unicode characters as if they were an array; some characters are built up out of multiple 'char's (UTF-8) or multiple 'WCHAR's (UTF-16), because of diacritics etc. If you're really serious about this stuff you should use an API like Uniscribe or ICU.
Some resources to read:
http://en.wikipedia.org/wiki/UTF-16/UCS-2
http://en.wikipedia.org/wiki/Precomposed_character
http://en.wikipedia.org/wiki/Combining_character
http://scripts.sil.org/cms/scripts/page.php?item_id=UnicodeNames#4d2aa980
http://en.wikipedia.org/wiki/Unicode_equivalence
http://msdn.microsoft.com/en-us/library/dd374126.aspx
What about:
for (std::wstring::value_type i(0x4ff0); i <= 0x5ff0; ++i)
{
    std::wstring str(1, i);
}
Note that the code has not been tested, so it may not compile as-is.
Also, depending on the platform you are working on, a wstring's character unit may be 2, 4, or N bytes wide, so be intentional about how you use it.
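A quick way to see which case you are in on a given platform:
#include <cstdio>

int main()
{
    // Typically 2 bytes on Windows (UTF-16 units) and 4 bytes on most
    // Unix-like systems (UTF-32 units).
    std::printf("wchar_t is %u bytes on this platform\n",
                static_cast<unsigned>(sizeof(wchar_t)));
    return 0;
}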