How to convert ANSI byte to Unicode string? - c++

I have a vector<BYTE> that represents characters in a string. I want to interpret those characters as ASCII characters and store them in a Unicode (UTF-16) string. The current code assumes that the characters in the vector<BYTE> are Unicode rather than ASCII. This works fine for standard ASCII, but fails for extended ASCII characters. These characters need to be interpreted using the current code page retrieved via GetACP(). How would I go about creating a Unicode (UTF-16) string with these ASCII characters?
EDIT: I believe the solution should have something to do with the macros discussed here: http://msdn.microsoft.com/en-us/library/87zae4a3(v=vs.80).aspx I'm just not exactly sure how the actual implementation would go.
int ExtractByteArray(CATLString* pszResult, const CByteVector* pabData)
{
    // place the data into the output cstring
    pszResult->Empty();
    for (int iIndex = 0; iIndex < pabData->GetSize(); iIndex++)
        *pszResult += (TCHAR)pabData->GetAt(iIndex);   // plain cast: no code-page conversion is performed
    return RC_SUCCESS;
}

You should use MultiByteToWideChar to convert that string to Unicode (UTF-16).
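A minimal sketch of that approach (the helper name AnsiBytesToWide is just for illustration), assuming the bytes in the vector are in the current ANSI code page; passing CP_ACP tells the function to use the code page returned by GetACP():
#include <windows.h>
#include <string>
#include <vector>

// Convert bytes in the current ANSI code page to a UTF-16 string.
std::wstring AnsiBytesToWide(const std::vector<BYTE>& data)
{
    if (data.empty())
        return std::wstring();

    const char* src = reinterpret_cast<const char*>(data.data());
    const int srcLen = static_cast<int>(data.size());

    // First call computes the required number of wide characters.
    const int wideLen = MultiByteToWideChar(CP_ACP, 0, src, srcLen, NULL, 0);
    std::wstring result(wideLen, L'\0');

    // Second call performs the actual conversion.
    MultiByteToWideChar(CP_ACP, 0, src, srcLen, &result[0], wideLen);
    return result;
}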

Since you're using MFC, let CString do the job.
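For example, a sketch that leans on CString's conversion constructors (the helper name BytesToUnicode is just for illustration; MFC/ATL headers are assumed). Constructing a CStringW from narrow characters converts them using the ANSI code page:
#include <atlstr.h>
#include <vector>

CStringW BytesToUnicode(const std::vector<BYTE>& data)
{
    if (data.empty())
        return CStringW();

    // Wrap the raw bytes as a narrow (ANSI) string of known length...
    CStringA ansi(reinterpret_cast<const char*>(data.data()),
                  static_cast<int>(data.size()));

    // ...and let the CStringW constructor perform the ANSI-to-UTF-16 conversion.
    return CStringW(ansi);
}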

I have a vector<BYTE> that represents characters in a string. I want to interpret those characters as ASCII characters and store them in a Unicode (UTF-16) string
You should use std::vector<BYTE> only when you are working with binary data. When working with strings, use std::string instead. Note that this std::string object will contain special characters that are encoded as sequences of one or more bytes (hence called multi-byte characters), but these are not ASCII characters.
Once you use std::string, you can use MultiByteToWideChar to create your own function that converts a std::string (containing multi-byte UTF-8 characters) into a std::wstring containing UTF-16 encoded code points:
// multi byte to wide char:
std::wstring s2ws(const std::string& str)
{
    if (str.empty())
        return std::wstring();
    int size_needed = MultiByteToWideChar(CP_UTF8, 0, &str[0], (int)str.size(), NULL, 0);
    std::wstring wstrTo(size_needed, 0);
    MultiByteToWideChar(CP_UTF8, 0, &str[0], (int)str.size(), &wstrTo[0], size_needed);
    return wstrTo;
}

Related

Converting to UTF-8 from ToUnicodeEx()

I get input using GetAsyncKeyState(), which I then convert to Unicode using ToUnicodeEx():
wchar_t character[1];
ToUnicodeEx(i, scanCode, keyboardState, character, 1, 0, layout);
I can write this to a file using wfstream like so:
wchar_t buffer[128]; // Will not print unicode without these 2 lines
file.rdbuf()->pubsetbuf(buffer, 128);
file.put(0xFEFF); // BOM needed since it's encoded using UCS-2 LE
file << character[0];
When I open this file in Notepad++ it's in UCS-2 LE, but I want it to be in UTF-8 format. I believe ToUnicodeEx() is returning it in UCS-2 LE format, and it also only works with wide chars. Is there any way to do this using either fstream or wfstream by somehow converting it to UTF-8 first? Thanks!
You might want to use the WideCharToMultiByte function.
For example:
wchar_t buffer[LEN];           // input buffer
char output_buffer[OUT_LEN];   // output buffer where the UTF-8 string will be written
int num = WideCharToMultiByte(
    CP_UTF8,
    0,
    buffer,
    number_of_characters_in_buffer, // or -1 if buffer is null-terminated
    output_buffer,
    size_in_bytes_of_output_buffer,
    NULL,
    NULL);
The Windows API generally refers to UTF-16 as "Unicode", which is a little confusing. This means most Unicode-aware Win32 functions operate on or return UTF-16 strings.
So ToUnicodeEx returns a UTF-16 string.
If you need this as UTF-8 you'll need to convert it using WideCharToMultiByte.
Thank you for all the help, I've managed to solve my problem with additional help from a blog post about WideCharToMultiByte() and UTF-8 here.
This function converts wide char arrays to a UTF-8 string:
// Takes in pointer to wide char array and length of the array
std::string ConvertCharacters(const wchar_t* buffer, int len)
{
    int nChars = WideCharToMultiByte(CP_UTF8, 0, buffer, len, NULL, 0, NULL, NULL);
    if (nChars == 0)
    {
        return u8"";
    }
    std::string newBuffer;
    newBuffer.resize(nChars);
    WideCharToMultiByte(CP_UTF8, 0, buffer, len, &newBuffer[0], nChars, NULL, NULL);
    return newBuffer;
}
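A short usage sketch (the wrapper and file name are assumptions, not part of the original code): once the text is UTF-8 in a std::string, a plain std::ofstream can write it directly, with no wide buffer and no UCS-2 BOM:
#include <fstream>

void AppendUtf8(const wchar_t* character, int len)
{
    std::ofstream file("keys.txt", std::ios::binary | std::ios::app);
    file << ConvertCharacters(character, len);   // writes the UTF-8 bytes as-is
}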

Unicode to UTF8 Conversion

I am trying to convert a Unicode string to a UTF-8 string:
#include <stdio.h>
#include <string>
#include <atlconv.h>
#include <atlstr.h>
using namespace std;
CStringA ConvertUnicodeToUTF8(const CStringW& uni)
{
    if (uni.IsEmpty()) return "";
    CStringA utf8;
    int cc = 0;
    if ((cc = WideCharToMultiByte(CP_UTF8, 0, uni, -1, NULL, 0, 0, 0) - 1) > 0)
    {
        char *buf = utf8.GetBuffer(cc);
        if (buf) WideCharToMultiByte(CP_UTF8, 0, uni, -1, buf, cc, 0, 0);
        utf8.ReleaseBuffer();
    }
    return utf8;
}
int main(void)
{
    string u8str = ConvertUnicodeToUTF8(L"gökhan");
    printf("%d\n", u8str.size());
    return 0;
}
My question is: should the u8str.size() return value be 6? It prints 7 now!
7 is correct. The non-ASCII character ö is encoded with two bytes in UTF-8.
By definition, "multi-byte" means that a single Unicode character may occupy more than one byte (historically up to 6 in UTF-8, capped at 4 by the current standard); see here: How many bytes does one Unicode character take?
Further reading: http://www.joelonsoftware.com/articles/Unicode.html
A Unicode codepoint uses 2 or 4 bytes in UTF-16, but 1-4 bytes in UTF-8, depending on its value. It is possible for a codepoint that uses 2 bytes in UTF-16 to use 3 bytes in UTF-8, so a UTF-8 string may use more bytes than the corresponding UTF-16 string. UTF-8 tends to be more compact for Latin/Western languages, but UTF-16 tends to be more compact for East Asian languages.
std::(w)string::size() and CStringT::GetLength() count the number of encoded codeunits, not the number of codepoints. In your example, "gökhan" is encoded as:
UTF-16LE: 0x0067 0x00f6 0x006b 0x0068 0x0061 0x006e
UTF-16BE: 0x6700 0xf600 0x6b00 0x6800 0x6100 0x6e00
UTF-8: 0x67 0xc3 0xb6 0x6b 0x68 0x61 0x6e
Notice that ö is encoded using 1 codeunit in UTF-16 (LE: 0x00f6, BE: 0xf600) but uses 2 codeunits in UTF-8 (0xc3 0xb6). That is why your UTF-8 string has a size of 7 instead of 6.
That being said, when calling WideCharToMultiByte() and MultiByteToWideChar() with -1 as the source length, the function has to manually count the characters, and the return value will include room for a null terminator when the destination pointer is NULL. You don't need that extra space when using CStringA/W, std::(w)string, etc, and you don't need the overhead of counting characters when the source already knows its length. You should always specify the actual source length when you know it, eg:
CStringA ConvertUnicodeToUTF8(const CStringW& uni)
{
    CStringA utf8;
    int cc = WideCharToMultiByte(CP_UTF8, 0, uni, uni.GetLength(), NULL, 0, 0, 0);
    if (cc > 0)
    {
        char *buf = utf8.GetBuffer(cc);
        if (buf)
        {
            cc = WideCharToMultiByte(CP_UTF8, 0, uni, uni.GetLength(), buf, cc, 0, 0);
            utf8.ReleaseBuffer(cc);
        }
    }
    return utf8;
}
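A quick check against the example from the question (a sketch; the printf call mirrors the original main): because the explicit source length reserves no room for a null terminator, the count reflects only the encoded code units:
CStringA utf8 = ConvertUnicodeToUTF8(L"gökhan");
printf("%d\n", utf8.GetLength());   // prints 7: "ö" takes two UTF-8 code units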

convert MFC's CString to int for both ASCII and UNICODE

Converting CString to an int in ASCII mode is as simple as
CString s("123");
int n = atoi(s);
However that doesn't work for projects in UNICODE mode as CString becomes a wide-char string.
How do I write my code to cover both ASCII and UNICODE modes without extra if statements?
Turns out there's a _ttoi() available just for that purpose:
CString s( _T("123") );
int n = _ttoi(s);
This works for both modes with no extra effort.
If you need to convert hexadecimal (or other-base) numbers you can resort to a more generic strtol() variant:
CString s( _T("0xFA3") );
int n = _tcstol(s, nullptr, 16);
There's a special version of CString that uses multibyte characters even if your build is specified for wide characters - CStringA. It will also convert from wide characters automatically.
CString s(_T("123"));
CStringA sa = s;
int n = atoi(sa);
There's a corresponding CStringW that only uses wide characters.
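For example, a minimal sketch of the explicit wide-character route (no TCHAR indirection), assuming the same MFC/CRT setup as the snippets above:
CStringW ws(L"123");
int n = _wtoi(ws);   // _wtoi is the wide-character counterpart of atoi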

Visual Studio multibyte chars to single bytes

Is there a simple way to convert multibyte UTF8 data (from the Google Contacts API via https://www.google.com/m8/feeds/) to single bytes? I know the extended ASCII set is non-standard, but, for example, my program, which will display the info in an MFC CListBox, is quite happy to show 'E acute' as 0xE9. I only need it to cope with a few similar European symbols. I've discovered I can convert everything with MultiByteToWideChar(), but I don't want to have to change lots of functions to accept wide characters if possible.
Thanks.
If you need to convert char * from UTF8 to ANSI, try the following function:
// change encoding from UTF8 to ANSI
char* change_encoding_from_UTF8_to_ANSI(char* szU8)
{
    // First convert UTF-8 to UTF-16...
    int wcsLen = ::MultiByteToWideChar(CP_UTF8, 0, szU8, strlen(szU8), NULL, 0);
    wchar_t* wszString = new wchar_t[wcsLen + 1];
    ::MultiByteToWideChar(CP_UTF8, 0, szU8, strlen(szU8), wszString, wcsLen);
    wszString[wcsLen] = '\0';

    // ...then UTF-16 to ANSI (the current code page); characters with no
    // ANSI equivalent are replaced by the system default character.
    int ansiLen = ::WideCharToMultiByte(CP_ACP, 0, wszString, wcslen(wszString), NULL, 0, NULL, NULL);
    char* szAnsi = new char[ansiLen + 1];
    ::WideCharToMultiByte(CP_ACP, 0, wszString, wcslen(wszString), szAnsi, ansiLen, NULL, NULL);
    szAnsi[ansiLen] = '\0';

    delete[] wszString;
    return szAnsi;   // caller owns the buffer and must delete[] it
}
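A hypothetical usage sketch, just to make the ownership explicit (the input variable utf8FromGoogle is assumed):
char* ansi = change_encoding_from_UTF8_to_ANSI(utf8FromGoogle);
// ... add the ANSI text to the CListBox ...
delete[] ansi;   // the function allocates with new[], so the caller must free the buffer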
UTF-8 has a 1-to-1 mapping with ASCII characters, so if you are receiving ASCII characters as UTF-8 ones, AFAIK you can read them directly as ASCII. If you have non-ASCII characters then there's no way to express them in ASCII (any byte >= 0x80).
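A minimal sketch of that check (the helper name IsPureAscii is an assumption): a UTF-8 buffer is plain ASCII exactly when every byte is below 0x80, so a simple scan tells you whether a single-byte copy would be lossless:
#include <string>

bool IsPureAscii(const std::string& utf8)
{
    for (unsigned char c : utf8)
        if (c >= 0x80)   // lead or continuation byte of a multi-byte sequence
            return false;
    return true;
}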

C++ string encoding UTF8 / unicode

I am trying to be able to send the character "Т" (not a normal capital T, Unicode decimal value 1058) from C++ to VB.
However, with the method below, Message is returned to VB and it appears as "Т", which is the UTF-8 encoding of the above character displayed as ANSI.
#if defined(_MSC_VER) && _MSC_VER > 1310
# define utf8(str) ConvertToUTF8(L##str)
const char * ConvertToUTF8(const wchar_t * pStr) {
    static char szBuf[1024];
    WideCharToMultiByte(CP_UTF8, 0, pStr, -1, szBuf, sizeof(szBuf), NULL, NULL);
    return szBuf;
}
#else
# define utf8(str) str
#endif
BSTR _stdcall chatTest()
{
    BSTR Message;
    CString temp("temp test");
    temp += utf8("\u0422");
    int len = temp.GetLength();
    Message = SysAllocStringByteLen((LPCTSTR)temp, len + 1);
    return Message;
}
If I just do temp += ("\u0422"); without the utf8 function, it sends the data as "?", and it's actually a question mark (sometimes Unicode characters show up as question marks in VB but still have the correct Unicode decimal value; that is not the case here, it really is changed to a question mark).
In VB, if I output the String variable that holds the data from Message (where it appears as "Т") to a text file, it appears as "Т" in the file.
So as far as I can tell it's UTF-8 in C++, then somehow gets converted to ANSI in VB (or before it's sent?), and then when output to a file it's changed back to UTF-8?
I just need to keep the "Т" intact when sending from C++ to VB. I know VB strings can hold that character because from another source within VB I am able to store it (it appears as a "?", but has the proper Unicode decimal value).
Any help is greatly appreciated.
Thanks
A BSTR is not UTF-8, it's UTF-16, which is what you get with the L"" prefix. Take out the UTF-8 conversion and use CStringW. And use LPCWSTR instead of LPCTSTR.
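A minimal sketch of what the corrected function could look like under that advice (SysAllocStringLen takes a character count, unlike SysAllocStringByteLen, which takes a byte count):
BSTR _stdcall chatTest()
{
    CStringW temp(L"temp test");
    temp += L"\u0422";                                  // Cyrillic capital Te, U+0422
    return SysAllocStringLen(temp, temp.GetLength());   // BSTRs hold UTF-16, so no UTF-8 step is needed
}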