How to convert a TCHAR array to std::string? - c++

How do I convert a TCHAR array to std::string (not to std::basic_string)?

TCHAR is just a typedef that, depending on your compilation configuration, expands to either char or wchar_t.
The Standard Template Library supports both narrow character sets (with std::string) and wide character sets (with std::wstring). All you need to do is typedef String as either std::string or std::wstring depending on your compilation configuration. To maintain flexibility you can use the following code:
#ifndef UNICODE
typedef std::string String;
#else
typedef std::wstring String;
#endif
Now you may use String in your code and let the compiler handle the nasty parts. String will have constructors that let you build it directly from a TCHAR buffer.
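For illustration, here is a minimal sketch of how such a typedef might be used (the buffer contents and the use of _T() are my own example, not part of the original answer):
#include <string>
#include <tchar.h>

#ifndef UNICODE
typedef std::string String;
#else
typedef std::wstring String;
#endif

int main()
{
    TCHAR buffer[] = _T("hello");   // char[] in an ANSI build, wchar_t[] in a Unicode build
    String s(buffer);               // picks the matching basic_string specialization
    return 0;
}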

My answer is late, I'll admit that, but with the answer from 'Alok Save' and some research I've found a good way! (Note: I didn't test this version a lot, so it might not work in every case, but from what I tested it should):
const TCHAR* t = SomeFunctionReturningTCHAR(); // assume this yields a NUL-terminated TCHAR buffer
std::string str;
#ifndef UNICODE
str = t;
#else
std::wstring wStr = t;
// Narrows each character; only safe if the content is plain ASCII.
str = std::string(wStr.begin(), wStr.end());
#endif
std::cout << str << std::endl; //<-- should work!

TCHAR is either char or wchar_t, so a
typedef basic_string<TCHAR> tstring;
is one way of doing it.
The other is to skip char altogether and just use std::wstring.
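A minimal sketch of the tstring approach, assuming <tchar.h> for the _T() macro (the buffer contents are just an example):
#include <string>
#include <tchar.h>

typedef std::basic_string<TCHAR> tstring;

int main()
{
    TCHAR buffer[] = _T("example");
    tstring s(buffer);      // always matches the width of TCHAR
    s += _T('!');
    return 0;
}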

The TCHAR type is either char or wchar_t, depending on your project settings.
#ifdef UNICODE
// TCHAR type is wchar_t
#else
// TCHAR type is char
#endif
So if you must use std::string instead of std::wstring, you need a conversion function. You can use wcstombs or the Windows API WideCharToMultiByte.
TCHAR * text; // assume this points to a NUL-terminated string
#ifdef UNICODE
// Plain C runtime alternative:
//   std::vector<char> buffer( ( wcslen(text) + 1 ) * sizeof(wchar_t) );
//   wcstombs(&buffer[0], text, buffer.size());

// Windows API (I would use this)
std::vector<char> buffer;
int size = WideCharToMultiByte(CP_UTF8, 0, text, -1, NULL, 0, NULL, NULL);
if (size > 0) {
    buffer.resize(size);
    WideCharToMultiByte(CP_UTF8, 0, text, -1, &buffer[0], static_cast<int>(buffer.size()), NULL, NULL);
}
else {
    // Error handling
}
std::string string(&buffer[0]);
#else
std::string string(text);
#endif

Quick and dirty solution:
TCHAR str[256] = {};
// put something in str...
// convert to string (narrow each TCHAR, stopping at the terminating NUL
// instead of copying the whole 256-element buffer)
std::string strtmp(&str[0], &str[_tcslen(str)]);
std::cout << strtmp << std::endl;

Simple!
std::string tcharToChar(const TCHAR* buffer)
{
    std::string returnValue;
    if (buffer == NULL)
    {
        return returnValue; // empty string for a null input
    }
    const int lengthOfBuffer = lstrlen(buffer);
    returnValue.reserve(lengthOfBuffer);
    for (int index = 0; index < lengthOfBuffer; index++)
    {
        // Narrowing cast: characters outside the ASCII range will be mangled.
        returnValue.push_back(static_cast<char>(buffer[index]));
    }
    return returnValue;
}

Related

how can I convert wstring to u16string?

I want to convert wstring to u16string in C++.
I can convert wstring to string, or the reverse, but I don't know how to convert to u16string.
u16string CTextConverter::convertWstring2U16(wstring str)
{
int iSize;
u16string szDest[256] = {};
memset(szDest, 0, 256);
iSize = WideCharToMultiByte(CP_UTF8, NULL, str.c_str(), -1, NULL, 0,0,0);
WideCharToMultiByte(CP_UTF8, NULL, str.c_str(), -1, szDest, iSize,0,0);
u16string s16 = szDest;
return s16;
}
The error is at the szDest argument of WideCharToMultiByte(CP_UTF8, NULL, str.c_str(), -1, szDest, iSize, 0, 0); because a u16string cannot be used as an LPSTR.
How can I fix this code?
For a platform-independent solution see this answer.
If you need a solution only for the Windows platform, the following code will be sufficient:
std::wstring wstr( L"foo" );
std::u16string u16str( wstr.begin(), wstr.end() );
On the Windows platform, a std::wstring is interchangeable with std::u16string because sizeof(wstring::value_type) == sizeof(u16string::value_type) and both are UTF-16 (little endian) encoded.
wstring::value_type = wchar_t
u16string::value_type = char16_t
The only difference is that wchar_t is signed, whereas char16_t is unsigned. So you only have to do sign conversion, which can be performed by using the u16string constructor that takes an iterator pair as arguments. This constructor will implicitly convert wchar_t to char16_t.
Full example console application:
#include <windows.h>
#include <string>
int main()
{
static_assert( sizeof(std::wstring::value_type) == sizeof(std::u16string::value_type),
"std::wstring and std::u16string are expected to have the same character size" );
std::wstring wstr( L"foo" );
std::u16string u16str( wstr.begin(), wstr.end() );
// The u16string constructor performs an implicit conversion like:
wchar_t wch = L'A';
char16_t ch16 = wch;
// Need to reinterpret_cast because char16_t const* is not implicitly convertible
// to LPCWSTR (aka wchar_t const*).
::MessageBoxW( 0, reinterpret_cast<LPCWSTR>( u16str.c_str() ), L"test", 0 );
return 0;
}
Update
I had thought the standard version did not work, but in fact this was simply due to bugs in the Visual C++ and libstdc++ 3.4.21 runtime libraries. It does work with clang++ -std=c++14 -stdlib=libc++. Here is a version that tests whether the standard method works on your compiler:
#include <codecvt>
#include <cstdlib>
#include <cstring>
#include <cwctype>
#include <iostream>
#include <locale>
#include <clocale>
#include <vector>
using std::cout;
using std::endl;
using std::exit;
using std::memcmp;
using std::size_t;
using std::wcout;
#if _WIN32 || _WIN64
// Windows needs a little non-standard magic for this to work.
#include <io.h>
#include <fcntl.h>
#include <locale.h>
#endif
using std::size_t;
void init_locale(void)
// Does magic so that wcout can work.
{
#if _WIN32 || _WIN64
// Windows needs a little non-standard magic.
constexpr char cp_utf16le[] = ".1200";
setlocale( LC_ALL, cp_utf16le );
_setmode( _fileno(stdout), _O_U16TEXT );
#else
// The correct locale name may vary by OS, e.g., "en_US.utf8".
constexpr char locale_name[] = "";
std::locale::global(std::locale(locale_name));
std::wcout.imbue(std::locale());
#endif
}
int main(void)
{
constexpr char16_t msg_utf16[] = u"¡Hola, mundo! \U0001F600"; // Shouldn't assume endianness.
constexpr wchar_t msg_w[] = L"¡Hola, mundo! \U0001F600";
constexpr char32_t msg_utf32[] = U"¡Hola, mundo! \U0001F600";
constexpr char msg_utf8[] = u8"¡Hola, mundo! \U0001F600";
init_locale();
const std::codecvt_utf16<wchar_t, 0x1FFFF, std::little_endian> converter_w;
const size_t max_len = sizeof(msg_utf16);
std::vector<char> out(max_len);
std::mbstate_t state{}; // value-initialize before use
const wchar_t* from_w = nullptr;
char* to_next = nullptr;
converter_w.out( state, msg_w, msg_w+sizeof(msg_w)/sizeof(wchar_t), from_w, out.data(), out.data() + out.size(), to_next );
if (memcmp( msg_utf8, out.data(), sizeof(msg_utf8) ) == 0 ) {
wcout << L"std::codecvt_utf16<wchar_t> converts to UTF-8, not UTF-16!" << endl;
} else if ( memcmp( msg_utf16, out.data(), max_len ) != 0 ) {
wcout << L"std::codecvt_utf16<wchar_t> conversion not equal!" << endl;
} else {
wcout << L"std::codecvt_utf16<wchar_t> conversion is correct." << endl;
}
out.clear();
out.resize(max_len);
const std::codecvt_utf16<char32_t, 0x1FFFF, std::little_endian> converter_u32;
const char32_t* from_u32 = nullptr;
converter_u32.out( state, msg_utf32, msg_utf32+sizeof(msg_utf32)/sizeof(char32_t), from_u32, out.data(), out.data() + out.size(), to_next );
if ( memcmp( msg_utf16, out.data(), max_len ) != 0 ) {
wcout << L"std::codecvt_utf16<char32_t> conversion not equal!" << endl;
} else {
wcout << L"std::codecvt_utf16<char32_t> conversion is correct." << endl;
}
wcout << msg_w << endl;
return EXIT_SUCCESS;
}
Previous
A bit late to the game, but here’s a version that additionally checks whether wchar_t is 32-bits (as it is on Linux), and if so, performs surrogate-pair conversion. I recommend saving this source as UTF-8 with a BOM. Here is a link to it on ideone.
#include <cassert>
#include <cwctype>
#include <cstdlib>
#include <iomanip>
#include <iostream>
#include <locale>
#include <string>
#if _WIN32 || _WIN64
// Windows needs a little non-standard magic for this to work.
#include <io.h>
#include <fcntl.h>
#include <locale.h>
#endif
using std::size_t;
void init_locale(void)
// Does magic so that wcout can work.
{
#if _WIN32 || _WIN64
// Windows needs a little non-standard magic.
constexpr char cp_utf16le[] = ".1200";
setlocale( LC_ALL, cp_utf16le );
_setmode( _fileno(stdout), _O_U16TEXT );
#else
// The correct locale name may vary by OS, e.g., "en_US.utf8".
constexpr char locale_name[] = "";
std::locale::global(std::locale(locale_name));
std::wcout.imbue(std::locale());
#endif
}
std::u16string make_u16string( const std::wstring& ws )
/* Creates a UTF-16 string from a wide-character string. Any wide characters
* outside the allowed range of UTF-16 are mapped to the sentinel value U+FFFD,
* per the Unicode documentation. (http://www.unicode.org/faq/private_use.html
* retrieved 12 March 2017.) Unpaired surrogates in ws are also converted to
* sentinel values. Noncharacters, however, are left intact. As a fallback,
* if wide characters are the same size as char16_t, this does a more trivial
* construction using that implicit conversion.
*/
{
/* We assume that, if this test passes, a wide-character string is already
* UTF-16, or at least converts to it implicitly without needing surrogate
* pairs.
*/
if ( sizeof(wchar_t) == sizeof(char16_t) ) {
return std::u16string( ws.begin(), ws.end() );
} else {
/* The conversion from UTF-32 to UTF-16 might possibly require surrogates.
* A surrogate pair suffices to represent all wide characters, because all
* characters outside the range will be mapped to the sentinel value
* U+FFFD. Add one character for the terminating NUL.
*/
const size_t max_len = 2 * ws.length() + 1;
// Our temporary UTF-16 string.
std::u16string result;
result.reserve(max_len);
for ( const wchar_t& wc : ws ) {
const std::wint_t chr = wc;
if ( chr < 0 || chr > 0x10FFFF || (chr >= 0xD800 && chr <= 0xDFFF) ) {
// Invalid code point. Replace with sentinel, per Unicode standard:
constexpr char16_t sentinel = u'\uFFFD';
result.push_back(sentinel);
} else if ( chr < 0x10000UL ) { // In the BMP.
result.push_back(static_cast<char16_t>(wc));
} else {
const char16_t leading = static_cast<char16_t>(
((chr-0x10000UL) / 0x400U) + 0xD800U );
const char16_t trailing = static_cast<char16_t>(
((chr-0x10000UL) % 0x400U) + 0xDC00U );
result.append({leading, trailing});
} // end if
} // end for
/* The returned string is shrunken to fit, which might not be the Right
* Thing if there is more to be added to the string.
*/
result.shrink_to_fit();
// We depend here on the compiler to optimize the move constructor.
return result;
} // end if
// Not reached.
}
int main(void)
{
static const std::wstring wtest(L"☪☮∈✡℩☯✝ \U0001F644");
static const std::u16string u16test(u"☪☮∈✡℩☯✝ \U0001F644");
const std::u16string converted = make_u16string(wtest);
init_locale();
std::wcout << L"sizeof(wchar_t) == " << sizeof(wchar_t) << L".\n";
for( size_t i = 0; i <= u16test.length(); ++i ) {
if ( u16test[i] != converted[i] ) {
std::wcout << std::hex << std::showbase
<< std::right << std::setfill(L'0')
<< std::setw(4) << (unsigned)converted[i] << L" ≠ "
<< std::setw(4) << (unsigned)u16test[i] << L" at "
<< i << L'.' << std::endl;
return EXIT_FAILURE;
} // end if
} // end for
std::wcout << wtest << std::endl;
return EXIT_SUCCESS;
}
Footnote
Since someone asked: The reason I suggest UTF-8 with BOM is that some compilers, including MSVC 2015, will assume a source file is encoded according to the current code page unless there is a BOM or you specify an encoding on the command line. No encoding works on all toolchains, unfortunately, but every tool I’ve used that’s modern enough to support C++14 also understands the BOM.
- To convert CString to std::wstring and string
string CString2string(CString str)
{
int bufLen = WideCharToMultiByte(CP_UTF8, 0, (LPCTSTR)str, -1, NULL, 0, NULL,NULL);
char *buf = new char[bufLen];
WideCharToMultiByte(CP_UTF8, 0, (LPCTSTR)str, -1, buf, bufLen, NULL, NULL);
string sRet(buf);
delete[] buf;
return sRet;
}
CString strFileName = "test.txt";
wstring wFileName(strFileName.GetBuffer());
strFileName.ReleaseBuffer();
string sFileName = CString2string(strFileName);
- To convert string to CString
CString string2CString(string s)
{
int bufLen = MultiByteToWideChar(CP_ACP, 0, s.c_str(), -1, NULL, 0);
WCHAR *buf = new WCHAR[bufLen];
MultiByteToWideChar(CP_ACP, 0, s.c_str(), -1, buf, bufLen);
CString strRet(buf);
delete[] buf;
return strRet;
}
string sFileName = "test.txt";
CString strFileName = string2CString(sFileName);

How do I convert a string to a wstring using the value of the string?

I'm new to C++ and I have this issue. I have a string called DATA_DIR that I need to format into a wstring.
string str = DATA_DIR;
std::wstring temp(L"%s",str);
Visual Studio tells me that there is no instance of constructor that matches with the argument list. Clearly, I'm doing something wrong.
I found this example online
std::wstring someText( L"hello world!" );
which apparently works (no compile errors). My question is, how do I get the string value stored in DATA_DIR into the wstring constructor as opposed to something arbitrary like "hello world"?
Here is an implementation using wcstombs (Updated):
#include <iostream>
#include <cstdlib>
#include <string>
std::string wstring_from_bytes(std::wstring const& wstr)
{
std::size_t size = wstr.size() * MB_CUR_MAX + 1; // worst-case narrow length plus terminating NUL
char *str = new char[size];
std::string temp;
std::wcstombs(str, wstr.c_str(), size);
temp = str;
delete[] str;
return temp;
}
int main()
{
std::wstring wstr = L"abcd";
std::string str = wstring_from_bytes(wstr);
}
Here is a demo.
This is in reference to the most up-voted answer but I don't have enough "reputation" to just comment directly on the answer.
The name of the function in the solution "wstring_from_bytes" implies it is doing what the original poster wants, which is to get a wstring given a string, but the function is actually doing the opposite of what the original poster asked for and would more accurately be named "bytes_from_wstring".
To convert from string to wstring, the wstring_from_bytes function should use mbstowcs not wcstombs
#define _CRT_SECURE_NO_WARNINGS
#include <iostream>
#include <cstdlib>
#include <string>
std::wstring wstring_from_bytes(std::string const& str)
{
size_t requiredSize = 0;
std::wstring answer;
wchar_t *pWTempString = NULL;
/*
* Call the conversion function without the output buffer to get the required size
* - Add one to leave room for the NULL terminator
*/
requiredSize = mbstowcs(NULL, str.c_str(), 0) + 1;
/* Allocate the output string (Add one to leave room for the NULL terminator) */
pWTempString = (wchar_t *)malloc( requiredSize * sizeof( wchar_t ));
if (pWTempString == NULL)
{
printf("Memory allocation failure.\n");
}
else
{
// Call the conversion function with the output buffer
size_t size = mbstowcs( pWTempString, str.c_str(), requiredSize);
if (size == (size_t) (-1))
{
printf("Couldn't convert string\n");
}
else
{
answer = pWTempString;
}
}
if (pWTempString != NULL)
{
free(pWTempString); // allocated with malloc, so release with free
}
return answer;
}
int main()
{
std::string str = "abcd";
std::wstring wstr = wstring_from_bytes(str);
}
Regardless, this is much more easily done in newer versions of the standard library (C++ 11 and newer)
#include <locale>
#include <codecvt>
#include <string>
std::wstring_convert<std::codecvt_utf8_utf16<wchar_t>> converter;
std::wstring wide = converter.from_bytes(narrow_utf8_source_string);
printf-style format specifiers are not understood by the std::wstring constructors, so they cannot be used to build a string this way.
If the string may only contain single-byte characters, then the range constructor is sufficient.
std::string narrower( "hello" );
std::wstring wider( narrower.begin(), narrower.end() );
The problem is that we usually use wstring when wide characters are applicable (hence the w), which are represented in std::string by multibyte sequences. Doing this will cause each byte of a multibyte sequence to be translated to an incorrect wide character.
Moreover, converting a multibyte sequence requires knowing its encoding. This information is encapsulated by neither std::string nor std::wstring. C++11 allows you to specify an encoding and translate using std::wstring_convert, but I'm not sure how widely supported it is yet. See 0x....'s excellent answer.
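As a rough illustration of that pitfall, a small sketch (assuming a compiler that still ships <codecvt>; the sample text is invented for the example):
#include <codecvt>
#include <locale>
#include <string>

int main()
{
    const std::string utf8 = u8"gr\u00fc\u00df dich";               // multibyte (UTF-8) data
    std::wstring_convert<std::codecvt_utf8_utf16<wchar_t>> conv;
    std::wstring decoded = conv.from_bytes(utf8);                   // decodes the multibyte sequence
    std::wstring mangled(utf8.begin(), utf8.end());                 // widens each byte blindly
    // decoded holds the intended text; mangled does not, because each UTF-8
    // byte was widened on its own.
    return decoded == mangled ? 1 : 0;
}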
The converter mentioned for C++11 and above has been deprecated for this specific conversion in C++17, and the deprecation notice suggests using the MultiByteToWideChar function instead.
The compiler diagnostic (C4996) mentions defining _SILENCE_CXX17_CODECVT_HEADER_DEPRECATION_WARNING if you want to keep using it.
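A minimal sketch of silencing that diagnostic under MSVC (the macro must be defined before the header is included; the helper function name is my own):
#define _SILENCE_CXX17_CODECVT_HEADER_DEPRECATION_WARNING
#include <codecvt>
#include <locale>
#include <string>

std::wstring widen_utf8(const std::string& narrow_utf8_source_string)
{
    std::wstring_convert<std::codecvt_utf8_utf16<wchar_t>> converter;
    return converter.from_bytes(narrow_utf8_source_string);
}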
wstring temp = L"";
for (auto c : DATA_DIR)
temp.push_back(c);
I found this function. Could not find any predefined method to do this.
std::wstring s2ws(const std::string& s)
{
int len;
int slength = (int)s.length() + 1;
len = MultiByteToWideChar(CP_ACP, 0, s.c_str(), slength, 0, 0);
wchar_t* buf = new wchar_t[len];
MultiByteToWideChar(CP_ACP, 0, s.c_str(), slength, buf, len);
std::wstring r(buf);
delete[] buf;
return r;
}
std::wstring stemp = s2ws(myString);

C++ TCHAR array to wstring not working in VS2010

I would like to convert a TCHAR array to a wstring.
TCHAR szFileName[MAX_PATH+1];
#ifdef _DEBUG
std::string str="m:\\compiled\\data.dat";
TCHAR *param=new TCHAR[str.size()+1];
szFileName[str.size()]=0;
std::copy(str.begin(),str.end(),szFileName);
#else
//Retrieve the path to the data.dat in the same dir as our data.dll is located
GetModuleFileName(_Module.m_hInst, szFileName, MAX_PATH+1);
StrCpy(PathFindFileName(szFileName), _T("data.dat"));
#endif
wstring sPath(T2W(szFileName));
I need to pass szFileName to a function that expects
const WCHAR *
For completeness I am stating the void that I need to pass szFileName to:
HRESULT CEngObj::MapFile( const WCHAR * pszTokenVal, // Value that contains file path
HANDLE * phMapping, // Pointer to file mapping handle
void ** ppvData ) // Pointer to the data
However, T2W does not work for me. The compiler says that "_lpa" is not defined, and I don't know where to go on from here. I have tried other conversion methods that I found stated on the net, but they did not work either.
There are functions like
mbstowcs_s()
that convert from char* to wchar_t*.
#include <iostream>
#include <stdlib.h>
#include <string.h>
using namespace std;

int main()
{
    const char *orig = "Hello, World!";
    cout << orig << " (char *)" << endl;
    // Convert to a wchar_t*
    size_t origsize = strlen(orig) + 1;
    const size_t newsize = 100;
    size_t convertedChars = 0;
    wchar_t wcstring[newsize];
    mbstowcs_s(&convertedChars, wcstring, origsize, orig, _TRUNCATE);
    wcscat_s(wcstring, L" (wchar_t *)");
    wcout << wcstring << endl;
    return 0;
}
Look here for an article and here for MSDN.
The definition of TCHAR differs depending on if certain preprocessor macros are defined or not. See e.g. this article for the possible combinations.
This means that TCHAR may already be a wchar_t.
You can use the _UNICODE macro to check if you need to convert the string. If you do, then you can use mbstowcs to do the conversion:
std::wstring str;
#ifdef _UNICODE
// No need to convert the string
str = your_tchar_string;
#else
// Need to convert the string
// First get the length needed
int length = mbstowcs(nullptr, your_tchar_string, 0);
// Allocate a temporary string
wchar_t* tmpstr = new wchar_t[length + 1];
// Do the actual conversion
mbstowcs(tmpstr, your_tchar_string, length + 1);
str = tmpstr;
// Free the temporary string
delete[] tmpstr;
#endif

Convert WCHAR[260] to std::string

I have gotten a WCHAR[MAX_PATH] from (PROCESSENTRY32) pe32.szExeFile on Windows. The following do not work:
std::string s;
s = pe32.szExeFile; // compile error. cast (const char*) doesnt work either
and
std::string s;
char DefChar = ' ';
WideCharToMultiByte(CP_ACP,0,pe32.szExeFile,-1, ch,260,&DefChar, NULL);
s = pe32.szExeFile;
For your first example you can just do:
std::wstring s(pe32.szExeFile);
and for second:
char DefChar = ' ';
WideCharToMultiByte(CP_ACP,0,pe32.szExeFile,-1, ch,260,&DefChar, NULL);
std::wstring s(pe32.szExeFile);
as std::wstring has a wchar_t* ctor
Your call to WideCharToMultiByte looks correct, provided ch is a
sufficiently large buffer. After that, however, you want to assign the
buffer (ch) to the string (or use it to construct a string), not
pe32.szExeFile.
There are convenient conversion classes from ATL; you may want to use some of them, e.g.:
std::string s( CW2A(pe32.szExeFile) );
Note however that a conversion from Unicode UTF-16 to ANSI can be lossy. If you want a non-lossy conversion, you could convert from UTF-16 to UTF-8, and store UTF-8 inside std::string.
If you don't want to use ATL, there are some convenient freely available C++ wrappers around raw Win32 WideCharToMultiByte to convert from UTF-16 to UTF-8 using STL strings.
#ifndef __STRINGCAST_H__
#define __STRINGCAST_H__
#include <windows.h>   // WideCharToMultiByte, CP_ACP
#include <vector>
#include <string>
#include <cstring>
#include <cwchar>
#include <cassert>
template<typename Td>
Td string_cast(const wchar_t* pSource, unsigned int codePage = CP_ACP);
#endif // __STRINGCAST_H__

// In a .cpp file, the specialization for std::string:
template<>
std::string string_cast<std::string>( const wchar_t* pSource, unsigned int codePage )
{
    assert(pSource != 0);
    size_t sourceLength = std::wcslen(pSource);
    if(sourceLength > 0)
    {
        int length = ::WideCharToMultiByte(codePage, 0, pSource, static_cast<int>(sourceLength), NULL, 0, NULL, NULL);
        if(length == 0)
            return std::string();
        std::vector<char> buffer( length );
        ::WideCharToMultiByte(codePage, 0, pSource, static_cast<int>(sourceLength), &buffer[0], length, NULL, NULL);
        return std::string(buffer.begin(), buffer.end());
    }
    else
        return std::string();
}
and use this template as follows:
PWSTR CurWorkDir = new WCHAR[length];
// ... fill CurWorkDir with a NUL-terminated wide path ...
std::string CurWorkLogFile = string_cast<std::string>(CurWorkDir);
....
delete [] CurWorkDir;
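Following the earlier note about lossy ANSI conversions, the same wrapper can also be asked for UTF-8 instead of the default code page; a small usage sketch (the sample wide string is invented):
std::wstring ws = L"Bücher";
// CP_UTF8 keeps every character representable; CP_ACP (the default) may be lossy.
std::string utf8 = string_cast<std::string>(ws.c_str(), CP_UTF8);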

c++ convert from LPCTSTR to const char *

I have this problem in MSVC2008 MFC. I'm using Unicode. I have a function prototype:
MyFunction(const char *)
and I'm calling it:
MyFunction(LPCTSTR wChar).
error:Cannot Convert Parameter 1 From 'LPCTSTR' to 'const char *'
How to resolve it?
Since you're using MFC, you can easily let CString do an automatic conversion from char to TCHAR:
MyFunction(CString(wChar));
This works whether your original string is char or wchar_t based.
Edit: It seems my original answer was opposite of what you asked for. Easily fixed:
MyFunction(CStringA(wChar));
CStringA is a version of CString that specifically contains char characters, not TCHAR. There's also a CStringW which holds wchar_t.
LPCTSTR is a pointer to const TCHAR and TCHAR is WCHAR and WCHAR is most probably wchar_t. Make your function take const wchar_t* if you can, or manually create a const char* buffer, copy the contents, and pass that.
When UNICODE is defined for an MSVC project LPCTSTR is defined as const wchar_t *; simply changing the function signature will not work because whatever code within the function is using the input parameter expects a const char *.
I'd suggest you leave the function signature alone; instead call a conversion function such as WideCharToMultiByte to convert the string before calling your function. If your function is called several times and it is too tedious to add the conversion before every call, create an overload MyFunction(const wchar_t *wChar). This one can then perform the conversion and call the original version with the result.
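A minimal sketch of that overload idea (the conversion details are my own; treat it as a starting point, not a drop-in):
#include <windows.h>
#include <vector>

void MyFunction(const char *s);   // existing function, left unchanged

// Overload that converts the wide string and forwards to the original version.
void MyFunction(const wchar_t *wChar)
{
    // First call asks for the required size, including the terminating NUL.
    int len = ::WideCharToMultiByte(CP_ACP, 0, wChar, -1, NULL, 0, NULL, NULL);
    std::vector<char> narrow(len > 0 ? len : 1, '\0');
    if (len > 0)
        ::WideCharToMultiByte(CP_ACP, 0, wChar, -1, &narrow[0], len, NULL, NULL);
    MyFunction(&narrow[0]);
}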
This may not be totally on topic, but I wrote a couple of generic helper functions for my proposed wmain framework, so perhaps they're useful for someone.
Make sure to call std::setlocale(LC_CTYPE, ""); in your main() before doing any stringy stuff!
#include <string>
#include <vector>
#include <iostream>
#include <cwchar>
#include <cerrno>
#include <clocale>
#include <cassert>
std::string get_locale_string(const std::wstring & s)
{
const wchar_t * cs = s.c_str();
const size_t wn = wcsrtombs(NULL, &cs, 0, NULL);
if (wn == size_t(-1))
{
std::cout << "Error in wcsrtombs(): " << errno << std::endl;
return "";
}
std::vector<char> buf(wn + 1);
const size_t wn_again = wcsrtombs(&buf[0], &cs, wn + 1, NULL);
if (wn_again == size_t(-1))
{
std::cout << "Error in wcsrtombs(): " << errno << std::endl;
return "";
}
assert(cs == NULL); // successful conversion
return std::string(&buf[0], wn);
}
std::wstring get_wstring(const std::string & s)
{
const char * cs = s.c_str();
const size_t wn = mbsrtowcs(NULL, &cs, 0, NULL);
if (wn == size_t(-1))
{
std::cout << "Error in mbsrtowcs(): " << errno << std::endl;
return L"";
}
std::vector<wchar_t> buf(wn + 1);
const size_t wn_again = mbsrtowcs(&buf[0], &cs, wn + 1, NULL);
if (wn_again == size_t(-1))
{
std::cout << "Error in mbsrtowcs(): " << errno << std::endl;
return L"";
}
assert(cs == NULL); // successful conversion
return std::wstring(&buf[0], wn);
}
You could provide "dummy" overloads:
inline std::string get_locale_string(const std::string & s) { return s; }
inline std::wstring get_wstring(const std::wstring & s) { return s; }
Now, if you have an LPCTSTR x, you can always call get_locale_string(x).c_str() to get a char-string.
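For instance, a hypothetical helper that accepts an LPCTSTR in either build configuration (assumes <windows.h> or <tchar.h> for LPCTSTR; the function name is made up):
#include <cstdio>

void print_path(LPCTSTR x)
{
    // Resolves to the std::wstring overload in a Unicode build and to the
    // "dummy" std::string overload in an ANSI build.
    const std::string narrow = get_locale_string(x);
    std::printf("%s\n", narrow.c_str());
}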
If you're curious, here's the rest of the framework:
#include <vector>
std::vector<std::wstring> parse_args_from_char_to_wchar(int argc, char const * const argv[])
{
assert(argc > 0);
std::vector<std::wstring> args;
args.reserve(argc);
for (int i = 0; i < argc; ++i)
{
const std::wstring arg = get_wstring(argv[i]);
if (!arg.empty()) args.push_back(std::move(arg));
}
return args;
}
Now the main() -- your new entry point is always int wmain(const std::vector<std::wstring> args):
int wmain( std::vector<std::wstring> args ); // you provide this
#ifdef WIN32
#include <windows.h>
extern "C" int main()
{
std::setlocale(LC_CTYPE, "");
int argc;
wchar_t * const * const argv = ::CommandLineToArgvW(::GetCommandLineW(), &argc);
return wmain(std::vector<std::wstring>(argv, argv + argc));
}
#else // WIN32
extern "C" int main(int argc, char *argv[])
{
LOCALE = std::setlocale(LC_CTYPE, "");
if (LOCALE == NULL)
{
LOCALE = std::setlocale(LC_CTYPE, "en_US.utf8");
}
if (LOCALE == NULL)
{
std::cout << "Failed to set any reasonable locale; not parsing command line arguments." << std::endl;
return wmain(std::vector<std::wstring>());
}
std::cout << "Locale set to " << LOCALE << ". Your character type has "
<< 8 * sizeof(std::wstring::value_type) << " bits." << std::endl;
return wmain(parse_args_from_char_to_wchar(argc, argv));
}
#endif
In this example I convert an LPCTSTR to a const char pointer and a char pointer. For this conversion you need to include windows.h and atlstr.h. I hope this helps.
// Required inclusions
#include <windows.h>
#include <atlstr.h>
// Code
LPCTSTR fileName = L"test.txt";
CStringA stringA(fileName);
const char* constCharP = stringA;
char* charP = const_cast<char*>(constCharP);