CString output;
const WCHAR* wc = L"Hello World";
if (wc != NULL)
{
output.Append(wc);
}
printf("output: %s\n", output.GetBuffer(0));
You can also try this:
#include <comdef.h> // you will need this
const WCHAR* wc = L"Hello World" ;
_bstr_t b(wc);
const char* c = b;
printf("Output: %s\n", c);
_bstr_t implements the following conversion operators, which I find quite handy:
operator const wchar_t*( ) const throw( );
operator wchar_t*( ) const throw( );
operator const char*( ) const;
operator char*( ) const;
EDIT: clarification regarding the answer comments: the line const char* c = b; results in a narrow-character copy of the string being created and managed by the _bstr_t instance, which releases it when it is destroyed. The operator just returns a pointer to this copy, so there is no need to copy the string manually. Also, in the question, CString::GetBuffer returns LPTSTR (i.e. TCHAR*), not LPCTSTR (i.e. const TCHAR*).
Another option is to use conversion macros:
USES_CONVERSION;
const WCHAR* wc = L"Hello World" ;
const char* c = W2A(wc);
The problem with this approach is that the memory for the converted string is allocated on the stack, so the length of the string is limited. However, this family of conversion macros lets you select the code page to be used for the conversion, which is often needed if the wide string contains non-ANSI characters.
You can use sprintf for this purpose:
char output[256];
const WCHAR* wc = L"Hello World";
sprintf(output, "%ws", wc); // %ws is a Microsoft extension; %ls is the portable form
My code for Linux
// Debian GNU/Linux 8 "Jessie" (amd64)
#include <locale.h>
#include <stdlib.h>
#include <stdio.h>
// Use wcstombs(3) to convert Unicode-string (wchar_t *) to UTF-8 (char *)
// http://man7.org/linux/man-pages/man3/wcstombs.3.html
int f(const wchar_t *wcs) {
setlocale(LC_ALL,"ru_RU.UTF-8");
printf("Sizeof wchar_t: %zu\n", sizeof(wchar_t));
// on Windows, UTF-16 is the internal Unicode encoding (UCS-2 before WinXP)
// on Linux, UCS-4 is the internal Unicode encoding
for (int i = 0; wcs[i] != 0; i++) printf("%2d %08X\n",i,(unsigned)wcs[i]);
char s[256];
size_t len = wcstombs(s,wcs,sizeof(s));
if (len != (size_t)-1) {
s[len] = '\0';
printf("mbs: %s\n",s);
for (size_t i = 0; i < len; i++)
printf("%2zu %02X\n",i,(unsigned char)s[i]);
printf("Size of mbs, in bytes: %zu\n",len);
return 0;
}
else return -1;
}
int main() {
f(L"Привет"); // 6 symbols
return 0;
}
How to build
#!/bin/sh
NAME=`basename $0 .sh`
CC=/usr/bin/g++-4.9
INCS="-I."
LIBS="-L."
$CC ${NAME}.c -o _${NAME} $INCS $LIBS
Output
$ ./_test
Sizeof wchar_t: 4
0 0000041F
1 00000440
2 00000438
3 00000432
4 00000435
5 00000442
mbs: Привет
0 D0
1 9F
2 D1
3 80
4 D0
5 B8
6 D0
7 B2
8 D0
9 B5
10 D1
11 82
Size of mbs, in bytes: 12
It's quite easy, because CString is just a typedef for CStringT, and you also have access to CStringA and CStringW (you should read the documentation about the differences).
CStringW myString = L"Hello World";
CString myConvertedString = myString;
You could do this, or you could do something cleaner:
std::wcout << L"output: " << output.GetString() << std::endl;
You can use the std::wcsrtombs function.
Here is a C++17 overload set for conversion:
#include <iostream> // not required for the conversion function
// required for conversion
#include <cuchar>
#include <cwchar>
#include <stdexcept>
#include <string>
#include <string_view> // for std::wstring_view overload
std::string to_string(wchar_t const* wcstr){
auto s = std::mbstate_t();
auto const target_char_count = std::wcsrtombs(nullptr, &wcstr, 0, &s);
if(target_char_count == static_cast<std::size_t>(-1)){
throw std::logic_error("Illegal byte sequence");
}
// +1 because std::string adds a null terminator which isn't part of size
auto str = std::string(target_char_count, '\0');
std::wcsrtombs(str.data(), &wcstr, str.size() + 1, &s);
return str;
}
std::string to_string(std::wstring const& wstr){
return to_string(wstr.c_str());
}
std::string to_string(std::wstring_view const& view){
// wstring because wstring_view is not required to be null-terminated!
return to_string(std::wstring(view));
}
int main(){
using namespace std::literals;
std::cout
<< to_string(L"wchar_t const*") << "\n"
<< to_string(L"std::wstring"s) << "\n"
<< to_string(L"std::wstring_view"sv) << "\n";
}
If you are on pre-C++17, you should urgently update your compiler! ;-)
If this is really not possible, here is a C++11 version:
#include <iostream> // not required for the conversion function
// required for conversion
#include <cwchar>
#include <stdexcept>
#include <string>
std::string to_string(wchar_t const* wcstr){
auto s = std::mbstate_t();
auto const target_char_count = std::wcsrtombs(nullptr, &wcstr, 0, &s);
if(target_char_count == static_cast<std::size_t>(-1)){
throw std::logic_error("Illegal byte sequence");
}
// +1 because std::string adds a null terminator which isn't part of size
auto str = std::string(target_char_count, '\0');
std::wcsrtombs(const_cast<char*>(str.data()), &wcstr, str.size() + 1, &s);
return str;
}
std::string to_string(std::wstring const& wstr){
return to_string(wstr.c_str());
}
int main(){
std::cout
<< to_string(L"wchar_t const*") << "\n"
<< to_string(std::wstring(L"std::wstring")) << "\n";
}
You can use sprintf for this purpose, as @l0pan mentions (but I used %ls instead of %ws):
char output[256];
const WCHAR* wc = L"Hello World";
sprintf(output, "%ws", wc); // did not work for me (Windows, C++ Builder)
sprintf(output, "%ls", wc); // works
Related
I want to convert wstring to u16string in C++.
I can convert wstring to string, or the reverse. But I don't know how to convert to u16string.
u16string CTextConverter::convertWstring2U16(wstring str)
{
int iSize;
u16string szDest[256] = {};
memset(szDest, 0, 256);
iSize = WideCharToMultiByte(CP_UTF8, NULL, str.c_str(), -1, NULL, 0,0,0);
WideCharToMultiByte(CP_UTF8, NULL, str.c_str(), -1, szDest, iSize,0,0);
u16string s16 = szDest;
return s16;
}
The error is in the szDest argument of WideCharToMultiByte(CP_UTF8, NULL, str.c_str(), -1, szDest, iSize, 0, 0);, because a u16string cannot be used as an LPSTR.
How can I fix this code?
For a platform-independent solution see this answer.
If you need a solution only for the Windows platform, the following code will be sufficient:
std::wstring wstr( L"foo" );
std::u16string u16str( wstr.begin(), wstr.end() );
On the Windows platform, a std::wstring is interchangeable with std::u16string because sizeof(wstring::value_type) == sizeof(u16string::value_type) and both are UTF-16 (little endian) encoded.
wstring::value_type = wchar_t
u16string::value_type = char16_t
The only difference is that wchar_t is signed, whereas char16_t is unsigned. So you only have to do sign conversion, which can be performed by using the u16string constructor that takes an iterator pair as arguments. This constructor will implicitly convert wchar_t to char16_t.
Full example console application:
#include <windows.h>
#include <string>
int main()
{
static_assert( sizeof(std::wstring::value_type) == sizeof(std::u16string::value_type),
"std::wstring and std::u16string are expected to have the same character size" );
std::wstring wstr( L"foo" );
std::u16string u16str( wstr.begin(), wstr.end() );
// The u16string constructor performs an implicit conversion like:
wchar_t wch = L'A';
char16_t ch16 = wch;
// Need to reinterpret_cast because char16_t const* is not implicitly convertible
// to LPCWSTR (aka wchar_t const*).
::MessageBoxW( 0, reinterpret_cast<LPCWSTR>( u16str.c_str() ), L"test", 0 );
return 0;
}
Update
I had thought the standard version did not work, but in fact this was simply due to bugs in the Visual C++ and libstdc++ 3.4.21 runtime libraries. It does work with clang++ -std=c++14 -stdlib=libc++. Here is a version that tests whether the standard method works on your compiler:
#include <codecvt>
#include <cstdlib>
#include <cstring>
#include <cwctype>
#include <iostream>
#include <locale>
#include <clocale>
#include <vector>
using std::cout;
using std::endl;
using std::exit;
using std::memcmp;
using std::size_t;
using std::wcout;
#if _WIN32 || _WIN64
// Windows needs a little non-standard magic for this to work.
#include <io.h>
#include <fcntl.h>
#include <locale.h>
#endif
void init_locale(void)
// Does magic so that wcout can work.
{
#if _WIN32 || _WIN64
// Windows needs a little non-standard magic.
constexpr char cp_utf16le[] = ".1200";
setlocale( LC_ALL, cp_utf16le );
_setmode( _fileno(stdout), _O_U16TEXT );
#else
// The correct locale name may vary by OS, e.g., "en_US.utf8".
constexpr char locale_name[] = "";
std::locale::global(std::locale(locale_name));
std::wcout.imbue(std::locale());
#endif
}
int main(void)
{
constexpr char16_t msg_utf16[] = u"¡Hola, mundo! \U0001F600"; // Shouldn't assume endianness.
constexpr wchar_t msg_w[] = L"¡Hola, mundo! \U0001F600";
constexpr char32_t msg_utf32[] = U"¡Hola, mundo! \U0001F600";
constexpr char msg_utf8[] = u8"¡Hola, mundo! \U0001F600";
init_locale();
const std::codecvt_utf16<wchar_t, 0x1FFFF, std::little_endian> converter_w;
const size_t max_len = sizeof(msg_utf16);
std::vector<char> out(max_len);
std::mbstate_t state{}; // value-initialize: codecvt::out needs a valid initial state
const wchar_t* from_w = nullptr;
char* to_next = nullptr;
converter_w.out( state, msg_w, msg_w+sizeof(msg_w)/sizeof(wchar_t), from_w, out.data(), out.data() + out.size(), to_next );
if (memcmp( msg_utf8, out.data(), sizeof(msg_utf8) ) == 0 ) {
wcout << L"std::codecvt_utf16<wchar_t> converts to UTF-8, not UTF-16!" << endl;
} else if ( memcmp( msg_utf16, out.data(), max_len ) != 0 ) {
wcout << L"std::codecvt_utf16<wchar_t> conversion not equal!" << endl;
} else {
wcout << L"std::codecvt_utf16<wchar_t> conversion is correct." << endl;
}
out.clear();
out.resize(max_len);
const std::codecvt_utf16<char32_t, 0x1FFFF, std::little_endian> converter_u32;
const char32_t* from_u32 = nullptr;
converter_u32.out( state, msg_utf32, msg_utf32+sizeof(msg_utf32)/sizeof(char32_t), from_u32, out.data(), out.data() + out.size(), to_next );
if ( memcmp( msg_utf16, out.data(), max_len ) != 0 ) {
wcout << L"std::codecvt_utf16<char32_t> conversion not equal!" << endl;
} else {
wcout << L"std::codecvt_utf16<char32_t> conversion is correct." << endl;
}
wcout << msg_w << endl;
return EXIT_SUCCESS;
}
Previous
A bit late to the game, but here's a version that additionally checks whether wchar_t is 32 bits (as it is on Linux) and, if so, performs surrogate-pair conversion. I recommend saving this source as UTF-8 with a BOM.
#include <cassert>
#include <cwctype>
#include <cstdlib>
#include <iomanip>
#include <iostream>
#include <locale>
#include <string>
#if _WIN32 || _WIN64
// Windows needs a little non-standard magic for this to work.
#include <io.h>
#include <fcntl.h>
#include <locale.h>
#endif
using std::size_t;
void init_locale(void)
// Does magic so that wcout can work.
{
#if _WIN32 || _WIN64
// Windows needs a little non-standard magic.
constexpr char cp_utf16le[] = ".1200";
setlocale( LC_ALL, cp_utf16le );
_setmode( _fileno(stdout), _O_U16TEXT );
#else
// The correct locale name may vary by OS, e.g., "en_US.utf8".
constexpr char locale_name[] = "";
std::locale::global(std::locale(locale_name));
std::wcout.imbue(std::locale());
#endif
}
std::u16string make_u16string( const std::wstring& ws )
/* Creates a UTF-16 string from a wide-character string. Any wide characters
* outside the allowed range of UTF-16 are mapped to the sentinel value U+FFFD,
* per the Unicode documentation. (http://www.unicode.org/faq/private_use.html
* retrieved 12 March 2017.) Unpaired surrogates in ws are also converted to
* sentinel values. Noncharacters, however, are left intact. As a fallback,
* if wide characters are the same size as char16_t, this does a more trivial
* construction using that implicit conversion.
*/
{
/* We assume that, if this test passes, a wide-character string is already
* UTF-16, or at least converts to it implicitly without needing surrogate
* pairs.
*/
if ( sizeof(wchar_t) == sizeof(char16_t) ) {
return std::u16string( ws.begin(), ws.end() );
} else {
/* The conversion from UTF-32 to UTF-16 might possibly require surrogates.
* A surrogate pair suffices to represent all wide characters, because all
* characters outside the range will be mapped to the sentinel value
* U+FFFD. Add one character for the terminating NUL.
*/
const size_t max_len = 2 * ws.length() + 1;
// Our temporary UTF-16 string.
std::u16string result;
result.reserve(max_len);
for ( const wchar_t& wc : ws ) {
const std::wint_t chr = wc;
if ( chr < 0 || chr > 0x10FFFF || (chr >= 0xD800 && chr <= 0xDFFF) ) {
// Invalid code point. Replace with sentinel, per Unicode standard:
constexpr char16_t sentinel = u'\uFFFD';
result.push_back(sentinel);
} else if ( chr < 0x10000UL ) { // In the BMP.
result.push_back(static_cast<char16_t>(wc));
} else {
const char16_t leading = static_cast<char16_t>(
((chr-0x10000UL) / 0x400U) + 0xD800U );
const char16_t trailing = static_cast<char16_t>(
((chr-0x10000UL) % 0x400U) + 0xDC00U );
result.append({leading, trailing});
} // end if
} // end for
/* The returned string is shrunken to fit, which might not be the Right
* Thing if there is more to be added to the string.
*/
result.shrink_to_fit();
// We depend here on the compiler to optimize the move constructor.
return result;
} // end if
// Not reached.
}
int main(void)
{
static const std::wstring wtest(L"☪☮∈✡℩☯✝ \U0001F644");
static const std::u16string u16test(u"☪☮∈✡℩☯✝ \U0001F644");
const std::u16string converted = make_u16string(wtest);
init_locale();
std::wcout << L"sizeof(wchar_t) == " << sizeof(wchar_t) << L".\n";
for( size_t i = 0; i <= u16test.length(); ++i ) {
if ( u16test[i] != converted[i] ) {
std::wcout << std::hex << std::showbase
<< std::right << std::setfill(L'0')
<< std::setw(4) << (unsigned)converted[i] << L" ≠ "
<< std::setw(4) << (unsigned)u16test[i] << L" at "
<< i << L'.' << std::endl;
return EXIT_FAILURE;
} // end if
} // end for
std::wcout << wtest << std::endl;
return EXIT_SUCCESS;
}
Footnote
Since someone asked: The reason I suggest UTF-8 with BOM is that some compilers, including MSVC 2015, will assume a source file is encoded according to the current code page unless there is a BOM or you specify an encoding on the command line. No encoding works on all toolchains, unfortunately, but every tool I’ve used that’s modern enough to support C++14 also understands the BOM.
- To convert CString to std::wstring and string
string CString2string(CString str)
{
int bufLen = WideCharToMultiByte(CP_UTF8, 0, (LPCTSTR)str, -1, NULL, 0, NULL,NULL);
char *buf = new char[bufLen];
WideCharToMultiByte(CP_UTF8, 0, (LPCTSTR)str, -1, buf, bufLen, NULL, NULL);
string sRet(buf);
delete[] buf;
return sRet;
}
CString strFileName = _T("test.txt");
wstring wFileName(strFileName.GetBuffer());
strFileName.ReleaseBuffer();
string sFileName = CString2string(strFileName);
- To convert string to CString
CString string2CString(string s)
{
int bufLen = MultiByteToWideChar(CP_ACP, 0, s.c_str(), -1, NULL, 0);
WCHAR *buf = new WCHAR[bufLen];
MultiByteToWideChar(CP_ACP, 0, s.c_str(), -1, buf, bufLen);
CString strRet(buf);
delete[] buf;
return strRet;
}
string sFileName = "test.txt";
CString strFileName = string2CString(sFileName);
I'm new to C++ and I have this issue. I have a string called DATA_DIR that I need to format into a wstring.
string str = DATA_DIR;
std::wstring temp(L"%s",str);
Visual Studio tells me that there is no instance of constructor that matches with the argument list. Clearly, I'm doing something wrong.
I found this example online
std::wstring someText( L"hello world!" );
which apparently works (no compile errors). My question is, how do I get the string value stored in DATA_DIR into the wstring constructor as opposed to something arbitrary like "hello world"?
Here is an implementation using wcstombs (Updated):
#include <iostream>
#include <cstdlib>
#include <string>
std::string wstring_from_bytes(std::wstring const& wstr)
{
// worst case, each wide character converts to MB_CUR_MAX bytes, plus the terminator
std::size_t size = wstr.size() * MB_CUR_MAX + 1;
char *str = new char[size];
std::string temp;
std::wcstombs(str, wstr.c_str(), size);
temp = str;
delete[] str;
return temp;
}
int main()
{
std::wstring wstr = L"abcd";
std::string str = wstring_from_bytes(wstr);
}
This is in reference to the most up-voted answer but I don't have enough "reputation" to just comment directly on the answer.
The name of the function in the solution, "wstring_from_bytes", implies that it does what the original poster wants (produce a wstring given a string), but the function actually does the opposite, and would more accurately be named "bytes_from_wstring".
To convert from string to wstring, the wstring_from_bytes function should use mbstowcs, not wcstombs:
#define _CRT_SECURE_NO_WARNINGS
#include <iostream>
#include <cstdlib>
#include <string>
std::wstring wstring_from_bytes(std::string const& str)
{
size_t requiredSize = 0;
std::wstring answer;
wchar_t *pWTempString = NULL;
/*
* Call the conversion function without the output buffer to get the required size
* - Add one to leave room for the NULL terminator
*/
requiredSize = mbstowcs(NULL, str.c_str(), 0) + 1;
/* Allocate the output string (Add one to leave room for the NULL terminator) */
pWTempString = (wchar_t *)malloc( requiredSize * sizeof( wchar_t ));
if (pWTempString == NULL)
{
printf("Memory allocation failure.\n");
}
else
{
// Call the conversion function with the output buffer
size_t size = mbstowcs( pWTempString, str.c_str(), requiredSize);
if (size == (size_t) (-1))
{
printf("Couldn't convert string\n");
}
else
{
answer = pWTempString;
}
}
if (pWTempString != NULL)
{
free(pWTempString); // allocated with malloc, so free() rather than delete[]
}
return answer;
}
int main()
{
std::string str = "abcd";
std::wstring wstr = wstring_from_bytes(str);
}
Regardless, this is much more easily done in newer versions of the standard library (C++11 and newer):
#include <locale>
#include <codecvt>
#include <string>
std::wstring_convert<std::codecvt_utf8_utf16<wchar_t>> converter;
std::wstring wide = converter.from_bytes(narrow_utf8_source_string);
printf-style format specifiers are not understood by the std::wstring constructors, so they cannot be used to construct a string this way.
If the string may only contain single-byte characters, then the range constructor is sufficient.
std::string narrower( "hello" );
std::wstring wider( narrower.begin(), narrower.end() );
The problem is that we usually use wstring when wide characters are applicable (hence the w), which are represented in std::string by multibyte sequences. Doing this will cause each byte of a multibyte sequence to be translated into a separate, incorrect wide character.
Moreover, converting a multibyte sequence requires knowing its encoding. This information is not encapsulated by std::string or std::wstring. C++11 lets you specify an encoding and translate using std::wstring_convert, but I'm not sure how widely supported it is as yet. See 0x....'s excellent answer.
The std::wstring_convert facility mentioned for C++11 and above was deprecated in C++17, and Microsoft's documentation suggests using the MultiByteToWideChar function instead.
The compiler warning (C4996) mentions defining _SILENCE_CXX17_CODECVT_HEADER_DEPRECATION_WARNING.
wstring temp = L"";
for (auto c : DATA_DIR)
temp.push_back(c); // note: correct only for single-byte characters
I found this function. Could not find any predefined method to do this.
std::wstring s2ws(const std::string& s)
{
int len;
int slength = (int)s.length() + 1;
len = MultiByteToWideChar(CP_ACP, 0, s.c_str(), slength, 0, 0);
wchar_t* buf = new wchar_t[len];
MultiByteToWideChar(CP_ACP, 0, s.c_str(), slength, buf, len);
std::wstring r(buf);
delete[] buf;
return r;
}
std::wstring stemp = s2ws(myString);
I have this problem in MSVC2008 MFC. I'm using Unicode. I have a function prototype:
MyFunction(const char *)
and I'm calling it:
MyFunction(LPCTSTR wChar)
error: Cannot Convert Parameter 1 From 'LPCTSTR' to 'const char *'
How do I resolve it?
Since you're using MFC, you can easily let CString do an automatic conversion from char to TCHAR:
MyFunction(CString(wChar));
This works whether your original string is char or wchar_t based.
Edit: It seems my original answer was opposite of what you asked for. Easily fixed:
MyFunction(CStringA(wChar));
CStringA is a version of CString that specifically contains char characters, not TCHAR. There's also a CStringW which holds wchar_t.
LPCTSTR is a pointer to const TCHAR, and TCHAR is WCHAR, and WCHAR is most probably wchar_t. Make your function take const wchar_t* if you can, or manually create a char buffer, copy the converted contents into it, and pass that.
When UNICODE is defined for an MSVC project LPCTSTR is defined as const wchar_t *; simply changing the function signature will not work because whatever code within the function is using the input parameter expects a const char *.
I'd suggest you leave the function signature alone; instead call a conversion function such as WideCharToMultiByte to convert the string before calling your function. If your function is called several times and it is too tedious to add the conversion before every call, create an overload MyFunction(const wchar_t *wChar). This one can then perform the conversion and call the original version with the result.
This may not be totally on topic, but I wrote a couple of generic helper functions for my proposed wmain framework, so perhaps they're useful for someone.
Make sure to call std::setlocale(LC_CTYPE, ""); in your main() before doing any stringy stuff!
#include <string>
#include <vector>
#include <clocale>
#include <cassert>
std::string get_locale_string(const std::wstring & s)
{
const wchar_t * cs = s.c_str();
const size_t wn = wcsrtombs(NULL, &cs, 0, NULL);
if (wn == size_t(-1))
{
std::cout << "Error in wcsrtombs(): " << errno << std::endl;
return "";
}
std::vector<char> buf(wn + 1);
const size_t wn_again = wcsrtombs(&buf[0], &cs, wn + 1, NULL);
if (wn_again == size_t(-1))
{
std::cout << "Error in wcsrtombs(): " << errno << std::endl;
return "";
}
assert(cs == NULL); // successful conversion
return std::string(&buf[0], wn);
}
std::wstring get_wstring(const std::string & s)
{
const char * cs = s.c_str();
const size_t wn = mbsrtowcs(NULL, &cs, 0, NULL);
if (wn == size_t(-1))
{
std::cout << "Error in mbsrtowcs(): " << errno << std::endl;
return L"";
}
std::vector<wchar_t> buf(wn + 1);
const size_t wn_again = mbsrtowcs(&buf[0], &cs, wn + 1, NULL);
if (wn_again == size_t(-1))
{
std::cout << "Error in mbsrtowcs(): " << errno << std::endl;
return L"";
}
assert(cs == NULL); // successful conversion
return std::wstring(&buf[0], wn);
}
You could provide "dummy" overloads:
inline std::string get_locale_string(const std::string & s) { return s; }
inline std::wstring get_wstring(const std::wstring & s) { return s; }
Now, if you have an LPCTSTR x, you can always call get_locale_string(x).c_str() to get a char-string.
If you're curious, here's the rest of the framework:
#include <vector>
std::vector<std::wstring> parse_args_from_char_to_wchar(int argc, char const * const argv[])
{
assert(argc > 0);
std::vector<std::wstring> args;
args.reserve(argc);
for (int i = 0; i < argc; ++i)
{
const std::wstring arg = get_wstring(argv[i]);
if (!arg.empty()) args.push_back(std::move(arg));
}
return args;
}
Now the main() -- your new entry point is always int wmain(const std::vector<std::wstring> args):
#ifdef WIN32
#include <windows.h>
extern "C" int main()
{
std::setlocale(LC_CTYPE, "");
int argc;
wchar_t * const * const argv = ::CommandLineToArgvW(::GetCommandLineW(), &argc);
return wmain(std::vector<std::wstring>(argv, argv + argc));
}
#else // WIN32
extern "C" int main(int argc, char *argv[])
{
const char * LOCALE = std::setlocale(LC_CTYPE, "");
if (LOCALE == NULL)
{
LOCALE = std::setlocale(LC_CTYPE, "en_US.utf8");
}
if (LOCALE == NULL)
{
std::cout << "Failed to set any reasonable locale; not parsing command line arguments." << std::endl;
return wmain(std::vector<std::wstring>());
}
std::cout << "Locale set to " << LOCALE << ". Your character type has "
<< 8 * sizeof(std::wstring::value_type) << " bits." << std::endl;
return wmain(parse_args_from_char_to_wchar(argc, argv));
}
#endif
In this example I convert an LPCTSTR to a const char pointer and a char pointer. For this conversion you need to include windows.h and atlstr.h. I hope this helps.
// Required inclusions
#include <windows.h>
#include <atlstr.h>
// Code
LPCTSTR fileName = L"test.txt";
CStringA stringA(fileName);
const char* constCharP = stringA;
char* charP = const_cast<char*>(constCharP);
I am using some cross-platform stuff called nutcracker to go between Windows and Linux; to make a long story short, it's limited in its support for wide-character strings. I have to take the code below and replace what swprintf is doing, and I have no idea how. My experience with low-level byte manipulation is poor. Can someone please help me with this?
Please keep in mind I can't go crazy and rewrite swprintf, but I need the basic functionality to format pwszString correctly from the data in pBuffer. This is C++ using the Microsoft VC6.0 compiler, but through CXX, so it's limited as well.
The wszSep is just a delimiter, either "" or "-", for readability when printing.
HRESULT BufferHelper::Buff2StrASCII(
/*[in]*/ const unsigned char * pBuffer,
/*[in]*/ int iSize,
/*[in]*/ LPWSTR wszSep,
/*[out]*/ LPWSTR* pwszString )
{
// Check args
if (! pwszString) return E_POINTER;
// Allocate memory
int iSep = (int)wcslen(wszSep);
*pwszString = new WCHAR [ (((iSize * ( 2 + iSep )) + 1 ) - iSep ) ];
if (! pwszString) return E_OUTOFMEMORY;
// Loop
int i = 0;
for (i=0; i< iSize; i++)
{
swprintf( (*pwszString)+(i*(2+iSep)), L"%02X%s", pBuffer[i], (i!=(iSize-1)) ? wszSep : L"" );
}
return S_OK;
}
This takes what's in pBuffer and encodes it into the wide buffer as ASCII hex. I use typedef const unsigned short* LPCWSTR; because that type does not exist in the nutcracker.
I can post more if you need to see more code.
Thanks.
It is a bit hard to understand exactly what you are looking for, so I've guessed.
As the tag was "C++", not "C" I have converted it to work in a more "C++" way. I don't have a linux box to try this on, but I think it will probably compile OK.
Your description of the input data sounded like UTF-16 wide characters, so I've used a std::wstring for the input buffer. If that is wrong, change it to a std::vector of unsigned chars and adjust the formatting statement accordingly.
#include <string>
#include <vector>
#include <cerrno>
#include <iostream>
#include <iomanip>
#include <sstream>
#if !defined(S_OK)
#define S_OK 0
#define E_OUTOFMEMORY ENOMEM
#endif
unsigned Buff2StrASCII(
const std::wstring &sIn,
const std::wstring &sSep,
std::wstring &sOut)
{
try
{
std::wostringstream os;
for (size_t i=0; i< sIn.size(); i++)
{
if (i)
os << sSep;
os << std::setw(4) << std::setfill(L'0') << std::hex
<< static_cast<unsigned>(sIn[i]);
}
sOut = os.str();
return S_OK;
}
catch (std::bad_alloc &)
{
return E_OUTOFMEMORY;
}
}
int main(int argc, char *argv[])
{
wchar_t szIn[] = L"The quick brown fox";
std::wstring sOut;
Buff2StrASCII(szIn, L" - ", sOut);
std::wcout << sOut << std::endl;
return 0;
}
Why would you use the WSTR types at all in nutcracker / the Linux build? Most Unix systems and Linux use UTF-8 for their filesystem representation, so in the non-Windows builds you can use sprintf and plain char*.
char cmd[40];
driver = FuncGetDrive(driver);
sprintf_s(cmd, "%c:\\test.exe", driver);
I cannot use cmd in
sei.lpFile = cmad;
so,
how to convert char array to wchar_t array ?
Just use this:
static wchar_t* charToWChar(const char* text)
{
const size_t size = strlen(text) + 1;
wchar_t* wText = new wchar_t[size];
mbstowcs(wText, text, size);
return wText;
}
Don't forget to call delete [] wCharPtr on the returned result when you're done; otherwise this is a memory leak waiting to happen if you keep calling it without clean-up. Or use a smart pointer, as the commenter below suggests.
Or use standard strings, as follows:
#include <cstdlib>
#include <cstring>
#include <string>
static std::wstring charToWString(const char* text)
{
const size_t size = std::strlen(text);
std::wstring wstr;
if (size > 0) {
wstr.resize(size);
std::mbstowcs(&wstr[0], text, size);
}
return wstr;
}
From MSDN:
#include <iostream>
#include <stdlib.h>
#include <string>
using namespace std;
int main()
{
const char *orig = "Hello, World!";
cout << orig << " (char *)" << endl;
// Convert to a wchar_t*
size_t origsize = strlen(orig) + 1;
const size_t newsize = 100;
size_t convertedChars = 0;
wchar_t wcstring[newsize];
mbstowcs_s(&convertedChars, wcstring, origsize, orig, _TRUNCATE);
wcscat_s(wcstring, L" (wchar_t *)");
wcout << wcstring << endl;
}
From your example, using swprintf_s would work:
wchar_t wcmd[40];
driver = FuncGetDrive(driver);
swprintf_s(wcmd, L"%C:\\test.exe", driver);
Note that the C in %C has to be uppercase, since driver is a plain char and not a wchar_t, and that swprintf_s takes a wide format string.
Passing your string to swprintf_s(wcmd, L"%S", cmd) should also work.