How to convert a char array to LPCTSTR - C++

I have a function like this, which takes a variable number of arguments, constructs
the string, and passes it to another function to print the log:
void logverbose( const char * format, ... )
{
char buffer[1024];
va_list args;
va_start (args, format);
vsnprintf (buffer, sizeof buffer, format, args); // bounded, unlike vsprintf
va_end (args);
LOGWriteEntry( "HERE I NEED TO PASS buffer AS LPCTSTR, SO HOW DO I CONVERT buffer TO LPCTSTR?" );
}
Instead of using buffer[1024], is there any other way, since the log can be much bigger or much smaller? All of this is in C++ code; please let me know if there is a better way to do this.

You can probably just pass it:
LOGWriteEntry (buffer);
If you are using ancient memory models with Windows, you might have to explicitly cast it:
LOGWriteEntry ((LPCTSTR) buffer);
Correction:
LPCTSTR is a Long Pointer to a Const TCHAR STRing. (I overlooked the TCHAR in my first answer.)
You'll have to use the MultiByteToWideChar function to copy buffer into another buffer and pass that to the function:
wchar_t buf2 [1024];
MultiByteToWideChar (CP_ACP, 0, buffer, -1, buf2, sizeof buf2 / sizeof buf2[0]); // the last argument is in wchar_t units, not bytes
LOGWriteEntry (buf2);

A good way to proceed might be one of these alternatives:
Design the logverbose function to use TCHAR rather than char; or
Find out if the logging API provides a char version of LOGWriteEntry, and use that alternative.
If no char version of LOGWriteEntry exists, extend that API by writing one. Perhaps it can be written as a cut-and-paste clone of LOGWriteEntry, with all TCHAR use replaced by char, and lower-level functions replaced by their ASCII equivalents. For example, if LOGWriteEntry happens to call the Windows API function ReportEvent, your LOGWriteEntryA version could call ReportEventA.
Really, in modern applications you should just forget about char and use wchar_t everywhere (compatible with Microsoft's WCHAR). Under Unicode builds, TCHAR becomes WCHAR. Even if you don't provide a translated version of your program (all UI elements and help text are in English), a program which uses wide characters can at least input, process, and output international text, so it is "halfway there".
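
If you take the first alternative, a minimal sketch might look like the following. This is not the poster's actual API: it assumes a Unicode build where LOGWriteEntry accepts an LPCWSTR, and it uses the MSVC-specific _vscwprintf to measure the output first, which also answers the question about the fixed buffer[1024]:
#include <windows.h>
#include <cstdarg>
#include <cstdio>
#include <vector>

extern void LOGWriteEntry( LPCWSTR entry ); // assumed signature under UNICODE

void logverbose( const wchar_t * format, ... )
{
    va_list args;
    va_start (args, format);
    int needed = _vscwprintf (format, args); // required length, excluding the terminator (MSVC CRT)
    va_end (args);
    if (needed < 0)
        return;
    std::vector<wchar_t> buffer (needed + 1); // sized to fit, no 1024 limit
    va_start (args, format);
    vswprintf_s (buffer.data(), buffer.size(), format, args);
    va_end (args);
    LOGWriteEntry (buffer.data()); // TCHAR is wchar_t under UNICODE, so this matches LPCTSTR
}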

Related

Convert ICU Unicode string to std::wstring (or wchar_t*)

Is there an ICU function to create a std::wstring from an icu::UnicodeString? I have been searching the ICU manual but haven't been able to find one.
(I know I can convert the UnicodeString to UTF-8 and then convert that to the platform-dependent wchar_t*, but I am looking for a single function on UnicodeString which can do this conversion.)
The C++ standard doesn't dictate any specific encoding for std::wstring. On Windows systems, wchar_t is 16-bit, and on Linux, macOS, and several other platforms, wchar_t is 32-bit. As far as C++'s std::wstring is concerned, it is just an arbitrary sequence of wchar_t in much the same way that std::string is just an arbitrary sequence of char.
It seems that icu::UnicodeString has no in-built way of creating a std::wstring, but if you really want to create a std::wstring anyway, you can use the C-based API u_strToWCS() like this:
#include <unicode/ustring.h> // for u_strToWCS
icu::UnicodeString ustr = /* get from somewhere */;
std::wstring wstr;
int32_t requiredSize;
UErrorCode error = U_ZERO_ERROR;
// pre-flight to obtain the size of string we need
u_strToWCS(nullptr, 0, &requiredSize, ustr.getBuffer(), ustr.length(), &error);
// the pre-flight call reports U_BUFFER_OVERFLOW_ERROR by design; reset it before the real call
error = U_ZERO_ERROR;
// resize accordingly (this will not include any terminating null character, but it doesn't need to either)
wstr.resize(requiredSize);
// copy the UnicodeString buffer to the std::wstring (wstr.data() is non-const since C++17; use &wstr[0] before that)
u_strToWCS(wstr.data(), wstr.size(), nullptr, ustr.getBuffer(), ustr.length(), &error);
Supposedly, u_strToWCS() will use the most efficient method for converting from UChar to wchar_t (if they are the same size, then it is presumably just a straightforward copy).

Check length char[] before converting to wstring()

I have an API function that takes a pointer to a char array. The calling code is out of my control. The array is dynamic, but it still needs some checking:
extern "C" int __stdcall calcW2(LPWSTR foo)
If somebody makes a call with
char foo[5000];
LPSTR lpfoo2 = foo;
calcW2(lpfoo2 );
I understand that I need to make some checks. I can test for nullptr, but I also want length checking, to verify that the char array has some validity. How is that best done, in the safest way, for a string of 0 to 2500 chars? Do I need to check for anything more?
if(foo != nullptr)
{
//Size checking
//size_t newsize = strlen(SerialNumber) + 1 not good?
std::wstring test(foo);
}
You missed one important point. The function signature says LPWSTR not LPSTR. This means that the function expects (or should expect) to receive wchar_t[] not char[]. See https://msdn.microsoft.com/en-us/library/cc230355.aspx.
I mean:
extern "C" int __stdcall calcW2(LPWSTR foo) <--- LP-W-STR
char foo[5000];
LPSTR lpfoo2 = foo; <--- LP-STR
calcW2(lpfoo2 ); <--- LP-STR passed into LP-W-STR ??
That should not compile; the argument types are wrong.
If you change the array to wchar_t[] and it starts to fail to compile, then most probably you have some _UNICODE #defines set wrong. In WINAPI and similar, many functions have dual definitions. When "UNICODE" flag is set, they take LPWSTR, but when the flag is cleared, the headers switch them to taking LPSTR. So if you see that it should be LPWSTR and you want it to be LPWSTR and it insists on being LPSTR, then you either messed up the function names, or UNICODE flag (or the header you have is simply incorrect).
char and wchar_t are different. Simplifying, char is "single-byte" and wchar_t on Windows is "two-byte". Both use '\0' as the end-of-string marker, but for wchar_t that terminator occupies two bytes. Also, in a wchar_t[], plain ASCII data isn't laid out like a|b|c|d|e|f; on a little-endian machine it is a|0|b|0|c|0|d|0|e|0|f|0, since each character takes two bytes. That's why strlen cannot work properly on 16-bit encoded data: it treats the first zero byte it finds as the end of the string. Having wchar_t data forcibly packed into a char[] is plainly wrong, or at least highly misleading and error-prone.
That's why you should use wcslen instead, which happens to take wchar_t* instead of char*.
This is an overall 'rule': for any function working on char (strlen, strcat, strcmp, ...) you should be able to find the corresponding wide function (wcslen, wcscat, wcscmp, ...). There may be underscores in some of the names; search the docs. Don't mix up char types. They are not just byte arrays: there is semantics attached to them, and usually when two types are named differently, there's a reason for it.
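
Putting the pieces together, here is a hedged sketch of the bounded check, assuming the caller really passes a null-terminated wchar_t buffer and using the 2500-character limit from the question (wcsnlen is available in the MSVC CRT and on POSIX):
#include <windows.h>
#include <wchar.h>
#include <string>

extern "C" int __stdcall calcW2(LPWSTR foo)
{
    if (foo == nullptr)
        return -1; // reject null input outright

    const size_t maxLen = 2500;
    // wcsnlen stops scanning after maxLen + 1 elements even when no
    // terminator is found, so the scan itself is bounded
    size_t len = wcsnlen(foo, maxLen + 1);
    if (len > maxLen)
        return -1; // too long, or unterminated within the allowed range

    std::wstring test(foo, len); // length-bounded constructor, no rescan
    // ... use test ...
    return 0;
}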

Difference between char* and wchar_t*

I am new to MFC. I am trying to write a simple MFC application, and I'm getting confused in some places. For example, SetWindowText has two APIs, SetWindowTextA and SetWindowTextW: one API takes char* and the other one accepts wchar_t*.
What is the use of char * and wchar_t *?
char is used for the so-called ANSI family of functions (typically the function name ends with A), commonly described as using the ASCII character set.
wchar_t is used for the newer, so-called Unicode (or Wide) family of functions (typically the function name ends with W), which use UTF-16. UTF-16 is very similar to UCS-2, but not quite the same: a character that doesn't fit in 2 bytes is encoded as a surrogate pair of two 16-bit code units, and this can be very confusing.
If you want to convert one to the other, it is not really a simple task. You will need to use something like MultiByteToWideChar, which requires knowing and providing the code page of the input ANSI string.
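For reference, a minimal sketch of that conversion using the usual two-pass size query; CP_ACP (the current ANSI code page) is an assumption here, so substitute whatever code page your input actually uses:
#include <windows.h>
#include <string>

std::wstring ansiToWide(const char* ansi)
{
    // first call asks for the required size in wchar_t units, including the null
    int len = MultiByteToWideChar(CP_ACP, 0, ansi, -1, nullptr, 0);
    if (len <= 0)
        return std::wstring();
    std::wstring wide(len, L'\0');
    MultiByteToWideChar(CP_ACP, 0, ansi, -1, &wide[0], len);
    wide.resize(len - 1); // drop the embedded null terminator
    return wide;
}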
On Windows, APIs that take char* use the current code page, whereas wchar_t* APIs use UTF-16. As a result, you should always use wchar_t on Windows. A recommended way to do this is:
// Be sure to define this BEFORE including <windows.h>
#define UNICODE 1
#include <windows.h>
When UNICODE is defined, APIs like SetWindowText will be aliased to SetWindowTextW and can therefore be used safely. Without UNICODE, SetWindowText will be aliased to SetWindowTextA and therefore cannot be used without first converting to the current code page.
However, there's no good reason to use wchar_t when you are not calling Windows APIs, since its portable functionality is not useful, and its useful functionality is not portable (wchar_t is UTF-16 only on Windows, on most other platforms it is UTF-32, what a total mess.)
SetWindowTextA takes char*, which is a pointer to ANSI strings.
SetWindowTextW takes wchar_t*, which is a pointer to "wide" strings (Unicode).
SetWindowText has been defined (#define) to either of these in header Windows.h based on the type of application you are building. If you are building a UNICODE build then your code will automatically use SetWindowTextW.
SetWindowTextA is there primarily to support legacy code, which needs to be built as SBCS (Single byte character set).
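To make the aliasing concrete, here is a small sketch calling the variants side by side (hWnd is assumed to be a valid window handle obtained elsewhere):
#include <windows.h>

void setTitles(HWND hWnd)
{
    SetWindowTextA(hWnd, "ANSI title");         // char*, interpreted in the current code page
    SetWindowTextW(hWnd, L"Wide title");        // wchar_t*, UTF-16
    SetWindowText(hWnd, TEXT("TCHAR title"));   // resolves to the A or W variant per UNICODE
}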
char* : It means that this is a pointer to data of type char.
Example
// Regular char
char aChar = 'a';
// Pointer to a single heap-allocated char
char* aPointer = new char;
*aPointer = 'a';
// Pointer to an array of 10 chars
char* anArray = new char[ 10 ];
*anArray = 'a';
anArray[ 1 ] = 'b';
// Also a pointer to an array of 10 (note: "char[] anArray" is not valid C++, it must be char*)
char* anotherArray = new char[ 10 ];
*anotherArray = 'a';
anotherArray[ 1 ] = 'b';
wchar_t* : wchar_t is defined such that any locale's char encoding can be converted to a wchar_t representation where every wchar_t represents exactly one codepoint.
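A small sketch of that conversion using the standard mbstowcs, which depends on the current C locale (the buffer size here is an arbitrary assumption):
#include <cstdlib>
#include <clocale>

void wideExample()
{
    std::setlocale(LC_ALL, ""); // convert according to the environment's locale
    const char* narrow = "abc";
    wchar_t wide[16];
    size_t n = std::mbstowcs(wide, narrow, 15);
    if (n != static_cast<size_t>(-1))
        wide[n] = L'\0'; // one code point per wchar_t, explicitly terminated
}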

How to convert tchar pointer to char pointer

I want to convert a TCHAR* to a char*. Is this possible? If yes, how do I do it? I use the Unicode setting.
A TCHAR is either a plain char or a wchar_t depending on your project's settings. If it's the latter, you would need to use WideCharToMultiByte with appropriate code page parameter.
You can't convert the pointer; you need to allocate a new string that is "char" instead of "wchar_t".
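A hedged sketch of that allocate-and-convert step with WideCharToMultiByte, again using a two-pass size query; CP_UTF8 is an assumption, so pick the code page the receiving code expects:
#include <windows.h>
#include <string>

std::string wideToNarrow(const wchar_t* wide)
{
    // first call asks for the required size in bytes, including the null
    int len = WideCharToMultiByte(CP_UTF8, 0, wide, -1, nullptr, 0, nullptr, nullptr);
    if (len <= 0)
        return std::string();
    std::string narrow(len, '\0');
    WideCharToMultiByte(CP_UTF8, 0, wide, -1, &narrow[0], len, nullptr, nullptr);
    narrow.resize(len - 1); // drop the embedded null terminator
    return narrow;
}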
The most elegant way to do this is with the ATL conversion macros, because they hide all the allocation and the calls to the functions mentioned in the other comments.
Example:
#include <atlbase.h>
#include <atlconv.h>
#include <tchar.h>
#include <cstdio>
void YourFunction()
{
TCHAR wszHelloTchar[] = _T("Hello!\n"); // an array, not a single TCHAR
USES_CONVERSION; // a macro required once before using T2A() and other ATL macros
// printf requires a char*;
// T2A converts (if necessary) the TCHAR string to char
printf( "%s", T2A( wszHelloTchar ) );
}
I find wcstombs works great for doing this sort of thing.
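For completeness, a minimal wcstombs sketch; it is locale-dependent, and the fixed-size output buffer is an assumption:
#include <cstdlib>

void narrowExample(const wchar_t* wide)
{
    char narrow[256];
    size_t converted = std::wcstombs(narrow, wide, sizeof narrow - 1);
    if (converted == static_cast<size_t>(-1))
        return; // hit a character not representable in the current locale
    narrow[converted] = '\0'; // wcstombs does not terminate on truncation
    // ... use narrow ...
}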

Convert CString to character array?

How do I convert a CString in MFC to a char[] (character array)?
You use CString::GetBuffer() to get the TCHAR* (the pointer to the internal buffer). If you compiled without UNICODE defined, that's enough, since TCHAR is the same as char; otherwise you'll have to allocate a separate buffer and use WideCharToMultiByte() for the conversion.
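A short sketch of what that looks like in practice; the ATL/MFC header and CP_ACP are assumptions:
#include <atlstr.h> // CString (ATL/MFC)

void example()
{
    CString str(_T("Hello"));
    char out[256];
#ifdef UNICODE
    // wide build: convert UTF-16 to the current ANSI code page
    WideCharToMultiByte(CP_ACP, 0, str.GetString(), -1, out, sizeof out, nullptr, nullptr);
#else
    // narrow build: TCHAR is char, a plain copy suffices
    strcpy_s(out, str.GetString());
#endif
}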
I struggled with this, but what I use now is this (UNICODE friendly):
CString strCommand("My Text to send to DLL.");
char strPass[256];
// CStringA is the narrow (non-Unicode) version of CString;
// constructing one from a CString converts as needed
strcpy_s( strPass, CStringA(strCommand).GetString() );
This will then put your null-terminated char array in strPass for you.
Also, if you control the DLL on the other side, declaring your parameters as:
const char* strParameter
rather than
char* strParameter
will "likely" convert CStrings for you, the default cast generally being effective.
You can use the GetBuffer function to get the character buffer from a CString.
Calling GetBuffer alone is not sufficient; you'll also need to copy the buffer into the array.
For example:
CString sPath(_T("C:\\temp\\"));
TCHAR tcPath[MAX_PATH];
_tcscpy(tcPath, sPath.GetBuffer(MAX_PATH));
sPath.ReleaseBuffer(); // pair every GetBuffer with ReleaseBuffer
As noted elsewhere, you may need this if you are porting CString code that triggers warning C4840 (non-portable use of a class as an argument to a variadic function). The quick string conversion, for both Unicode and multibyte builds, is a static_cast.
Sample:
//was: Str1.Format( szBuffer, m_strName );
Str1.Format( szBuffer, static_cast<LPCTSTR>(m_strName) );