I created a basic Windows C++ application in Visual Studio 2015 and I have a few errors:
#include <windows.h>
#include <stdlib.h>
#include <string.h>
#include <tchar.h>
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
    MessageBox(NULL, "Test_text", "Message Test", MB_ICONINFORMATION | MB_OKCANCEL);
    return 0;
}
Errors:
'int MessageBoxW(HWND,LPCWSTR,LPCWSTR,UINT)': cannot convert argument 2 from
'const char [10]' to 'LPCWSTR'
argument of type "const char *" is incompatible with parameter of type "LPCWSTR"
argument of type "const char *" is incompatible with parameter of type "LPCWSTR"
You chose to use ANSI text, so you should call MessageBoxA explicitly instead of the MessageBox macro.
#include <windows.h>
#include <stdlib.h>
#include <string.h>
#include <tchar.h>
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
    MessageBoxA(NULL, "Test_text", "Message Test", MB_ICONINFORMATION | MB_OKCANCEL);
    return 0;
}
Alternatively, you may use the TEXT macro so the compiler automatically matches the type of the strings to the function variant.
#include <windows.h>
#include <stdlib.h>
#include <string.h>
#include <tchar.h>
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
    MessageBox(NULL, TEXT("Test_text"), TEXT("Message Test"), MB_ICONINFORMATION | MB_OKCANCEL);
    return 0;
}
The problem here is the Win32 TCHAR model.
There's actually no MessageBox function: MessageBox is a preprocessor #define that expands to either MessageBoxA or MessageBoxW, based on your project settings (ANSI/MBCS or Unicode, respectively).
Starting with VS2005, the default setting in Visual Studio has been Unicode (to be more precise: UTF-16). So the MessageBoxW API (i.e. the Unicode version) is picked in this case by the compiler.
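As a rough illustration (a simplified sketch of the pattern in the SDK headers, not the exact header text), the selection works like this:
// Simplified sketch: <winuser.h> declares both the narrow and the wide API,
// then maps the MessageBox name to one of them based on UNICODE.
int WINAPI MessageBoxA(HWND hWnd, LPCSTR  lpText, LPCSTR  lpCaption, UINT uType);
int WINAPI MessageBoxW(HWND hWnd, LPCWSTR lpText, LPCWSTR lpCaption, UINT uType);

#ifdef UNICODE
#define MessageBox  MessageBoxW
#else
#define MessageBox  MessageBoxA
#endif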
The MessageBoxW API takes Unicode (UTF-16) strings, represented via wchar_t pointers (the obscure LPCWSTR typedef expands to const wchar_t*, i.e. a NUL-terminated C-style Unicode UTF-16 string).
Unicode (UTF-16) string literals are represented using the L"..." syntax (note the L prefix).
So, while "Test_text" is an ANSI string literal, L"Test_text" is a Unicode (UTF-16) string literal.
Since you are (implicitly, via Visual Studio default settings) doing a Unicode build, you should decorate your string literals with the L prefix, e.g.:
MessageBox(nullptr,          // <--- prefer nullptr to NULL in modern C++ code
           L"Test_text",     // <--- Unicode (UTF-16) string literal
           L"Message Test",  // <--- Unicode (UTF-16) string literal
           MB_ICONINFORMATION | MB_OKCANCEL);
An alternative is to decorate the string literals using the _T("...") or TEXT("...") macros. These will be expanded to simple "..." ANSI string literals in ANSI/MBCS builds, and to Unicode (UTF-16) string literals L"..." in Unicode builds (which are the default in modern versions of Visual Studio).
// TEXT("...") works in both ANSI/MBCS and Unicode builds
MessageBox(nullptr,
           TEXT("Test_text"),
           TEXT("Message Test"),
           MB_ICONINFORMATION | MB_OKCANCEL);
Personally, I consider the TCHAR model an obsolete model from the past (I see no reason to produce ANSI builds of modern C++ Win32 applications), and considering that modern Windows APIs are Unicode-only (e.g. DrawThemeText()), I'd just decorate strings literals using the L"..." prefix, and kind of forget about the ANSI builds.
MessageBox in this case is actually MessageBoxW; it takes Unicode strings. You can fix it this way:
MessageBoxW(NULL, L"Test_text", L"Message Test", MB_ICONINFORMATION | MB_OKCANCEL);
or
MessageBox(NULL, TEXT("Test_text"), TEXT("Message Test"), MB_ICONINFORMATION | MB_OKCANCEL);
You can't pass a bare narrow string literal like that in a Unicode build.
MessageBox(NULL, TEXT("Test_text"), TEXT("Message Test"), MB_ICONINFORMATION | MB_OKCANCEL);
TEXT is a macro that expands to the right string type depending on how you compile.
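Roughly speaking (a simplified sketch, not the exact header text), the SDK headers define TEXT along these lines:
#ifdef UNICODE
#define __TEXT(quote) L##quote   // paste an L prefix, producing a wide literal
#else
#define __TEXT(quote) quote      // leave the literal narrow
#endif
#define TEXT(quote) __TEXT(quote)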
You need to choose the correct Character Set setting for your project. In the properties of your Visual Studio project, navigate to the General category; there you will find an entry named Character Set.
If you choose Unicode Character Set the compiler will define _UNICODE for you and all functions like MessageBox will evaluate to their wide-character variant, like MessageBoxW.
If you choose Multi-Byte (deprecated) the compiler will define _MBCS for you and the functions will evaluate to the multi-byte variants, like MessageBoxA.
The same applies to strings: the macros mentioned in the answers (like TEXT) will add an L in front of all your strings in a Unicode build.
See here for more information: https://msdn.microsoft.com/en-us/library/ey142t48.aspx
IMHO there are very few cases where one wants to write explicitly against the W or A functions. If you have to do that just to make your compiler happy, you should recheck your settings.
Related
In VC++ I have a solution with two projects. Project A has a dllLoader.h and dllLoader.cpp, which load a DLL with LoadLibrary, and I need to call its functions in Project B. So I copied and pasted the header and cpp file into Project B.
Project A Main.cpp
------------------
#include "../Plugin/DllLoader.h"
#include "../Plugin/Types.h"
int main()
{
    std::string str("plugin.dll");
    bool successfulLoad = LoadDll(str);
}
and here is the dllLoader in Project A (the mirror copy in Project B is kept in sync with changes made here)
bool LoadDll(std::string FileName)
{
    std::wstring wFileName = std::wstring(FileName.begin(), FileName.end());
    HMODULE dllHandle1 = LoadLibrary(wFileName.c_str());
    if (dllHandle1 != NULL)
    {
        return TRUE;
    }
    return FALSE;
}
Building the project itself succeeds without any errors, but when I build the solution (which contains other projects) I get the error
C2664 'HMODULE LoadLibraryA(LPCSTR)': cannot convert argument 1 from
'const _Elem *' to 'LPCSTR'
Your LoadDll() function takes a std::string as input, converts it (the wrong way 1) to std::wstring, and then passes that to LoadLibrary(). However, LoadLibrary() is not a real function, it is a preprocessor macro that expands to either LoadLibraryA() or LoadLibraryW() depending on whether your project is configured to map TCHAR to char for ANSI or wchar_t for UNICODE:
WINBASEAPI
__out_opt
HMODULE
WINAPI
LoadLibraryA(
__in LPCSTR lpLibFileName
);
WINBASEAPI
__out_opt
HMODULE
WINAPI
LoadLibraryW(
__in LPCWSTR lpLibFileName
);
#ifdef UNICODE
#define LoadLibrary LoadLibraryW
#else
#define LoadLibrary LoadLibraryA
#endif // !UNICODE
In your situation, the project that is failing to compile is configured for ANSI, thus the compiler error because you are passing a const wchar_t* to LoadLibraryA() where a const char* is expected instead.
The simplest solution is to just get rid of the conversion altogether and call LoadLibraryA() directly:
bool LoadDll(std::string FileName)
{
    HMODULE dllHandle1 = LoadLibraryA(FileName.c_str());
    ...
}
If you still want to convert the std::string to std::wstring 1, then you should call LoadLibraryW() directly instead:
bool LoadDll(std::string FileName)
{
    std::wstring wFileName = ...;
    HMODULE dllHandle1 = LoadLibraryW(wFileName.c_str());
    ...
}
This way, your code always matches your data and is not dependent on any particular project configuration.
1: the correct way to convert a std::string to a std::wstring is to use a proper data conversion method, such as the Win32 MultiByteToWideChar() function, C++11's std::wstring_convert class, a 3rd party Unicode library, etc. Passing std::string iterators to std::wstring's constructor DOES NOT perform any conversions, it simply expands the char values as-is to wchar_t, thus any non-ASCII char values > 0x7F will NOT be converted to Unicode correctly (UTF-16 is Windows's native encoding for wchar_t strings). Only the 7-bit ASCII characters (0x00 - 0x7F) are the same values in ASCII, ANSI codepages, Unicode UTF encodings, etc. Higher-valued characters require conversion.
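For illustration, here is a minimal sketch of such a conversion using MultiByteToWideChar; the helper name is made up, and it assumes the input std::string holds UTF-8 (use CP_ACP instead if it actually holds ANSI text):
#include <windows.h>
#include <string>

// Hypothetical helper: convert a UTF-8 std::string to a UTF-16 std::wstring.
// Error handling is reduced to returning an empty string.
std::wstring Utf8ToWide(const std::string& utf8)
{
    if (utf8.empty())
        return std::wstring();

    // First call asks how many wchar_t elements the result needs.
    int len = MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                                  static_cast<int>(utf8.size()), nullptr, 0);
    if (len <= 0)
        return std::wstring();

    std::wstring wide(len, L'\0');
    MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                        static_cast<int>(utf8.size()), &wide[0], len);
    return wide;
}
With a helper like this, LoadDll() could simply call LoadLibraryW(Utf8ToWide(FileName).c_str()).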
You pass a wide string to the function. So the code is clearly intended to be compiled targeting UNICODE, so that the LoadLibrary macro expands to LoadLibraryW. But the project in which the code fails does not target UNICODE. Hence the macro here expands to LoadLibraryA. And hence the compiler error because you are passing a wide string.
The problem therefore is that you have inconsistent compiler settings across different projects. Review the project configuration for the failing project to make sure that consistent conditionals are defined. That is, make sure that the required conditionals (presumably to enable UNICODE) are defined in all of the projects that contain this code.
I copied some code from a VC6 project to a VC++ 2010 project, but the code cannot be compiled.
The error is 'wsprintfW' : cannot convert parameter 1 from 'char *' to 'LPWSTR'; the problem code is:
inline bool RingCtrl::BuildPathAndName(char* pBuf, int bufSize, int8 priority, int idxNumber) const
{
    wsprintf(pBuf, "%s\\R%u%06x.DAT", _directory, (int)priority, idxNumber);
    return true;
}
wsprintf is declared in wmcommn.h as shown below:
WINUSERAPI
int
WINAPIV
wsprintfA(
__out LPSTR,
__in __format_string LPCSTR,
...);
WINUSERAPI
int
WINAPIV
wsprintfW(
__out LPWSTR,
__in __format_string LPCWSTR,
...);
#ifdef UNICODE
#define wsprintf wsprintfW
#else
#define wsprintf wsprintfA
#endif
You are building your program with UNICODE defined (default in VC++2010), while it was not defined in VC6. When UNICODE is defined wsprintf takes wchar_t* instead of char* as a first parameter (and const wchar_t* instead of const char* as a second one).
An easy solution would be to explicitly call wsprintfA instead of wsprintf in RingCtrl::BuildPathAndName, but in this case you'll have problems with Unicode file names. You'll probably have a lot of other similar errors connected to UNICODE and _UNICODE, so you may want to change your project settings in VC++ 2010: Project Properties -> General -> Character Set -> Use Multi-Byte Character Set.
The correct solution would be to move from char* to wchar_t* (or even better to TCHAR*), but that would probably require a lot of effort.
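For reference, the quick-fix variant mentioned above would look roughly like this (a sketch that assumes _directory is still a char buffer):
inline bool RingCtrl::BuildPathAndName(char* pBuf, int bufSize, int8 priority, int idxNumber) const
{
    // Call the ANSI variant explicitly so the char* buffer keeps working
    // regardless of whether UNICODE is defined for the project.
    wsprintfA(pBuf, "%s\\R%u%06x.DAT", _directory, (int)priority, idxNumber);
    return true;
}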
In Dev-C++ when I compile my program with
LPCTSTR ClsName = L"BasicApp";
LPCTSTR WndName = L"A Simple Window";
the compilation breaks, but when I compile my program with
LPCTSTR ClsName = "BasicApp";
LPCTSTR WndName = "A Simple Window";
it succeeds. Hence the question: how do I pass Unicode strings in Orwell Dev-C++ the way the L prefix works in Visual Studio?
See Microsoft's documentation about Working with Strings
Very near the start of this you can read:
To declare a wide-character literal or a wide-character string literal, put L before the literal.
wchar_t a = L'a';
wchar_t *str = L"hello";
(This information is not Microsoft-specific. It echoes the C/C++ standards)
Then if you consult the documentation that you cited in your comment and find the entry for LPCTSTR, you see that this type is defined conditionally, depending on whether UNICODE is defined:
#ifdef UNICODE
typedef LPCWSTR LPCTSTR;
#else
typedef LPCSTR LPCTSTR;
#endif
The entry for LPCWSTR tells you it is defined:
typedef CONST WCHAR *LPCWSTR;
And the entry for LPCSTR tells you it is defined:
typedef __nullterminated CONST CHAR *LPCSTR;
You are building your project without UNICODE defined. Accordingly,
LPCTSTR ClsName = L"BasicApp";
becomes:
__nullterminated CONST CHAR * ClsName = L"BasicApp";
which, by the definitions mentioned, involves initializing a CONST CHAR * with an incompatible pointer type, wchar_t *. Likewise for WndName.
To rectify this error, you must add UNICODE to the preprocessor definitions of your project. In the Orwell Dev-C++ IDE, do this by navigating Project -> Project Options -> Parameters; enter -DUNICODE in the text box headed C++ compiler and OK out. A Visual Studio C/C++ project defines UNICODE by default. Orwell Dev-C++ does not.
If you want to write definitions of string literals that are portable between unicode and the ANSI multibyte character set, then Working with Strings tells you how: read the entry for TCHARS. The portable definitions of your string literals will be:
LPCTSTR ClsName = TEXT("BasicApp");
LPCTSTR WndName = TEXT("A Simple Window");
The error I get is:
"DWORD GetModuleFileNameW(HMODULE,LPWSTR,DWORD)' : cannot convert parameter 2 from 'char *' to 'LPWSTR"
On this line
GetModuleFileName(NULL, &strL[0], MAX_PATH);
This is the code:
BOOL APIENTRY DllMain(HMODULE hModule, DWORD fdwReason, LPVOID lpReserved)
{
    switch (fdwReason)
    {
    case DLL_PROCESS_ATTACH:
    {
        std::string strL;
        strL.resize(MAX_PATH);
        GetModuleFileName(NULL, &strL[0], MAX_PATH);
        DisableThreadLibraryCalls(hModule);
        if (strL.find("notepad.exe") != std::string::npos)
        {
            gl_hThisInstance = hModule;
            LoadOriginalDll();
        }
        break;
    }
    case DLL_PROCESS_DETACH:
    {
        ExitInstance();
        break;
    }
    }
    return TRUE;
}
From MSDN,
typedef wchar_t* LPWSTR, *PWSTR;
So it is expecting a wchar_t * (wchar_t is 2 bytes or more), but &std::string[0] is a char* (char is a byte). You need to use std::wstring instead:
std::wstring strL;
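A sketch of how the DLL_PROCESS_ATTACH branch might look with wide strings (note that the literal passed to find() must become wide as well):
std::wstring strL;
strL.resize(MAX_PATH);
GetModuleFileNameW(NULL, &strL[0], MAX_PATH);          // explicit wide version
DisableThreadLibraryCalls(hModule);
if (strL.find(L"notepad.exe") != std::wstring::npos)   // wide literal to match std::wstring
{
    gl_hThisInstance = hModule;
    LoadOriginalDll();
}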
If you want your code to compile without using wide strings, refer to here:
How do I turn off Unicode in a VC++ project?
Chances are if Unicode is enabled in VC++, there are defines like this:
#ifdef UNICODE
#define CreateFile CreateFileW
#else
#define CreateFile CreateFileA
#endif // !UNICODE
Fix:
Have you tried: Project Properties - General - Project Defaults - Character Set?
See answers in this question for the differences between "Use Multi-Byte Character Set" and "Not Set" options: About the "Character set" option in visual studio 2010
And from the link inside the quote:
It is a compatibility setting, intended for legacy code that was written for old versions of Windows that were not Unicode enabled. Versions in the Windows 9x family; Windows ME was the last one. With "Not Set" or "Use Multi-Byte Character Set" selected, all Windows API functions that take a string as an argument are redefined to a little compatibility helper function that translates char* strings to wchar_t* strings, the API's native string type.
I am trying to convert a const char * to LPTSTR, but I do not want to use USES_CONVERSION to do it.
The following is the code I used to convert with USES_CONVERSION. Is there a way to convert using sprintf or tcscpy, etc.?
USES_CONVERSION;
jstring JavaStringVal = (some value passed from other function);
const char *constCharStr = env->GetStringUTFChars(JavaStringVal, 0);
LPTSTR lpwstrVal = CA2T(constCharStr); //I do not want to use the function CA2T..
LPTSTR has two modes:
An LPWSTR if UNICODE is defined, an LPSTR otherwise.
#ifdef UNICODE
typedef LPWSTR LPTSTR;
#else
typedef LPSTR LPTSTR;
#endif
Or, put another way: LPTSTR is wchar_t* or char*, depending on _UNICODE.
If your LPTSTR is non-Unicode: according to the MSDN Full MS-DTYP IDL documentation, LPSTR is a typedef of char *:
typedef char* PSTR, *LPSTR;
so you can try this:
const char *ch = "some chars ...";
LPSTR lpstr = const_cast<LPSTR>(ch);
USES_CONVERSION and related macros are the easiest way to do it. Why not use them? But you can always just check whether the UNICODE or _UNICODE macros are defined. If neither of them is defined, no conversion is necessary. If one of them is defined, you can use MultiByteToWideChar to perform the conversion.
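As a rough sketch (the helper name and the tstring alias are made up for illustration), a conversion without USES_CONVERSION could look like this:
#include <windows.h>
#include <string>

#ifdef UNICODE
typedef std::wstring tstring;   // matches LPTSTR = wchar_t* in Unicode builds
#else
typedef std::string tstring;    // matches LPTSTR = char* in ANSI builds
#endif

// Hypothetical helper: produce a TCHAR-compatible string from an ANSI const char*.
tstring FromAnsi(const char* src)
{
#ifdef UNICODE
    // Length includes the terminating null because cbMultiByte is -1.
    int len = MultiByteToWideChar(CP_ACP, 0, src, -1, nullptr, 0);
    if (len <= 0)
        return tstring();
    std::wstring result(len, L'\0');
    MultiByteToWideChar(CP_ACP, 0, src, -1, &result[0], len);
    result.resize(len - 1);     // drop the embedded terminator
    return result;
#else
    return tstring(src);        // no conversion needed in an ANSI build
#endif
}
You could then pass FromAnsi(constCharStr).c_str() wherever an LPCTSTR is expected.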
Actually that's a silly thing to do. JNIEnv already has a method to get the characters as Unicode: JNIEnv::GetStringChars. So just check for the UNICODE and _UNICODE macros to find out which method to use:
#if defined(UNICODE) || defined(_UNICODE)
// GetStringChars returns const jchar*; on Windows both jchar and wchar_t are
// 16-bit, so a cast is needed to treat the result as a wide-character string.
LPCTSTR lpszVal = reinterpret_cast<LPCTSTR>(env->GetStringChars(JavaStringVal, 0));
#else
LPCTSTR lpszVal = env->GetStringUTFChars(JavaStringVal, 0);
#endif
In fact, unless you want to pass the string to a method that expects a LPTSTR, you should just use the Unicode version only. Java strings are stored as Unicode internally, so you won't get the overhead of the conversion, plus Unicode strings are just better in general.