Using Unicode DLLs in MBCS projects or vice versa - C++

I have a VC++ DLL that is compiled with the character set set to 'Use Unicode Character Set'.
Now I want to use this DLL in my VC++ exe whose character set is 'Use Multi-Byte Character Set'. I know that theoretically nothing stops me from doing this: after compiling the VC++ DLL, all the function signatures are fixed as wchar_t or LPCWSTR, and in my VC++ exe I just create strings of that format and call the exported functions. But the problem I am facing is that the header of the Unicode DLL takes TCHAR parameters, for example:
class MYLIBRARY_EXPORT PrintableInt
{
public:
    PrintableInt(int value);
    ...
    void PrintString(TCHAR* somestr);
private:
    int m_val;
};
Now I want to use this PrintString() in my exe, so I included the header and used it like below:
#include "unicodeDllHeaders.h"
PrintableInt p(2);
wchar_t* someStr = L"Some str";
p.PrintString(someStr);
This, as expected, gives me a compiler error:
error C2664: 'PrintableInt::PrintString' : cannot convert parameter 1 from 'wchar_t *' to 'TCHAR *'
As the exe is built with MBCS, TCHAR is defined as char. So I thought this would solve the issue:
#define _UNICODE
#include "unicodeDllHeaders.h"
#undef _UNICODE
But even after defining _UNICODE I still get the compilation error. My next guess was that TCHAR.h was probably already included before the #include "unicodeDllHeaders.h", and when I searched for the inclusion of TCHAR.h, it was indeed included elsewhere in the project. Moving that inclusion to after the definition of _UNICODE solved the compilation error here, but it fails in the other places where TCHAR is expected to become char. So my question is:
Can I somehow make TCHAR resolve to both char and wchar_t in the same project? I tried #define TCHAR char, #undef TCHAR, and #define TCHAR wchar_t, but that fails in the standard headers like xutils.

You cannot retroactively change the binary interface of your DLL, regardless of what you do to your header file with macros. The typical solution is to drop the legacy MBCS stuff entirely. It stopped being interesting 10 years ago; nowadays every target that supports the Win32 API supports the wide-character interface with full Unicode support.
If you want that retro feeling of the nineties, you can also compile your DLL twice, once with CHAR and once with WCHAR. The latter then typically gets a "u" suffix (for Unicode). In your header, you then check the character set and delegate to the corresponding DLL using #pragma comment(lib, ...), as sketched below.
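A minimal sketch of what that delegating header could look like, assuming the two builds produce import libraries named MyLibrary.lib and MyLibraryu.lib (both names are made up for illustration):
#ifdef _UNICODE
#pragma comment(lib, "MyLibraryu.lib")   // wchar_t (Unicode) build of the DLL
#else
#pragma comment(lib, "MyLibrary.lib")    // char (MBCS) build of the DLL
#endif

class MYLIBRARY_EXPORT PrintableInt
{
public:
    PrintableInt(int value);
    // TCHAR now matches whichever build the pragma linked in
    void PrintString(TCHAR* somestr);
private:
    int m_val;
};
Each consumer then automatically links against the binary whose exported signatures match its own TCHAR definition.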

Related

How to write portable C++ code with Unicode support?

I have the program below, which tries to print Unicode characters to the console by enabling the _O_U16TEXT mode:
#include <iostream>
#include <cstdio>   // wprintf, _fileno
#include <fcntl.h>  // _O_U16TEXT
#include <io.h>     // _setmode

int main()
{
    _setmode(_fileno(stdout), _O_U16TEXT);
    wprintf(L"test\n \x263a\x263b Hello from C/C++\n");
    return 0;
}
What is unclear to me is that I have seen many C++ projects (with the same code running on Windows and Linux) using a macro called _UNICODE. I have the following questions:
Under what circumstances do I need to define the _UNICODE macro?
Does enabling the _UNICODE macro mean I need to separate the ASCII-related code by using #ifdef _UNICODE? In the case of the above program, would I need to put the code behind #ifdef _UNICODE and the ASCII processing in an #else?
What else do I need for the code to have Unicode support in C/C++? Do I need to link to any specific libraries on Windows and Linux?
Why does my sample program above not need to define the _UNICODE macro?
When I define the _UNICODE macro, how does the code know whether it uses UTF-8, UTF-16 or UTF-32? How do I decide between these encodings?
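For context on the first two questions: on Windows, _UNICODE selects the wide-character mappings in the CRT's TCHAR.h; the effect is roughly this (a simplified sketch, not the verbatim header):
#ifdef _UNICODE
typedef wchar_t TCHAR;
#define _T(x)    L##x       // _T("abc") becomes L"abc"
#define _tprintf wprintf
#else
typedef char TCHAR;
#define _T(x)    x          // _T("abc") stays "abc"
#define _tprintf printf
#endif
Code written directly against wchar_t and wprintf, like the sample above, never consults these mappings, which is why it builds without _UNICODE being defined.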

Why #define UNICODE has no effect on Windows

I have the following code:
#define UNICODE
// so strange??
GetModuleBaseName( hProcess, hMod, szProcessName,
sizeof(szProcessName)/sizeof(TCHAR) );
But the compiler still reports an error like this:
error C2664: "DWORD K32GetModuleBaseNameA(HANDLE,HMODULE,LPSTR,DWORD)": cannot convert argument 3 from "wchar_t [260]" to "LPSTR" [E:\source\mh-gui\build\src\mhgui.vcxproj]
So it cannot convert parameter 3 from wchar_t[260] to LPSTR. It looks like the compiler is still picking the A version of the API?
You must put
#define UNICODE
#define _UNICODE
BEFORE
#include <Windows.h>
The Windows headers use #ifdef UNICODE (et al.), so if you want the distinction to count, the #defines must occur before the #include.
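A minimal sketch of the correct ordering:
#define UNICODE     // Windows headers: GetModuleBaseName resolves to the ...W API
#define _UNICODE    // CRT/TCHAR.h: TCHAR becomes wchar_t
#include <Windows.h>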
Edit: Because these #defines are effectively global, the most reliable place to add them is in your compiler options; the ordering then no longer matters.
Since you are using Visual Studio, instead of defining UNICODE yourself you should enable the W versions by right-clicking the project in the Solution Explorer -> Properties -> Advanced -> and changing the "Character Set" option to "Use Unicode Character Set".

UNICODE always defined in Visual Studio 2013?

I use Visual Studio 2013 and I set my project's Character Set to 'Use Multi-Byte Character Set', but still, if I write
LPTSTR foo = new TCHAR[1000];
foo becomes a wchar_t*. On https://msdn.microsoft.com/en-us/library/windows/desktop/aa383751%28v=vs.85%29.aspx it says that
#ifdef UNICODE
typedef LPWSTR LPTSTR;
#else
typedef LPSTR LPTSTR;
#endif
hence I would suspect that, since I do not use UNICODE, foo would be a char*.
Have I misunderstood something conceptual, or is something off in the Studio?
Thanks in advance!
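One hypothetical way to narrow this down is a compile-time probe in the affected source file, which reports whether some other setting (a property sheet, a forced include, a stray /DUNICODE on the command line) is defining the macros behind your back:
#ifdef UNICODE
#pragma message("UNICODE is defined in this translation unit")
#endif
#ifdef _UNICODE
#pragma message("_UNICODE is defined in this translation unit")
#endif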

How to force Visual Studio to use wmain instead of main

I need to parse Unicode parameters, so I wanted to use wmain instead.
So instead of
int main(int argc, char** argv)
I would like to use
int wmain(int argc, wchar_t** argv)
The problem is that Visual Studio does not recognize wmain and tries to use main instead:
error LNK2019: unresolved external symbol main referenced in function __tmainCRTStartup
This is what I tried:
Changing the Properties->General->Character set
Changing the Entry Point (in this case I got lots of compatibility errors with libraries that don't even have an entry point, so it can't be specified there):
warning LNK4258: directive '/ENTRY:mainCRTStartup' not compatible with switch '/ENTRY:mainWCRTStartup'; ignored
Trying _tmain instead, just to find out it is just a macro that expands to main.
Using the #pragma comment(linker, "/SUBSYSTEM:CONSOLE /ENTRY:mainCRTStartup")
Using the UNICODE macro
Nothing helps.
Edit:
I would like to mention that I'm using the vs120_xp (Windows XP compatible) toolset, but when I tried the default one, it still didn't work.
Edit2:
I tried making a brand new project, and wmain worked there out of the box. I didn't have to change anything, so it has to be some specific setting in the current project that is causing this.
Using the #pragma comment(linker, "/SUBSYSTEM:CONSOLE /ENTRY:mainCRTStartup")
You are getting close, but not quite close enough. The CRT has four entry points:
mainCRTStartup => calls main(), the entrypoint for console mode apps
wmainCRTStartup => calls wmain(), as above but the Unicode version
WinMainCRTStartup => calls WinMain(), the entrypoint for native Windows apps
wWinMainCRTStartup => calls wWinMain(), as above but the Unicode version
So it is /ENTRY:wmainCRTStartup.
Do beware that the command-line arguments are converted to Unicode assuming the default console code page, which is a bit unpredictable; it is the legacy 437 OEM code page only in Western Europe and the Americas. The user might need to use the CHCP command (CHange Code Page) and tinker with the console window font to keep you happy. YMMV.
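Putting it together, a minimal program built with /ENTRY:wmainCRTStartup might look like this sketch (in an unmodified project the linker normally detects wmain on its own, as Edit2 above observes):
#include <cstdio>

int wmain(int argc, wchar_t** argv)
{
    // the arguments arrive already converted to UTF-16
    for (int i = 0; i < argc; ++i)
        wprintf(L"arg %d: %ls\n", i, argv[i]);
    return 0;
}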
I was merging the differences between the new project that worked properly and our project for hours until I found out that the problem was caused by our misconfiguration of the allegro preprocessor definitions.
Deep down in the allegro library, in win/alconfig.h, there are these lines:
#ifndef ALLEGRO_NO_MAGIC_MAIN
#if defined _MSC_VER && !defined ALLEGRO_LIB_BUILD
#pragma comment(linker,"/ENTRY:mainCRTStartup")
#endif
#endif
We did set up this macro for the allegro library compilation, but the allegro file was also included from the main project, which didn't specify it.
Defining the macro in the main project fixed the problem (obviously).
I didn't really see this coming!
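For illustration, the fix boils down to making the macro visible wherever the allegro headers are pulled in, e.g. (assuming the header is included as <allegro.h>; adding it to the project's preprocessor definitions achieves the same):
#define ALLEGRO_NO_MAGIC_MAIN   // must come before any allegro include
#include <allegro.h>            // alconfig.h now skips its /ENTRY pragma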

Compiling the MongoDB c++ driver without _UNICODE

I'm trying to compile the MongoDB C++ driver into my project and I've run across an interesting error.
In util/text.h, you can find this code:
/* like toWideString but UNICODE macro sensitive */
# if !defined(_UNICODE)
#error temp error
inline std::string toNativeString(const char *s) { return s; }
# else
inline std::wstring toNativeString(const char *s) { return toWideString(s); }
# endif
It looks like you should be able to compile it without the _UNICODE define, yet there is this seemingly arbitrary line #error temp error which causes the failure. On GitHub, this seems to have been the case for the lifetime of the file. Does anyone know if it's safe to remove it?
Unfortunately I can't just compile this project with Unicode, because there are a number of Unicode-incompatible sources in the project as well.
Cheers
Kyle