Recently I found out that passing/exposing C++ objects to or from a DLL is dangerous if the executable and the DLL were compiled with different compilers, or even with different versions or different settings of the same compiler! To say I was surprised is an understatement - all this time I thought I could create a dynamic library and distribute it safely.
Now, knowing this, I wonder how certain libraries work. Let's take a DLL in the system32 folder. With Dependency Walker I can see that in, say, d3d11.dll the functions are exported with C linkage - I guess this C convention is what guarantees it will work. I also know that d3d11 works with COM objects, so there's no risk of creating an object in the DLL and destroying it in the EXE.
But now let's take any Qt DLL - the functions are exported as C++ and all their names are mangled. I remember that when I installed the framework I specified my compiler - MSVC2013 - but I didn't specify an update version! I can use a DLL compiled with the Update 1 compiler together with the Update 5 compiler, and I can't recall any strange behavior. How does that work? How did they handle resolving mangled function names and passing parameters?
The usual reason a library under Windows has to be compiled with the same compiler is that the C++ standard library is not ABI-compatible between different versions of MSVC++. If a library passes C++ standard library types across the DLL boundary, that is only okay as long as the standard library used to build the library is identical to the one used by the code that consumes it.
Qt exposes its own strings (QString), vectors (QVector), etc. instead of std::string and std::vector, so Qt doesn't expose the C++ standard library across the DLL boundary; hence the Qt DLLs work with all versions of Visual Studio.
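To make the distinction concrete, here is a minimal sketch of what an exported interface looks like in each style; the function names and the EXPORT placeholder are hypothetical, not actual Qt API:

    // EXPORT stands in for the usual dllexport/dllimport macro.
    #define EXPORT __declspec(dllexport)
    #include <QString>
    #include <string>

    // Qt style: QString is implemented by the Qt DLL itself, so the caller and
    // the library agree on its layout and allocation regardless of which MSVC
    // update either of them was built with.
    EXPORT QString makeLabel(int id);

    // Risky style: std::string's layout and allocator come from whichever MSVC
    // standard library each side was built with; if they differ, this breaks.
    EXPORT std::string makeLabelStd(int id);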
Related
The Qt library promises binary compatibility across releases (within the same major version). But what about compatibility between different compilers?
For example, I have an application and 2 DLLs that use Qt (dynamically linked), but one of them is built with MSVC and the other with MinGW. Thus they are linked against different builds of the Qt library (of the same version).
The question is: is it possible for these DLLs to work together using one shared Qt DLL?
If not, then what workarounds are possible, considering that changing the compiler is not an option?
I looked at the Qt DLLs with Dependency Walker and I see that there are dozens of exported functions with compiler-specific name mangling. So it seems it is not possible to make them work together.
C++ does not have a standard ABI. That means binaries, including DLL files, usually aren't compatible between compilers (they may not even be compatible between different build options of the same compiler...). This is the case with MSVC vs. MinGW: no DLL compatibility.
Plain C libraries have a defined ABI on Windows, so they may be used with any compiler (you may still run into problems with libraries having incompatible dependencies, even if the ABI is compatible).
If recompiling one library is not an option, then the only workaround I know of is to run separate processes which communicate over some IPC mechanism. If both libraries offer GUI elements and you want to mix them in the same GUI, that's not possible (or at least it is hard: you would need to embed a window of one application inside the window of the other, sort of like an overlay).
I have a question related to embedding one library in another.
I have code that is pure C, and my users rely on that; they don't want to depend on C++ libraries. However, the need arose to embed a third-party library (ICU) into mine. None of the ICU functions would be exported; they would only be used internally in my library. Unfortunately, ICU is a C++ library, though it does have a C wrapper. ICU does not use exceptions, but it does use RTTI (abstract base classes).
The question is how could I create my static library so that
ICU is embedded in my library (all references to ICU functions are resolved within my library)
all references to libstdc++ are also resolved and the necessary code is embedded into my library
if a user does not even have libstdc++ installed on their system things work just fine
if a user does happen to use my library within a C++ project, then there are no conflicts with whatever libstdc++ (presumably the system libstdc++) they use.
Is this possible at all? The targeted platforms are pretty much everything: Windows (there my library is dynamic), and all sorts of Unix versions (Linux, Solaris, AIX, HP-UX - here my library needs to be static).
gcc 4.5 and later does have -static-libstdc++, but as far as I understand it only applies when creating shared libraries or executables, not static libraries.
Thanks for any help!
The solution to this problem is pretty simple, but may not fall inside the parameters you have set.
The simple rules are:
You can dynamically link a C++ library to a C caller, but you have to wrap the library inside an extern "C" layer. You design the extern "C" API and implement it using C++ internals, which are forever hidden from view (see the sketch after these rules). [You can also use COM or .NET to do this on Windows.]
You cannot statically link a C++ library to a C caller. The libraries are different and the calling sequences/linker symbols are different. [You often can't even statically link between different versions of the same compiler, and almost never between different compilers.]
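Here is a minimal sketch of such an extern "C" layer, assuming hypothetical names (engine_*); the C++ internals never appear in the exported API:

    /* engine_c_api.h -- the C API callers see; no C++ types cross the boundary */
    #ifdef __cplusplus
    extern "C" {
    #endif
    typedef struct engine engine;                 /* opaque handle */
    engine* engine_create(void);
    int     engine_process(engine* e, const char* input);
    void    engine_destroy(engine* e);
    #ifdef __cplusplus
    }
    #endif

    // engine_c_api.cpp -- C++ implementation hidden inside the dynamic library
    #include "engine_c_api.h"
    #include <string>

    struct engine { std::string state; };         // C++ internals stay inside

    extern "C" engine* engine_create(void) { return new engine{}; }
    extern "C" int engine_process(engine* e, const char* input) {
        e->state = input;                          // stand-in for real work
        return static_cast<int>(e->state.size());
    }
    extern "C" void engine_destroy(engine* e) { delete e; }

Because everything the caller touches is plain C (an opaque pointer and primitive types), the caller's compiler and runtime never have to agree with the library about C++ object layout.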
In other words, the solution is simple: use dynamic linking. If that's not the right answer, then I don't think there is one.
Just to make things interesting, you can even implement your own plug-in architecture. That's just another name for dynamic linking, but you get to choose the API.
Just to be clear, the only viable portable option I can see is to put ICU inside its own dynamic library (DLL or SO). Its symbols, C++ runtime, RTTI and exceptions all stay inside that. Your static lib talks to the ICU dynamic lib through an extern "C" interface. That's exactly how much of Windows itself is built: C++ inside the DLL, extern "C" at the boundary.
You can debug across the boundary, but you cannot export type information. If you need to do that, you will have to use a different API, such as .NET or COM.
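For the ICU case specifically, that boundary could look roughly like this (the facade name is made up; ICU's own C API could serve the same role). The facade is compiled into the ICU shared library together with ICU and libstdc++, and your static library only ever calls the C functions:

    /* unicode_facade.h -- the only interface your static library compiles against.
       Everything C++ (ICU, libstdc++, RTTI) stays inside the ICU shared library. */
    #ifdef __cplusplus
    extern "C" {
    #endif
    /* Hypothetical function: upper-cases UTF-8 text, returns bytes written or -1. */
    int facade_to_upper(const char* utf8_in, char* utf8_out, int out_capacity);
    #ifdef __cplusplus
    }
    #endif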
I don't know if this will work, but let me at least suggest that you try it!
The excellent LLVM project (the origin of the clang compiler) has many front-ends and back-ends for different languages, such as C++ and C. According to this S.O. question, it should be possible for LLVM to compile C++ into C, which in turn can be compiled as normal.
I imagine this route is a bumpy one, but if it works, it might solve your problem without the need to link dynamically. It all depends on if ICU will compile with LLVM C++.
If you do decide to give it a go, please let us know how you fare!
The question says it all.
I understand that VC11 is currently only in beta, but what I'm asking is:
experience with trying to link against a closed-source (widely used, if possible) library compiled with VC10
statements from Microsoft saying explicitly whether or not VC11 will be able to link against VC10 libraries.
I'm talking about the C++ case only.
You may want to read this answer for the case of dynamic linking.
Regarding static linking, I think you can't safely link C++ libraries built with VCx against code compiled with VCy. For example, STL container implementations change from version to version (and even within the same version there are differences between debug and release mode, and settings like _HAS_ITERATOR_DEBUGGING, etc.).
Quoting the VC++ STL maintainer:
The STL never has and never will guarantee binary compatibility
between different major versions. We're enforcing this with linker
errors when mixing object files/static libraries compiled with
different major versions that are both VC10+ [...]
That's a resounding no! Every major release of VS comes with a new version of the dynamic CRT: msvcr90.dll for VS2008, msvcr100.dll for VS2010, msvcr110.dll for VS11.
Using the dynamic CRT (the /MD compile option) matters when you return C++ objects like std::string from an exported function, or otherwise return any pointer that needs to be deleted by the client code. That can only work properly when the client code uses the exact same version of the CRT as the DLL. Implicitly, that won't be the case when these chunks of code each depend on their own msvcrXXX.dll version; they'll inevitably have incompatible CRTs that don't share the same heap allocator.
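A hedged sketch of the failure mode being described (the function name is made up):

    // Inside a DLL that was built against msvcr100.dll (VS2010), /MD:
    extern "C" __declspec(dllexport) char* make_message() {
        return new char[64];          // memory comes from the VC10 CRT's heap
    }

    // In an EXE built against msvcr110.dll (VS11), /MD:
    //     char* p = make_message();
    //     delete[] p;                // frees via the VC11 CRT -> heap corruption / UB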
You can write DLLs that are safe to use with any CRT version but that requires carefully crafting the API so that these dependencies do not exist. The COM Automation model is an example of that.
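One common way to remove that dependency, in the spirit of what is described above (though much simpler than full COM), is to have the DLL both allocate and free its own objects:

    // Both sides of the allocation live in the DLL, so only the DLL's CRT
    // ever touches this memory, whatever CRT the client happens to use.
    extern "C" __declspec(dllexport) char* message_create(void) {
        return new char[64];
    }
    extern "C" __declspec(dllexport) void message_destroy(char* p) {
        delete[] p;                   // same CRT that did the allocation
    }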
For dynamic libraries, there should be no problem, as they follow well-defined ABIs. You can link to DLLs produced by any compiler, any time.
Static libraries are trickier. As far as I know, Microsoft has never guaranteed cross-compiler compatibility for those. In particular, features such as link-time code generation have been known to break compatibility between releases. .lib files do not have a single well-defined format the way DLLs do.
It might work, because Microsoft rarely breaks compatibility unless they have to, but as far as I know, it is not guaranteed.
Of course, if the actual functions and types exposed by the DLLs don't match up, you'll run into problems.
In VC11, the sizes of almost all standard library data structures have been changed (Microsoft finally employs the empty base class optimization, effectively reducing the size of all containers which use the default allocator.), so trying to pass a std::string from a DLL compiled with VC10 into a module compiled by VC11 will certainly break.
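You can observe this kind of mismatch directly; the exact numbers depend on the compiler version and build settings, so treat them as illustrative only:

    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        // Build this once with VC10 and once with VC11 (same /MD, same Release
        // settings). The printed sizes differ between the two compilers, which
        // is why an object created on one side of a DLL boundary can't be read
        // safely on the other.
        std::cout << "sizeof(std::string)      = " << sizeof(std::string) << '\n';
        std::cout << "sizeof(std::vector<int>) = " << sizeof(std::vector<int>) << '\n';
    }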
I don't see any reason why they would be incompatible: it shouldn't matter which C++ compiler you used to produce the LIB files, as long as they follow the format specification. You could check this question if you are interested in the details of the format.
I have an old library (in C++) that I'm only able to build with MinGW+MSYS32 on Windows. From that I can produce a .a static library file via GNU libtool. My primary development is done in Visual Studio 2008. If I try to take the MinGW-produced library and link it in Visual Studio, it complains about missing externals. This is most likely due to C++ symbol mangling: what Visual Studio expects doesn't match what's in the .a file. Is there any known way to convert a static .a file to the VC++ library format?
If the symbols are mangled differently, the compilers are using different ABIs and you won't be able to "convert" the compiled libraries: names are mangled in deliberately different ways precisely to prevent users from ever linking an executable out of object files that use different ABIs. The ABI defines how arguments are passed, how virtual function tables are represented, how exceptions are thrown, the sizes of basic data types, and quite a number of other things. Name mangling is part of this and is carefully chosen to be efficient and distinct from other ABIs on the same platform. The easiest way to see whether the name mangling is indeed different is to compile a file containing a couple of functions that use only basic types with both compilers and look at the resulting symbols (on UNIX I would use nm for this; I'm not a Windows programmer and thus don't know what tool to use there).
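A concrete version of that test (the mangled spellings in the comments are typical examples and may vary slightly by version):

    // mangle_test.cpp -- compile with both toolchains and compare the symbol tables:
    //   MinGW: g++ -c mangle_test.cpp  && nm mangle_test.o
    //   MSVC:  cl /c mangle_test.cpp   && dumpbin /symbols mangle_test.obj
    int add(int a, int b) { return a + b; }
    // g++ (Itanium ABI) produces something like:  _Z3addii
    // MSVC produces something like:               ?add@@YAHHH@Z
    // Different spellings mean different ABIs; the linker cannot match them up.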
Based on Wikipedia, it seems that MinGW also uses a different run-time library: the layout and implementation of the standard C++ library used by MinGW (libstdc++) and by Visual Studio (Dinkumware) are not compatible. So even if the ABI between the two compilers were the same, the standard C++ libraries used aren't. In that case you would need to set things up so that one compiler uses the standard library of the other, and there is no way to do that with an already compiled library. I don't know if it is doable at all.
Personally, I wouldn't bother trying to link things across these two different compilers, because it is bound not to work. Instead, you'd need to port the implementation to a common compiler and go from there.
Search for the .def file and run the lib tool on it, e.g. lib /machine:i386 /def:test.def - it will generate an import .lib file.
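For reference, a module definition file is just a small text file listing the exports; test.def and its contents here are hypothetical:

    ; test.def -- describes the DLL and the symbols it exports
    LIBRARY test
    EXPORTS
        some_function
        another_function

Running lib /machine:i386 /def:test.def over it produces test.lib, an import library you can hand to the linker. Note that this only helps for C-style (unmangled) exports; it doesn't bridge the C++ ABI differences discussed above.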
Over the months I've written some nice, generic-enough functionality that I want to build as a library and link against dynamically, rather than importing 50-odd header/source files.
The project is maintained in Xcode and Dev-C++ (I do understand that I might have to go to the command line to do what I want) and has to link against OpenGL and SDL (dynamically in SDL's case). Target platforms are Windows and OS X.
What am I looking at here?
What will be the entry point of my library, if it needs one?
What do I have to change in my code? (calling conventions?)
How do I release it? My understanding is that the headers and the compiled library (.dll, .dylib (, .framework), whatever it'll be) need to be available for the project - especially as template functionality cannot be included in the library by nature.
What else do I need to be aware of?
I'd recommend building a static library rather than a DLL. A lot of the issues of exporting C++ functions and classes go away if you do this, provided you only intend to link with code produced by the same compiler you built the library with.
Building a static library is very easy, as it is just a collection of .o/.obj files - a bit like a ZIP file but without compression. There is no need to export anything - just include the library in the list of files that your application links with. To access specific functions or classes, just include the relevant header file. Note that you can't get rid of header files - the C++ compilation model, particularly for templates, depends on them.
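Assuming the same compiler for the library and the application (as stipulated above), building and using the static library is roughly this simple; the file names and commands are illustrative:

    // mylib.h -- an ordinary header; nothing needs to be "exported" for a static lib
    int triple(int x);

    // mylib.cpp
    #include "mylib.h"
    int triple(int x) { return 3 * x; }

    // Build the archive (illustrative commands):
    //   MSVC:           cl /c mylib.cpp   then  lib /OUT:mylib.lib mylib.obj
    //   gcc/Xcode side: g++ -c mylib.cpp  then  ar rcs libmylib.a mylib.o
    // The application just #includes mylib.h and adds mylib.lib / libmylib.a
    // to its link line.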
It can be problematic to export a C++ class library from a dynamic library, but it is possible.
You need to mark each function to be exported from the DLL (the syntax depends on the compiler). I'm poking around to see if I can find out how to do this from Xcode. In VC it's __declspec(dllexport) and in CodeWarrior it's #pragma export on/#pragma export off.
This is perfectly reasonable if you are only using your binary in-house. However, one issue is that C++ methods are named differently by different compilers. This means that nobody who uses a different compiler will be able to use your DLL, unless you are only exporting C functions.
Also, you need to make sure the calling conventions match between the DLL and the DLL's client. This means either passing the same default calling convention flag to the compiler for both the DLL and the client, or better, explicitly setting the calling convention on each exported function in the DLL, so that it won't matter what the default is for the client.
This article explains the naming issue:
http://en.wikipedia.org/wiki/Name_decoration
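A hedged sketch pulling the last two points together (the EXAMPLE_API / EXAMPLE_CALL / BUILDING_EXAMPLE names are a common convention, not something any of these tools generate for you):

    // example_api.h
    #if defined(_WIN32)
    #  if defined(BUILDING_EXAMPLE)              // defined only while building the DLL
    #    define EXAMPLE_API __declspec(dllexport)
    #  else
    #    define EXAMPLE_API __declspec(dllimport)
    #  endif
    #  define EXAMPLE_CALL __cdecl               // pin the calling convention explicitly
    #else
    #  define EXAMPLE_API                        // no decoration needed by default on OS X
    #  define EXAMPLE_CALL
    #endif

    // extern "C" avoids the compiler-specific C++ name mangling mentioned above.
    extern "C" EXAMPLE_API int EXAMPLE_CALL example_version(void);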
The C++ standard doesn't define a standard ABI, and that's bad news for people trying to build C++ libraries. This means that you get different behavior from your compiled code depending on which flags were used to compile it, and that can lead to mysterious bugs in code that compiles and links just fine.
This extends beyond just different calling conventions - C++ code can be compiled with or without support for RTTI and exception handling, and with various optimizations that can affect the memory layout of class instances, which C++ code relies on.
So, what can you do? I would build C++ libraries inside my source tree, and make sure that they're built as part of my project's build, and that all the libraries and the code that links to them use the same compiler flags.
Note that name mangling, which was supposed to at least prevent you from linking object files that were compiled with different compilers or compiler flags, only mostly works, and there are certain things you can do, especially with GCC, that will result in code that links just fine and fails at runtime.
You have to be extra careful with vendor-supplied dynamic C++ libraries (Qt on most Linux distributions, for example). I've seen instances of vendor-supplied libraries that were compiled in ways that prevented certain things from working properly. For example, some Red Hat Linux releases (maybe all of them) disabled exceptions in Qt, which made it impossible to catch exceptions in main() if the exceptions were thrown in a Qt callback. Fun.