I have searched the net without finding any conclusive answer to the issues caused by the lack of a C++ ABI when exporting C++ classes across DLL boundaries on Windows.
I can use extern "C" and provide a C-like API to a library, but I would like end users to be able to use classes that use STL containers.
What generic patterns do you use for safely exporting a class that uses STL containers across DLL boundaries, i.e. best practice?
This question is for experienced library authors.
Regards
Niladri
There's no defined C++ ABI, and it does differ between compilers in terms of memory layout, name mangling, RTL, etc.
It gets worse than that, though. If you are targeting the MSVC compiler, for example, your DLL/EXE can be built with different macros that configure the STL to omit iterator checks for speed. This changes the layout of STL classes, and you end up breaking the One Definition Rule (ODR) when you link (yet the link still succeeds). If the ODR is violated, your program will crash seemingly at random.
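To make that failure mode concrete, here is a simplified sketch. The struct names and layouts below are invented for illustration (real MSVC layouts are more involved), but they show why two modules that disagree on an STL configuration macro silently violate the ODR:

```cpp
// Hypothetical, simplified layouts of std::vector<int> as seen by two
// modules built with different iterator-checking settings. Both modules
// believe they are using the *same* type - that is the ODR violation.
struct vector_checked {            // iterator checks ON
    int*  first;
    int*  last;
    int*  end_of_storage;
    void* debug_iterator_list;     // extra bookkeeping added by the checks
};

struct vector_unchecked {          // iterator checks OFF
    int* first;
    int* last;
    int* end_of_storage;
};

// A vector constructed in the "checked" module and handed to the
// "unchecked" module (or vice versa) is read with the wrong layout,
// so member accesses land on the wrong bytes and corrupt memory.
```

The linker cannot diagnose this because each translation unit is internally consistent; the mismatch only exists across the boundary.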
I would recommend reading Imperfect C++, which has a chapter on the subject of C++ ABIs.
The upshot is that either:
You compile the DLL specifically for the target compiler that will link to it and name the DLLs appropriately (as Boost does). In this case you can't avoid ODR violations, so the user of the library must be able to recompile the library themselves with different options.
Provide a portable C API, plus C++ wrapper classes for convenience that are compiled on the client side of the API. This is time-consuming but portable.
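The second option is often called the "hourglass" pattern: everything crossing the boundary is plain C, and a thin header-only C++ class restores a nice interface on the client side. A minimal sketch follows (all names hypothetical, shown as a single file for brevity; in practice the three parts live in separate headers/sources):

```cpp
#include <numeric>
#include <vector>

// --- the flat C API exported by the DLL ---
extern "C" {
    typedef struct counter_t counter_t;          // opaque handle
    counter_t* counter_create(void);
    void       counter_destroy(counter_t* c);    // free on the DLL's own heap
    void       counter_add(counter_t* c, int value);
    int        counter_total(const counter_t* c);
}

// --- implementation side (compiled into the DLL) ---
struct counter_t {
    std::vector<int> values;                     // STL never crosses the boundary
};

extern "C" counter_t* counter_create(void)       { return new counter_t; }
extern "C" void counter_destroy(counter_t* c)    { delete c; }
extern "C" void counter_add(counter_t* c, int v) { c->values.push_back(v); }
extern "C" int  counter_total(const counter_t* c) {
    return std::accumulate(c->values.begin(), c->values.end(), 0);
}

// --- header-only C++ wrapper, compiled on the *client* side ---
class Counter {
public:
    Counter() : h_(counter_create()) {}
    ~Counter() { counter_destroy(h_); }
    Counter(const Counter&) = delete;            // handle owns the resource
    Counter& operator=(const Counter&) = delete;
    void add(int v)    { counter_add(h_, v); }
    int  total() const { return counter_total(h_); }
private:
    counter_t* h_;
};
```

Because the wrapper is compiled by the client's own compiler against its own STL, there is no layout mismatch, and all allocation/deallocation happens inside the DLL.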
Take a look at CppComponents https://github.com/jbandela/cppcomponents
You can export classes and interfaces across dll boundaries even across different compilers.
The library is header-only so nothing to build. To use it, you will need MSVC 2013 and/or GCC 4.8 as it uses C++11 variadic templates extensively.
See presentation from C++Now
https://github.com/boostcon/cppnow_presentations_2014/blob/master/files/cppnow2014_bandela_presentation.pdf?raw=true
https://github.com/jbandela/cppcomponents_cppnow_examples has the examples from the presentation.
Related
I'm trying to understand the correct way, or right approach, to provide a reasonably large C++ API for a non-open source project. I do not want to provide a "header only" library as the code base is fairly large and meant to be closed source. The goals are as follows:
Provide a native C++ API so that users may instantiate C++ classes and pass data around, all in C++, without a C-only wrapper
Allow methods to take as parameters and return C++ objects, especially STL types (std::string, std::vector, etc)
No custom allocators
Most industry standard / canonical method of doing this possible, if such a standard exists
No re-creating COM or using MS COM
Assume all C++ compilers are at least C++11 "compliant"
I am targeting Windows as well as other platforms (Linux).
My understanding is that creating a DLL or shared library is out of the question because of the DLL-boundary issue. To deal with it, the DLL and the calling code must be compiled with a dynamically linked runtime (and the right version of it: multithreaded/debug/etc. on Windows), and all compiler settings would need to match with respect to debug symbols (iterator-debugging settings, etc.). One question I have is whether, say on Windows, if we ensure the compiler settings match in terms of /MD using the default "Debug" and "Release" configurations in Visual Studio, we can really be "safe" using the DLL this way (that is, passing STL objects back and forth, and doing various things that would certainly be dangerous or fail if there were a mismatch). Do shared objects (*.so) on Linux under gcc have the same problem?
Does using static libraries solve this problem? How much do compiler settings need to match between a static library and calling code for which it is linked? Is it nearly the same problem as the DLL (on Windows)?
I have tried to find examples of libraries online but cannot find much guidance on this. Many resources discuss open-source solutions, which seem to amount to copying header and implementation files into the code base (for non-header-only libraries); that does not work for closed source.
What's the right way to do this? It seems like it should be a common issue; although I wonder if most commercial vendors just use C interfaces.
I am ok with static libraries if that solves the problem. I could also buy into the idea of having a set of X compilers with Y variations of settings (where X and Y are pre-determined list of options to support) and having a build system that generated X * Y shared binary libraries, if that was "safe".
Is the answer really only to do either C interfaces or pure abstract interfaces with factories? (If so, is there a canonical book or guide for doing this right, that is not implementing Microsoft COM?)
I am aware of Stefanus DuToit's Hourglass Pattern:
https://www.youtube.com/watch?v=PVYdHDm0q6Y
I worry that it is a lot of code duplication.
I'm not complaining about the state of things, I just want to understand the "right" way and hopefully this will serve as a good question for others in similar position.
I have reviewed these Stackoverflow references:
When to use dynamic vs. static libraries
How do I safely pass objects, especially STL objects, to and from a DLL?
Distributing Windows C++ library: how to decide whether to create static or dynamic library?
Static library API question (std::string vs. char*)
Easy way to guarantee binary compatibility for C++ library, C linkage?
Also have reviewed:
https://www.acodersjourney.com/cplusplus-static-vs-dynamic-libraries/
https://blogs.msmvps.com/gdicanio/2016/07/11/the-perils-of-c-interface-dlls/
Given your requirements, you'll need a static library (e.g. .lib under windows) that is linked into your program during the build phase.
The interface to that library will need to be in a header file that declares types and functions.
You might choose to distribute as a set of libraries and header files, if they can be cleanly broken into separate pieces - if you manage the dependencies between the pieces. That's optional.
You won't be able to define your own templated functions or classes, since (with most compilers) that requires distributing source for those functions or classes. Or, if you do, you won't be able to use them as part of the interface to the library (e.g. you might use templated functions/classes internally within the library, but not expose them in the library header file to users of the library).
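For example (a hypothetical sketch), a template can still be used inside the library as long as the public header declares only concrete, non-template entry points, so no template definitions need to ship to users:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Internal to the library: templates are fine here, because they are
// instantiated entirely inside the library's own translation units.
namespace detail {
    template <typename T>
    std::string join(const std::vector<T>& items, char sep) {
        std::ostringstream out;
        for (std::size_t i = 0; i < items.size(); ++i) {
            if (i) out << sep;
            out << items[i];
        }
        return out.str();
    }
}

// The public header declares only these concrete overloads; their
// definitions (below) are compiled into the distributed library.
std::string join_ints(const std::vector<int>& items, char sep) {
    return detail::join(items, sep);
}
std::string join_doubles(const std::vector<double>& items, char sep) {
    return detail::join(items, sep);
}
```

The cost is that you must enumerate the instantiations you support up front; the benefit is that no template source leaves the library.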
The main downside is that you will need to build and distribute a version of the library for each compiler and host system combination you support. The C++ standard specifically permits various types of incompatibility (e.g. different name mangling) between compilers, so, generally speaking, code built with one compiler will not interoperate with code built with another C++ compiler. Practically, your library will not work with a compiler other than the one it was built with, unless those compilers are specifically implemented (e.g. by agreement between both vendors) to be compatible. If you support a compiler that changes its ABI between versions (e.g. g++), then you'll need to distribute a version of your library built with each ABI version (e.g. with the most recent compiler version that supports each ABI).
An upside - which follows from having a build of your library for every compiler and host you support - is that there will be no problem using the standard library, including templated types or functions in the standard library. Passing arguments will work. Standard library types can be member of your classes. This is simply because the library and the program using it will be built with the same compiler.
You will need to be rigorous in including standard headers correctly (e.g. not relying on one standard header including another, unless the standard says it actually does - some library vendors are less than rigorous about this, which can cause your code to break when built with different compilers and their standard libraries).
There will mostly be no need for "Debug" and "Release" versions (or versions with other optimisation settings) of your library. Generally speaking, there is no problem with linking parts of a program that were compiled with different optimisation settings. (It is possible to make such things break if you - or a programmer using your library in their program - use exotic combinations of build options, so keep those to a minimum.) Distributing a "Debug" version of your library would permit stepping through it with a debugger, which seems counter to your wishes.
None of the above prevents you from using custom allocators, but none of it requires them either.
You will not need to recreate COM unless you really want to. In fact, you should aim to ensure your code is as standard as possible - minimise use of compiler-specific features, don't make particular assumptions about sizes of types or layout of types, etc. Using vendor specific features like COM is a no-no, unless those features are supported uniformly by all target compilers and systems you support. Which COM is not.
I don't know if there is a "standard/canonical" way of doing this. Writing code that works for multiple compilers and host systems is a non-trivial task, because there are variations between how different compiler vendors interpret or support the standard. Keeping your code simple is best - the more exotic or recent the language or standard library feature you use, the more likely you are to encounter bugs in some compilers.
Also, take the time to set up a test suite for your library, and maintain it regularly, to be as complete as possible. Test your library with that suite on every combination of compiler/system you support.
Provide a native C++ API so that users may instantiate C++ classes and pass data around, all in C++, without a C-only wrapper
This excludes COM.
Allow methods to take as parameters and return C++ objects, especially STL types (std::string, std::vector, etc)
This excludes DLLs
Most industry standard / canonical method of doing this possible, if such a standard exists
Not something "standard", but common practices exist. For example, in a DLL, pass only raw C types.
No re-creating COM or using MS COM
This requires DLL/COM servers
Assume all C++ compilers are at least C++11 "compliant"
Perhaps. Normally yes.
Generally: if source is to be available, use header-only (for templates) or .h + .cpp.
If no source, a DLL is best. With static libraries you have to build for many compilers, and the user has to carry your lib everywhere and link against it.
On Linux, gcc uses libstdc++ for the C++ standard library, while clang can use either libc++ or libstdc++. If your users are building with clang and libc++, they won't be able to link if you only build with gcc and libstdc++ (libc++ and libstdc++ are not binary compatible). So if you want to target both, you need two versions of your library: one for libstdc++ and another for libc++.
Also, binaries on Linux (executables, static libraries, dynamic libraries) are not binary compatible between distros. They might work on your distro and not on someone else's, or even on a different version of your own. Be super careful to test that everything works on whichever distros you want to target. Holy Build Box could help you produce cross-distribution binaries; I have heard good things about it, I just never tried it. Worst case, you might need to build on every Linux distro you want to support.
https://phusion.github.io/holy-build-box/
I'm curious about learning how certain C++ features work. I'm trying to learn C++11 concepts such as std::function, but I keep hitting walls like INVOKE(function, arguments, return) that I don't understand. People tell me, "Oh, just ignore it and use auto", but I want a truly deep understanding of how C++ and its standard library work, so I wanted to find the source code of the standard library.
I would guess that the C++ standard library is somewhat related to the C standard library, with messy assembly/binary implementations at the lowest level for things like std::iostream, but I'm more interested in higher-level abstractions like smart pointers and std::function. Given that many of the C++11 libraries were once Boost ones, how might I find the source for C++ standard library implementations?
The two most popular open source implementations of standard C++ library are:
libstdc++, part of gcc project
libc++, part of LLVM project
Both websites contain links to git/svn repositories with source code.
You might dive into the source code of libstdc++ if you care about GCC. Indeed, it is sometimes layered above the standard C library (e.g. ::operator new might call malloc, etc.).
Notice that since the C++ library is specified by the standard but shipped with the compiler, some of it might be implemented in compiler-specific ways.
In principle, nothing requires standard headers to be real operating-system files; a compiler could parse #include <vector> in some special way not involving any file. I know of no compiler that goes that far, though!
In particular, libstdc++ uses some GCC builtins and some GCC attributes (which happen to also be understood by Clang/LLVM).
And some standard types require (or profit from) internal support in the compiler. For example, GCC has some specific code to deal with va_list and with std::initializer_list (and of course basic types like int...), etc. Also, the compiler's implementation of C++ closures (or lambda functions) is related to other classes, etc...
Also, some optimization passes of GCC (it has several hundreds of them) are designed with some features of libstdc++ implementation in mind.
BTW, using a recent gdb debugger with e.g. the libstdc++6-4.9-dbg debian package might also help.
I have a library I'm building which is targeted to be a DLL that is linked into the main solution.
This new DLL is quite complex and I'd like to make use of C++11 features, while the program that will link against it most certainly will not. In fact, the main program is currently "cleanly" built using VS2008 and VS2010 (and I think GCC 4.3 for Linux?).
What I propose:
Using VS2012 as the IDE and Intel C++ Compiler 2013 for compilation to a .dll (/.so for Linux), which, as I understand, is basically compiled down to machine code (like an .exe).
While I'm familiar with using C++ to solve problems, I am not fluent in the fundamentals of compilation/linking, etc. Therefore, I'd like to ask the community if
This is possible
If it is possible, how easy is it (as simple as I described?) / what pitfalls or issues can I expect along the way (is it worth it)?
Areas of concern I anticipate:
runtime libraries - I expect this to be the factor that derails this effort. I know nothing about them/how they work except that they might be a problem.
Standard library implementation differences - should it matter once it's compiled down to DLL form?
threading conflicts - the dll threads and the main programs threads never modify the same data, and actually one of the main program's threads will call the DLL functions.
Bonus: While the above is the route I expect to take, I'd ideally like to have this code open for intellisense, general viewing, etc (essentially for it to become a project in the main solution). Is there a way to specify different runtime libraries/compiler? Can this be done?
EDIT: The main reason for this bonus part is to eliminate the necessary "versioning" conflicts that will arise if the main program and this library are built separately.
NOTE: I'm not using C++11 just for the sake of being newer - strongly typed enums and cross-platform threading code will be huge bonuses for the library.
The question isn't so much "Can an application use a library built with a different compiler ?" (The answer is yes.) but "What C++ features can be used in the public interface of a library built with another compiler and C++ standard library?"
On Windows, the answer is "almost none". Interfaces (classes containing only virtual functions) are about it. No classes with data members. No exceptions. No runtime objects (like iostream instances or strings). No templates.
On Linux, the answer is "lots more but still not many". Classes are ok, as long as the ODR is satisfied. Exceptions will work. Templates too, as long as the definition is exactly the same on both sides. But definitions of standard library types did change between C++03 and C++11, so you won't for example be able to pass std::string or std::vector<int> objects between the application and library (both sides can use these features, but the same object can't cross over).
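The Windows-safe subset described above (interfaces only, no data members in the public types, no cross-module deletion) typically looks like the following sketch; all names here are hypothetical:

```cpp
// Public header shipped to users: an interface of virtual functions
// using only C datatypes, plus a factory with C linkage.
class IAccumulator {
public:
    virtual void add(int value) = 0;
    virtual int  total() const = 0;
    virtual void destroy() = 0;      // deletion happens inside the DLL's heap
protected:
    ~IAccumulator() {}               // non-virtual: never delete via this pointer
};

extern "C" IAccumulator* create_accumulator();

// Inside the DLL: the concrete class never appears in the public header,
// so its layout, STL members, etc. never cross the boundary.
class AccumulatorImpl : public IAccumulator {
public:
    void add(int v) override      { sum_ += v; }
    int  total() const override   { return sum_; }
    void destroy() override       { delete this; }   // same heap that allocated it
private:
    int sum_ = 0;
};

extern "C" IAccumulator* create_accumulator() { return new AccumulatorImpl; }
```

Only the vtable layout and the C calling convention are relied upon, which is why this pattern (essentially COM minus the registry machinery) survives compiler mismatches on Windows.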
I'm afraid this is not possible with C++. In particular, name mangling can differ. All C++ files linked together need to be compiled with the same compiler.
In C++, extern "C" stuff is standardized (naming, calling convention), so C libraries can be called from C++, as can C++ functions declared within an extern "C" block. This excludes classes, templates, and overloads; mixing those across different compilers is not workable.
Which is a pity.
I know that this question has been asked previously, but before you give me a minus and report repeated question, ponder a while on this:
In all the previous answers, everybody says that object memory layout is compiler-dependent. How is it, then, that shared libraries (*.dll, *.so) can export and import C++ classes, and can evidently be combined even when they come from different compilers? Consider a DirectX application written under MinGW: DirectX was compiled using MSVC++, so how do those environments agree on memory layout? I know that DirectX relies heavily on C++ classes and polymorphism.
Asking differently: let's say I have a chosen architecture (e.g. Windows, Intel x86) and I am trying to write a new compiler. How do I know how to access a class instance (vtable, member fields) provided by a .dll compiled by another compiler? Or is it simply that Microsoft wrote VC++, it has since become an unwritten standard, and every other compiler does it the same way "for compatibility reasons"? And what about Linux or other OSes?
EDIT:
OK I admit, the example with DirectX was bad because of COM specification...
Another example: Qt. I am using Qt with MinGW, but I know MSVC versions are also available. I don't know if the difference is only in the headers, or if the shared libs (DLLs) are also different. If they are, does that mean I have to distribute my app with the Qt libs included, so that if anybody happens to have ones built for a different compiler they won't get mixed up? (Nice memory and code sharing then, right?) Or are they the same, and there is some unwritten law about it anyway?
EDIT2:
I have installed a different Qt version (MSVC 2010) just to see what is and isn't shared. It seems the shared libraries (are they really shared, then?) are different, so it seems I really have to ship the Qt libs with my app. And that is no small thing (e.g. QtGui is 8-9 MB). What about other, smaller libs whose authors weren't so kind as to provide versions for other compilers? Does that mean I am stuck with their original compiler? What if I want to use two different libs that were compiled by different compilers?
Basically you're asking about an ABI.
In some cases (e.g., on the Itanium) there's a document that specifies the ABI, and essentially everybody follows the document.
In the case of DirectX, it's a bit the same: Microsoft has published specs for COM, so anybody who follows those specs can get interop with essentially any COM object (within reason -- a 64-bit compiler probably isn't going to work with a 16-bit COM object written for Windows 3.1).
For most other things, you're more or less on your own to figure things out. There's often at least a little published in the way of documentation, but at least in my experience it often skims over some details that end up important, isn't updated entirely dependably, and in some cases is just plain wrong. In most cases, it's also poorly organized so it's pretty much up to you to glean what you can from wherever you can find it, and when you run out of information (or patience) do some reverse engineering to fill in the missing parts.
Edit: For non-COM things like Qt libraries, you have a couple of choices. One is to link to the Qt libraries statically, so the Qt code you need gets linked directly into your executable. If you want to use a DLL, then yes, you're pretty much stuck with distributing one with your app that's going to be specific to the compiler you use to generate the application itself. A different compiler (or even a different version or set of compilation flags with the same compiler) will generally require a different DLL (possibly the same source code, but built to accommodate the compiler change).
There are exceptions to this, but they're pretty much like I've outlined above. For example, the Intel compiler for Windows normally uses Microsoft's standard library, and can use most (if not all) other libraries built for Microsoft's compiler as well. This is pretty much because Intel has bent over backward to assure that their compiler uses the same calling convention, name mangling scheme, etc., as Microsoft's though. It works because they put in a lot of effort to make it work, not because of any unwritten law, or anything like that.
DirectX is a COM-based technology, and that is why it can be used from C and with various compilers. Under the hood a COM interface is a C-like structure which emulates the vtable.
Technically, DirectX has a midl-autogenerated C++ interface, but the general assertion that one can use classes exported in .dlls across different compilers is wrong.
Edit1:
Qt DLLs built with MSVC won't be compatible with ones built by gcc, unfortunately, for natural reasons: a different C++ compiler means a different ABI and a different runtime library (MinGW uses the older MSVCRT, which is why pure-C DLLs can be consumed from MSVC and vice versa). Some compilers match their ABIs incidentally or partially intentionally, but this is definitely not the case for MSVC/gcc. Qt can also be built as a static library, so for redistribution one might just link statically.
The name mangling of C++ classes in DLLs depends largely on the compiler front end used. Many commercial compilers from well-known companies use EDG's C++ parser, which is why their class names and overloaded functions have similar or matching signatures.
Edit2:
"What if I want to use two different libs that were compiled by different compilers?"
If you desperately need some specific functionality from both libraries (I mean some concrete operation, not the framework overall), then the way to go without having the library's source code is to write some wrapper and compile this wrapper to the C-style .dll.
Think of this like having "two different C++-es, C++-1 and C++-2". The problem with ABI/Runtime is no different from using Pascal code from C or linking with some older Fortran libs.
I came from the Linux world and know a lot of articles about maintaining backwards binary compatibility (BC) of a dynamic library API written in C++ language. One of them is "Policies/Binary Compatibility Issues With C++" based on the Itanium C++ ABI, which is used by the GCC compiler. But I can't find anything similar for the Microsoft C++ compiler (from MSVC).
I understand that most of the techniques are applicable to the MS C++ compiler and I would like to discover compiler-specific issues related to ABI differences (v-table layout, mangling, etc.)
So, my questions are the following:
Do you know any differences between MS C++ and GCC compilers when maintaining BC?
Where can I find information about MS C++ ABI or about maintaining BC of API in Windows?
Any related information will be highly appreciated.
Thanks a lot for your help!
First of all, these policies are general and do not refer to gcc only. For example, the private/public mark on functions is something specific to MSVC, not gcc.
So basically these rules are fully applicable to MSVC, and to compilers in general, as well.
But...
You should remember:
GCC has kept its C++ ABI stable since the 3.4 release, which is about 7 years now (since 2004), while MSVC breaks its ABI every major release: MSVC8 (2005), MSVC9 (2008) and MSVC10 (2010) are not compatible with each other.
Some flags frequently used with MSVC can break the ABI as well (like the exception-handling model).
MSVC has incompatible run-times for Debug and Release modes.
So yes, you can use these rules, but as usual with MSVC there are many more quirks.
See also "Some thoughts on binary compatibility"; note that Qt keeps its ABI stable with MSVC as well.
Note: I have some experience with this, as I follow these rules in CppCMS.
On Windows, you basically have 2 options for long term binary compatibility:
COM
mimicking COM
Check out my post here. There you'll see a way to create DLLs and access DLLs in a binary compatible way across different compilers and compiler versions.
C++ DLL plugin interface
The best rule for MSVC binary compatibility is to use a C interface. The only C++ feature you can get away with, in my experience, is single-inheritance interfaces. So represent everything as interfaces which use C datatypes.
Here's a list of things which are not binary compatible:
The STL. The binary format changes even just between debug/release, and depending on compiler flags, so you're best off not using STL cross-module.
Heaps. Do not new / malloc in one module and delete / free in another. There are different heaps which do not know about each other. Another reason the STL won't work cross-modules.
Exceptions. Don't let exceptions propagate from one module to another.
RTTI/dynamic_casting datatypes from other modules.
Don't trust any other C++ features.
In short, C++ has no consistent ABI, but C does, so avoid C++ features crossing modules. Because single inheritance is a simple v-table, you can usefully use it to expose C++ objects, providing they use C datatypes and don't make cross-heap allocations. This is the approach used by Microsoft themselves as well, e.g. for the Direct3D API. GCC may be useful in providing a stable ABI, but the standard does not require this, and MSVC takes advantage of this flexibility.
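Following these rules, even strings should cross the boundary as C data. One common pattern (the function name here is a hypothetical example) is to let the caller supply the buffer, so no allocation or std::string object ever crosses modules:

```cpp
#include <cstddef>
#include <cstring>

// Exported with C linkage: copies the name into a caller-supplied buffer
// and returns the size required (including the terminator), so the caller
// can retry with a bigger buffer if theirs was too small.
extern "C" std::size_t get_name(char* buffer, std::size_t buffer_size) {
    static const char name[] = "example-module";
    const std::size_t needed = sizeof(name);      // includes the '\0'
    if (buffer && buffer_size >= needed)
        std::memcpy(buffer, name, needed);
    return needed;
}
```

A typical caller first asks for the size with get_name(nullptr, 0), allocates on its own heap, and calls again; both allocation and deallocation stay on one side of the boundary.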