I have seen some very good questions on this, but I decided to ask my own because they weren't quite what I wanted to know.
There was a lot of talk about how you shouldn't pass std::string into a function exported from a DLL because everything has to match (CRT, platform version, etc.), but can you safely pass a const char* to such a function and then use std::string inside the DLL?
The reason I ask is that I recently discovered how amazingly useful C++ strings are and I want to use them with my DLL, but I have read that they are unsafe to use across DLLs because of runtime and template issues.
So my question is mainly, is there a "safe" way to use std::string in a .dll?
It's entirely safe to use it internally.
And don't worry too much about using a std::string across DLL boundaries either. You can't use it in a public interface, but when you ship a DLL with your own program you control both sides. It's quite common to have both the DLL and the EXE in two Visual Studio projects within one VS solution. That ensures they're built with the same platform version etc.
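To address the original question directly, a DLL can take a const char* at the boundary and use std::string freely on the inside; here is a minimal sketch (the exported function parse_count is a made-up name for illustration, not part of any real API):

// dll.cpp - only const char* and int cross the boundary
#include <sstream>
#include <string>

extern "C" __declspec(dllexport) int parse_count(const char* text)
{
    // std::string and std::istringstream live entirely inside the DLL,
    // so the CRT/STL version only matters on this side of the boundary.
    std::istringstream in{std::string(text)};
    std::string word;
    int count = 0;
    while (in >> word)
        ++count;
    return count;
}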
Feel free to use std::string in the DLL interface, but make sure the DLL and the application are built with the same standard library to avoid std::string boundary problems.
There is no documented ABI, so technically it can change with the compiler version and can depend on compiler settings.
You should be on the safe side if caller and DLL are built by the same version of Visual C++.
However this means: you cannot use it in a public interface, and you are giving up compatibility between versions (say, running an older EXE with a newer DLL - after upgrading your compiler).
Recently I have been using the Assimp library for my hobby 3D graphics engine project in order to load 3D models.
The library comes in the form of a DLL accompanied by an import library (LIB) and an EXP file (only when building from the latest sources; I understand it holds information about the exported functions, similar to the LIB).
Now, the surprise part, which made me ask this question, is the following:
I was able to correctly, without any build/link errors or run-time errors use the C++ interface of two different versions of the library:
The pre-built library, release 3.1.1, which was built and is dependent on an older runtime (MSVCP110.DLL, MSVCR110.DLL) (which I later decided to change to my own build since there was a linking error in that binary build)
My own source build made with the VS 2013 compiler, depending on MSVCP120.DLL and MSVCR120.DLL
I was able to use both of these library binaries with my executable binary, when built with either:
VS 2013 compiler
VS 2015 compiler
My surprise at successfully using the library as described above, without any errors, comes from:
reading all over the internet about how C++ libraries and interfaces are not binary compatible (no stable ABI) between different compiler versions, and thus cannot be used together with binaries built with other compilers. Exceptions: usage through C interfaces, which maintain ABI compatibility, and COM libraries (COM interfaces were designed to overcome the same issue).
Using binaries built with different compilers can cause problems because of function name mangling (perhaps solved by the import library?!) and because dynamic memory allocation and freeing are done by different versions of the allocators/destructors.
Can anyone provide some insight on this use case, on why I successfully managed to use a VS 2013 built dynamic library together with a VS 2015 toolset application without any of the above described problems? Also, the same applies when using the pre-built binary both with a VS 2013 build and a VS 2015 build of my 3D engine application.
Is it an isolated case or does the Assimp library itself take care of the problems and is implemented in such a way to ensure compatibility?
The actual usage of the library was pretty straightforward: I used a local stack variable of an Importer object (class) and from there on manipulated and read information from a bunch of structs; no other classes (I think)
As long as all the classes and memory used in the DLL are allocated, freed, and maintained inside the DLL's own code, and you don't pass standard library container objects across the interface boundary, you are completely safe doing it.
Even the use of std::string isn't allowed across DLL boundaries if the modules were built with different compilers, use the CRT differently, or link against different CRTs.
In fact, using plain interfaces (pure virtual classes) is safe for all VS C++ compilers. Also using extern "C" is safe.
So if the structures you exchange are just PODs or contain only plain data members, and as long as allocation and destruction are all done inside the DLL, you are free to mix and use DLLs built with different compilers.
The best examples are the DLLs of the Windows OS itself. They use a clearly defined interface of simple data structures. Memory management and allocation (for windows, menus, GDI objects) is done via an opaque handle. The DLLs are just black boxes for you.
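A tiny sketch of that handle-based style (Widget and the functions are made-up names for illustration, not a real API):

// widget_api.h - only plain C types and an opaque handle cross the boundary
extern "C" {
    typedef struct Widget Widget;              // opaque: its layout never leaves the DLL

    __declspec(dllexport) Widget* CreateWidget(const char* name);
    __declspec(dllexport) int     WidgetValue(const Widget* w);
    __declspec(dllexport) void    DestroyWidget(Widget* w);
}

// Inside the DLL, Widget can be a full C++ class using std::string:
#include <string>
struct Widget { std::string name; int value; };

Widget* CreateWidget(const char* name) { return new Widget{name, 42}; }  // allocated on the DLL's heap
int     WidgetValue(const Widget* w)   { return w->value; }
void    DestroyWidget(Widget* w)       { delete w; }                     // freed on the same heap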
Hope that covers all the points.
I'm working on an application that needs to be compatible back to Windows XP (yea... I know...). My colleagues are arguing that they don't want to use std::string because it might load some DLLs that could change the code's behavior.
I'm not sure whether they are right or wrong here. At some point, there has to be some common ground on which the application can be loaded.
So, given a context where the application has to be as self-contained as possible, would it be required to load a DLL in order to use the STL, std::string, or anything else from the standard libraries?
Also, assuming I use the -static-libstdc++ flag, by what order of magnitude will the executable be bigger?
On Windows, the STL is implemented by the CRT libraries. These libraries require the runtime DLLs to be deployed before you run the application. Compiling code with a different version of Visual Studio creates a dependency on that particular version of the CRT; code compiled with VS2013 needs a very different CRT than code compiled with VS2010. So you cannot pass STL objects from one version of the CRT to a DLL consuming a different version. Please go through the Microsoft articles before consuming the CRT libraries. The article below is for VS2013:
http://msdn.microsoft.com/en-us/library/abx4dbyh.aspx
I would suggest you use ATL::CString, which is much easier to work with, and its static library version is also much more compact than the CRT libraries.
I appreciate that similar questions have been asked before, but none of them quite address our issue, so I thought I'd ask for any insight.
[TL;DR] Version:
Is it possible / easy to create a shim dll for VS2005 CRT which then delegates all calls to VS2013 CRT?
Is it easier to downgrade a VS2013 build so that it uses the VS2005 CRT?
Or are both of these ideas just plain mad?
Our Situation:
Our C++ applications are built using modern versions of C++ compilers (VS2013, gcc 4.8 depending on the target platform) - some of the dependencies we use require a modern version since they use C++11.
Our applications also use run time linking (.dll or .so) to package the various application components.
On Windows, to make life simple, we compile using /MD (or /MDd for debug), i.e. we use the multi-threaded DLL version of the CRT - any dependencies we build are recompiled to use the same CRT.
Using the same CRT for all dependencies (in a separate dll) means we avoid problems if memory is allocated in one dll and freed in another since the same heap is used.
Our Problem
We have one dependency which is provided by a 3rd party (I won't name and shame them here - but you know who you are!) as a pre-compiled dll/so rather than as source.
This has been compiled using VS2005 (MSVC8) on Windows and uses the MSVC8 crt.
Their previous version of this dependency worked fine. Its API handled allocation and de-allocation of memory correctly, so even though it was using a different CRT version from the rest of our application it was not problematic.
Unfortunately their new ("improved"!) version makes heavy use of C++ templates in its API - these templates are instantiated in our calling code and result in attempts to delete, from within our code, memory that was allocated inside their DLL. Since the two sides use different heaps this is very problematic.
Possible Solutions:
To solve this problem we can see the following possible solutions
1] Get the supplier to rewrite (we've asked for this; it is the long-term solution but won't happen for some time).
2] Get the supplier to recompile using VS2013 so we have the same CRT on both sides (apparently just plain won't happen - we have asked).
3] Use the VS2005 CRT for all our code (we can't simply use VS2005 to build our code since we require C++11 support).
4] Shim the VS2005 CRT that the 3rd-party DLL is expecting - i.e. build something which looks like the VS2005 CRT but actually forwards to the VS2013 CRT.
5] Wrap the 3rd-party DLL - i.e. build a DLL compiled with VS2005 which contains all the calls to the 3rd-party dependency and provides correct memory handling so that it works with our other (VS2013-CRT-using) DLLs.
And Finally the Question:
Option 5] will probably be the most work (but has the greatest likelihood of success) - it's basically option 1] but under our control, so we don't have to wait for the 3rd party.
Before committing to this route though I thought I'd ask to see if others have done anything similar to 3] or 4] or some other solution we haven't thought of.
Option 4] (building a shimmed version of the CRT) seems possibly the simplest, since I imagine lots of the backward-compatibility work is already done in the VS2013 CRT - so it's just a case of finding out the details and forwarding the 2005 version of each call to the 2013 version - in many cases I imagine even the signature is identical, so it's just an import-library entry.
However, when I looked via Google I couldn't find any examples of anyone building shim DLLs for CRT version-incompatibility problems - so possibly it's much harder than I think (or my Google fu is severely lacking).
So I thought I'd just ask if anyone has done anything similar / knows why it's a really silly idea.
There's no reasonable way in which you can achieve that shimming.
The dealbreaker is that the STL implementation has changed, and those are templates. The DLL almost certainly has a mix of inlined and non-inlined calls to the VS2005 STL. If you swap out the VS2005 CRT for the VS2013 CRT, their DLL will have a mix of inlined VS2005 and non-inlined VS2013 calls. A definite disaster. That is assuming the VS2013 CRT even has the non-inlined versions needed.
That said, you can provide your own new and delete in your code. Those can forward to the corresponding VS2005 versions, which you retrieve at runtime via GetProcAddress. This could perhaps be sufficient - no guarantees.
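A bare-bones sketch of that idea, assuming the third-party DLL has already pulled in the VS2005 CRT (msvcr80.dll) so GetModuleHandle can find it; error handling is minimal and the nothrow/array forms of new/delete would need the same treatment:

// In the VS2013-built module: route global new/delete to the VS2005 CRT heap.
#include <windows.h>
#include <new>

typedef void* (__cdecl *malloc_fn)(size_t);
typedef void  (__cdecl *free_fn)(void*);

static malloc_fn vs2005_malloc = reinterpret_cast<malloc_fn>(
    ::GetProcAddress(::GetModuleHandleA("msvcr80.dll"), "malloc"));
static free_fn vs2005_free = reinterpret_cast<free_fn>(
    ::GetProcAddress(::GetModuleHandleA("msvcr80.dll"), "free"));

void* operator new(size_t size)
{
    if (void* p = vs2005_malloc ? vs2005_malloc(size) : nullptr)
        return p;                 // allocated on the VS2005 CRT heap
    throw std::bad_alloc();
}

void operator delete(void* p) noexcept
{
    if (p && vs2005_free)
        vs2005_free(p);           // freed on the same heap it came from
}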
I'm using Visual Studio 2010 to build my DLL library.
Another programmer, who's using Visual Studio 2005, wants to use my DLL library. He can compile against my DLL, but when running his application it just crashes with a bad_alloc exception. I assume that's because of the different CRT versions.
When building my DLL library, I tried both dynamic linking of CRT(/MD) and static linking of CRT(/MT), but both failed.
So, can't I make a DLL library that can be used by a lower version of Visual Studio? If I can, how do I do that?
As far as I know, you have to use only primitive types in a DLL interface. That's because even with the same compiler, the memory layout changes just by changing compile flags - think of what would happen if you changed the compiler. That could lead to large-scale undefined behavior.
Use the following format for your function interfaces:
extern "C" __declspec(dllexport) void doSomething(int input);
For binary compatibility, you can only export abstract base classes (containing at least one pure virtual function) with no data members; I guess that was behind the question about your DLL prototype. There is a good discussion of the issue here: http://chadaustin.me/cppinterface.html
The easiest solution: give the other programmer the source code of your DLL and let him compile the DLL himself against the old CRT. If that's not suitable (because you don't want to give the source code out of your hands, or your DLL does not compile with VC++ 2005), either you have to get a VC++ 2005 compiler, or the other programmer has to get VC++ 2010.
If you have VS2005 installed on your machine, you can use the new VS2010 Platform Toolset feature to build with the VS2005 compiler.
It's under Project Properties -> General -> Platform Toolset. v100 is VS2010, v90 is for 2008, and (I think) v80 is what you need (for 2005...).
AFAIK, trying to use a DLL built with a different toolset will be harder (though possible, as you don't link with it).
Cheers
Most likely, you are using things (like C++ containers or other types) whose implementations have changed between VC++ compiler versions, and passing these across boundaries between DLLs built with different VC++ versions tends to fail.
You need to build the DLL with that specific compiler (VC++ 8.0 for VS2005, VC++ 9.0 for 2008, VC++ 10.0 for 2010...) in order for the other programmer to use it. That, or he has to upgrade his Visual Studio to use the same version as you.
This is a fairly old problem, and it is one of the reasons COM exists. I would suggest you do the following:
Create an abstract base class (aka interface), say IExportedFunctionality, that exposes the methods your currently exported class (say CExportedClass) offers and that has a virtual destructor.
Don't export CExportedClass any more.
Derive CExportedClass from IExportedFunctionality and ensure all the methods are implemented.
Export 2 functions from your DLL, named:
a. GetExportedClass, which returns a pointer to a new instance of CExportedClass upcast to IExportedFunctionality*.
b. FreeExportedClass, which accepts an IExportedFunctionality* and deletes it.
Now all you need to do is provide the header file with the declaration of IExportedFunctionality. You can even do away with the lib file as your users can make use of LoadLibrary and GetProcAddress to call GetExportedClass and FreeExportedClass.
Note: IExportedFunctionality should have:
Only pure virtual functions - since IExportedFunctionality is an interface, not an implementation.
A pure virtual destructor - so that a delete on an IExportedFunctionality* translates into a call to CExportedClass's destructor.
No data members at all (as user383522 pointed out, the binary layout of the class can vary across compilers and under different byte alignments).
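Put together, a sketch of the pattern described above might look like this (DoWork stands in for whatever methods CExportedClass really offers):

// IExportedFunctionality.h - shipped to users of the DLL
class IExportedFunctionality
{
public:
    virtual void DoWork() = 0;               // placeholder for the real methods
    virtual ~IExportedFunctionality() = 0;   // pure virtual destructor
};
inline IExportedFunctionality::~IExportedFunctionality() {}

extern "C" __declspec(dllexport) IExportedFunctionality* GetExportedClass();
extern "C" __declspec(dllexport) void FreeExportedClass(IExportedFunctionality* p);

// Inside the DLL only:
class CExportedClass : public IExportedFunctionality
{
public:
    void DoWork() override { /* real implementation */ }
};

IExportedFunctionality* GetExportedClass() { return new CExportedClass(); }  // allocated on the DLL's heap
void FreeExportedClass(IExportedFunctionality* p) { delete p; }              // freed on the same heap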
Due to how Microsoft implements the heap in their non-DLL versions of the runtime, returning a C++ object from a DLL can cause problems:
// dll.h
#include <string>

DLL_EXPORT std::string somefunc();
and:
// app.cpp - not part of the DLL but in the main executable
void doit()
{
std::string str(somefunc());
}
The above code runs fine provided both the DLL and the EXE are built with the Multi-threaded DLL runtime library.
But if the DLL and EXE are built without the DLL runtime library (either the single or multi-threaded versions), the code above fails (with a debug runtime, the code aborts immediately due to the assertion _CrtIsValidHeapPointer(pUserData) failing; with a non-debug runtime the heap gets corrupted and the program eventually fails elsewhere).
Two questions:
Is there a way to solve this other than requiring that all code use the DLL runtime?
For people who distribute their libraries to third parties, how do you handle this? Do you not use C++ objects in your API? Do you require users of your library to use the DLL runtime? Something else?
Is there a way to solve this other than requiring that all code use the DLL runtime?
Not that I know of.
For people who distribute their libraries to third parties, how do you handle this? Do you not use C++ objects in your API? Do you require users of your library to use the DLL runtime? Something else?
In the past I distributed an SDK with DLLs, but it was COM-based. With COM, all the marshalling of parameters and IPC is done for you automatically. Users can also integrate with any language that way.
Your code has two potential problems: you addressed the first one, the CRT runtime. You have another problem here: std::string itself could change between VC++ versions. In fact, it has changed in the past.
The safe way to deal with this is to export only basic C types, and to export both create and release functions from the DLL. Instead of exporting a std::string, export a pointer.
__declspec(dllexport) void* createObject()
{
    std::string* p = __impl_createObject();   // internal helper that builds the string
    return (void*)p;                          // hand out an opaque pointer only
}

__declspec(dllexport) void releasePSTRING(void* pObj)
{
    delete ((std::string*)(pObj));            // freed by the same CRT that allocated it
}
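To make the example usable end to end, a companion accessor can expose the contents through a plain C type; getPSTRINGData is a made-up name added here for illustration, not part of the original answer:

__declspec(dllexport) const char* getPSTRINGData(void* pObj)
{
    return ((std::string*)pObj)->c_str();     // valid until releasePSTRING is called
}

// In the executable, only C types ever cross the boundary:
void useIt()
{
    void* obj = createObject();
    const char* text = getPSTRINGData(obj);   // read the contents while obj is alive
    releasePSTRING(obj);                      // the DLL frees it on its own heap
}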
There is a way to deal with this, but it's somewhat non-trivial. Like most of the rest of the library, std::string doesn't allocate memory directly with new -- instead, it uses an allocator (std::allocator<char>, by default).
You can provide your own allocator that uses your own heap allocation routines that are common to the DLL and the executable, such as by using HeapAlloc to obtain memory, and suballocate blocks from there.
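A minimal C++11-style sketch of such an allocator, using the process heap via HeapAlloc/HeapFree (SharedHeapAllocator and shared_string are illustrative names; this only fixes where the memory comes from and still assumes both modules agree on the std::string layout):

#include <windows.h>
#include <cstddef>
#include <new>
#include <string>

template <class T>
struct SharedHeapAllocator
{
    typedef T value_type;

    SharedHeapAllocator() = default;
    template <class U> SharedHeapAllocator(const SharedHeapAllocator<U>&) {}

    T* allocate(std::size_t n)
    {
        void* p = ::HeapAlloc(::GetProcessHeap(), 0, n * sizeof(T));  // shared process heap
        if (!p) throw std::bad_alloc();
        return static_cast<T*>(p);
    }
    void deallocate(T* p, std::size_t)
    {
        ::HeapFree(::GetProcessHeap(), 0, p);                         // same heap on either side
    }
};

template <class T, class U>
bool operator==(const SharedHeapAllocator<T>&, const SharedHeapAllocator<U>&) { return true; }
template <class T, class U>
bool operator!=(const SharedHeapAllocator<T>&, const SharedHeapAllocator<U>&) { return false; }

// A string type whose storage always comes from the shared heap:
typedef std::basic_string<char, std::char_traits<char>, SharedHeapAllocator<char>> shared_string;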
If you have a DLL that you want to distribute and you don't want to bind your callers to a specific version to the C-Runtime, do any of the following:
I. Link the DLL to the static version of the C runtime library. From the Visual Studio project properties page, select Configuration Properties -> C/C++ -> Code Generation. There's an option for selecting the "Runtime Library". Select "Multi-threaded" or "Multi-threaded Debug" instead of the DLL versions. (The command-line equivalent is /MT or /MTd.)
There are a couple of different drawbacks to this approach:
a. If Microsoft ever releases a security patch for the CRT, your shipped components may be vulnerable until you recompile and redistribute your binaries.
b. Heap pointers allocated by "malloc" or "new" in the DLL cannot be "free"d or "delete"d by the EXE or another binary; you'll crash otherwise. The same holds true for FILE handles created by fopen: you can't call fopen in the DLL and expect the EXE to be able to fclose it. Again, it will crash if you do. You will need to build the interface to your DLL to be resilient to all of these issues. For starters, a function that returns a std::string instance is likely going to be an issue. Provide functions exported by your DLL to handle the releasing of resources as needed, as in the sketch below.
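One resilient pattern is to never hand out memory the other side must free: the caller supplies the buffer and the DLL reports how much space it needs. GetDescription and BuildDescription are illustrative names only:

#include <cstring>
#include <string>

static std::string BuildDescription() { return "example"; }    // hypothetical internal helper

extern "C" __declspec(dllexport) int GetDescription(char* buffer, int bufferSize)
{
    std::string text = BuildDescription();          // std::string never crosses the boundary
    int needed = static_cast<int>(text.size()) + 1;
    if (buffer && bufferSize >= needed)
        std::memcpy(buffer, text.c_str(), needed);  // copy into caller-owned memory
    return needed;                                  // caller can retry with a larger buffer
}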
Other options:
II. Ship with no C-runtime dependency. This is a bit harder. You first have to remove all calls to the CRT from your code, provide some stub functions to get the DLL to link, and specify the "no default libraries" linking option. It can be done.
III. C++ classes can be exported cleanly from a DLL by using COM interface pointers. You'll still need to address the issues in Ia above, but ATL classes are a great way to get the overhead of COM out of the way.
The simple fact here is, Microsoft's implementation aside, C++ does NOT define an ABI. You cannot export C++ objects, on any platform, from a dynamic module and expect them to work with a different compiler or language.
Exporting C++ classes from DLLs is a largely pointless exercise. Because of name mangling, and the lack of support in C++ for dynamically loaded classes, the DLLs have to be loaded statically (at load time), so you lose the biggest benefit of splitting a project into DLLs: the ability to load functionality only as needed.