Implications of using std::vector in a dll exported function - c++

I have two dll-exported classes A and B. A's declaration contains a function which uses a std::vector in its signature like:
class EXPORT A {
public:
    // ...
    std::vector<B> myFunction(std::vector<B> const &input);
};
(EXPORT is the usual macro that expands to __declspec(dllexport)/__declspec(dllimport) accordingly.)
Reading about the issues related to using STL classes in a DLL interface, I gather in summary:
Using std::vector in a DLL interface would require all the clients of that DLL to be compiled with the same version of the same compiler, because STL containers are not binary compatible. Even worse, depending on how clients use that DLL in conjunction with other DLLs, the 'unstable' DLL API can break these client applications when system updates are installed (e.g. Microsoft KB packages) (really?).
Despite the above, if required, std::vector can be used in a DLL API by exporting std::vector<B> like:
template class EXPORT std::allocator<B>;
template class EXPORT std::vector<B>;
though this is usually mentioned in the context where one wants to use std::vector as a member of A (http://support.microsoft.com/kb/168958).
The following Microsoft Support Article discusses how to access std::vector objects created in a DLL through a pointer or reference from within the executable (http://support.microsoft.com/default.aspx?scid=kb;EN-US;Q172396). The above solution to use template class EXPORT ... seems to be applicable too. However, the drawback summarized under the first bullet point seems to remain.
To completely get rid of the problem, one would need to wrap std::vector and change the signature of myFunction, use PIMPL, etc.
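For reference, the kind of signature change meant in that last point might look roughly like this (a sketch only; A_myFunction and A_freeResult are invented names, the point being simply that no std::vector appears in the exported API):
#include <cstddef>

// Sketch only: a "flattened" boundary for myFunction.
extern "C" {
    // The DLL allocates the result array and reports its length through outCount.
    EXPORT B*   A_myFunction(A* self, B const* input, std::size_t inputCount, std::size_t* outCount);
    // The same DLL releases it, so allocation and deallocation stay on one heap.
    EXPORT void A_freeResult(B* result);
}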
My questions are:
Is the above summary correct, or do I miss here something essential?
Why does compilation of my class 'A' not generate warning C4251 (class 'std::vector<_Ty>' needs to have dll-interface to be used by clients of...)? I have not turned off any compiler warnings, and I get no warning about using std::vector in myFunction in exported class A (with VS2005).
What needs to be done to correctly export myFunction in A? Is it viable to just export std::vector<B> and B's allocator?
What are the implications of returning std::vector by value? Assume a client executable which has been compiled with a different compiler (version). Does the trouble persist when returning by value, where the vector is copied? I guess yes. Similarly for passing std::vector as a constant reference: could access to std::vector<B> (which might have been constructed by an executable compiled with a different compiler version) lead to trouble within myFunction? I guess yes again.
Is the last bullet point listed above really the only clean solution?
Many thanks in advance for your feedback.

Unfortunately, your list is very much spot-on. The root cause of this is that DLL-to-DLL or DLL-to-EXE interaction is defined at the level of the operating system, while the interface between functions is defined at the level of a compiler. In a way, your task is similar (although somewhat easier) to that of client-server interaction, when the client and the server lack binary compatibility.
The compiler maps what it can to the way the DLL importing and exporting is done in a particular operating system. Since language specifications give compilers a lot of liberty when it comes to binary layout of user-defined types and sometimes even built-in types (recall that the exact size of int is compiler-dependent, as long as minimal sizing requirements are met), importing and exporting from DLLs needs to be done manually to achieve binary-level compatibility.
When you use the same version of the same compiler, this last issue above does not create a problem. However, as soon as a different compiler enters the picture, all bets are off: you need to go back to the plainly-typed interfaces, and introduce wrappers to maintain nice-looking interfaces inside your code.
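As a rough illustration of such a wrapper (the flat A_myFunction/A_freeResult functions below are the hypothetical ones sketched in the question, not a real API), the client can rebuild its own std::vector locally in a header-only helper:
#include <vector>

// Client-side convenience wrapper, compiled entirely by the client: the std::vector
// below is built with the client's own STL, so no container object ever crosses
// the DLL boundary.
inline std::vector<B> myFunctionNice(A& a, std::vector<B> const& input)
{
    std::size_t outCount = 0;
    B const* inPtr = input.empty() ? 0 : &input[0];
    B* raw = A_myFunction(&a, inPtr, input.size(), &outCount);
    std::vector<B> result(raw, raw + outCount);   // copy into the client's own vector
    A_freeResult(raw);                            // freed by the DLL that allocated it
    return result;
}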

I've been having the same problem and discovered a neat solution to it.
Instead of passing std::vector, you can pass a QVector from the Qt library.
The problems you quote are then handled inside the Qt library and you do not need to deal with them at all.
Of course, the cost is having to use the library and accept its slightly worse performance.
In terms of the amount of coding and debugging time it saves you, this solution is well worth it.
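For what it's worth, the suggested signature change would look something like this (a sketch; it still assumes every module links against the very same Qt DLLs):
#include <QVector>

class EXPORT A {
public:
    // QVector replaces std::vector in the exported signature.
    QVector<B> myFunction(QVector<B> const &input);
};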

Related

Exporting C++ from dll - Domain and collections

There are several questions already on Stack Overflow regarding classes and functions across DLL boundaries.
A lot of them reference this article https://www.codeproject.com/Articles/28969/HowTo-Export-C-classes-from-a-DLL
I want to drill down a bit and ask some more specific questions.
The summary of most of those questions is:
1) Exporting templates and the STL across DLL boundaries is bad in general
2) You cannot allocate on one side of a DLL boundary and release on the other.
3) Your options are
a) Export only pure C interfaces
b) Export abstract classes that are free to use templates and the STL in their internal implementation.
Ok. Well, I've never ever done this in 10 years of coding C++ and it has never come back to bite me. I've followed what the article calls "The naive approach." However, I resent being called naive a little bit :), because I've always had 100% of the source in every solution, always built 100% of it with the same compiler and settings, and have been mindful of that.
Recently, I've collected evidence that I have a problem with an object being allocated on one side of a DLL boundary and released on the other. So, I may find myself having to change my ways. Perhaps this is a result of now using third-party libraries from NuGet, rather than compiling them from source.
So I wonder, how can I employ these options when it comes to domain objects (or, in my case, objects with no functionality)?
If I can never use std::string, then I assume everything exported has to use C-style strings, and the amount of copying of MBs of text just increased 500X in my solution, if I want to use std::string internally in my DLLs.
Worse, I am pondering the question: How do you then have domain objects that contain a collection?
Consider a class in my "naive dll"
class CustomerList
{
public:
    // where this is not trivial
    void AddCustomer(const Customer &customer);

private:
    std::vector<Customer> m_customers;
};
and a function or method in my dll
SubmitCustomers(const CustomerList & customers);
How do I make this safe?
Do I need to go reinvent vector in pure C without allocations?
Do I have to resort to arrays with sizes (yuck!) and then do yet more copying into STL containers internally?
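For illustration, option 3b above might end up looking roughly like this for the example (a sketch only; ICustomerList, DLL_API and the Create/Destroy factory functions are names invented here):
#include <cstddef>

// Abstract interface exported from the DLL; the std::vector stays a private
// implementation detail inside the DLL.
class ICustomerList
{
public:
    virtual ~ICustomerList() {}
    virtual void AddCustomer(const Customer &customer) = 0;
    virtual std::size_t Count() const = 0;
    virtual const Customer &At(std::size_t index) const = 0;
};

// Creation and destruction both live inside the DLL, so a single heap owns the object.
extern "C" DLL_API ICustomerList* CreateCustomerList();
extern "C" DLL_API void DestroyCustomerList(ICustomerList* list);
The client only ever sees the abstract interface and plain types; the std::vector<Customer> lives entirely inside the DLL's implementation of ICustomerList.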

Export dll method which returns STL object [duplicate]

I'm trying to export classes from a DLL that contain objects such as std::vectors and std::strings - the whole class is declared as DLL export through:
class DLL_EXPORT FontManager {
The problem is that for members of the complex types I get this warning:
warning C4251: 'FontManager::m__fonts' : class 'std::map<_Kty,_Ty>' needs to have dll-interface to be used by clients of class 'FontManager'
with
[
_Kty=std::string,
_Ty=tFontInfoRef
]
I'm able to remove some of the warnings by putting the following forward class declaration before them even though I'm not changing the type of the member variables themselves:
template class DLL_EXPORT std::allocator<tCharGlyphProviderRef>;
template class DLL_EXPORT std::vector<tCharGlyphProviderRef,std::allocator<tCharGlyphProviderRef> >;
std::vector<tCharGlyphProviderRef> m_glyphProviders;
Looks like the forward declaration "injects" the DLL_EXPORT for when the member is compiled but is it safe?
Does it really change anything when the client compiles this header and uses the std:: container on his side?
Will it make all future uses of such a container DLL_EXPORT (and possibly not inline)?
And does it really solve the problem that the warning tries to warn about?
Is this warning anything I should be worried about or would it be best to disable it in the scope of these constructs?
The clients and the DLL will always be built using the same set of libraries and compilers and those are header only classes...
I'm using Visual Studio 2003 with the standard STD library.
Update
I'd like to focus the question a bit more, though, as I see the answers are general, and here we're talking about std containers and types (such as std::string). Maybe the question really is:
Can we disable the warning for standard containers and types available to both the client and the DLL through the same library headers and treat them just as we'd treat an int or any other built-in type? (It does seem to work correctly on my side)
If so, what should be the conditions under which we can do this?
Or should using such containers maybe be prohibited, or at least ultra care be taken to make sure no assignment operators, copy constructors, etc. get inlined into the DLL client?
In general, I'd like to know whether you feel designing a DLL interface with such objects (and, for example, using them to return stuff to the client as return value types) is a good idea or not, and why. I'd like to have a "high level" interface to this functionality...
Maybe the best solution is what Neil Butterworth suggested - creating a static library?
When you touch a member in your class from the client, you need to provide a DLL-interface.
A DLL interface means that the compiler creates the function in the DLL itself and makes it importable.
Because the compiler doesn't know which methods are used by the clients of a DLL_EXPORT'ed class, it must enforce that all methods are dll-exported.
It must also enforce that all members which can be accessed by clients dll-export their functions. This is what the compiler is warning you about, and what the client's linker reports errors for.
Not every member must be marked with dll-export, e.g. private members not touchable by clients. Here you can ignore/disable the warnings (but beware of compiler-generated dtors/ctors).
Otherwise the members must export their methods.
Forward-declaring them with DLL_EXPORT does not export the methods of these classes. You have to mark the corresponding classes as DLL_EXPORT in their own compilation unit.
What it boils down to (for members that cannot be dll-exported):
If you have members which aren't/can't be used by clients, switch off the warning.
If you have members which must be used by clients, create a dll-export wrapper or create indirection methods.
To cut down the count of externally visible members, use approaches such as the PIMPL idiom.
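A minimal PIMPL sketch for the FontManager case might look like this (the Impl struct and its placement are assumptions, not code from the question):
// FontManager.h -- nothing STL-related appears in the exported class any more,
// so there is nothing for C4251 to warn about.
class DLL_EXPORT FontManager
{
public:
    FontManager();
    ~FontManager();
    // ... public font API, using only plain types ...

private:
    struct Impl;    // defined in FontManager.cpp, where the std::map can live freely
    Impl *m_impl;   // raw pointer on purpose: the header must not mention std:: types
};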
template class DLL_EXPORT std::allocator<tCharGlyphProviderRef>;
This creates an instantiation of the template specialization in the current compilation unit, so the methods of std::allocator are generated in the DLL and the corresponding methods exported. This does not work for concrete (non-template) classes, as it is only an explicit instantiation of a template class.
That warning is telling you that users of your DLL will not have access to your container member variables across the DLL boundary. Explicitly exporting them makes them available, but is it a good idea?
In general, I'd avoid exporting std containers from your DLL. If you can absolutely guarantee your DLL will be used with the same runtime and compiler version you'd be safe. You must ensure memory allocated in your DLL is deallocated using the same memory manager. To do otherwise will, at best, assert at runtime.
So, don't expose containers directly across DLL boundaries. If you need to expose container elements, do so via accessor methods. In the case you provided, separate the interface from the implementation and expose the interface at the DLL level. Your use of std containers is an implementation detail that the client of your DLL shouldn't need to access.
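Concretely, "expose container elements via accessor methods" could look something like this (method names are illustrative only):
class DLL_EXPORT FontManager
{
public:
    // Only plain types cross the boundary; the std::map stays a private detail
    // that client code never touches directly.
    size_t GetFontCount() const;
    const char *GetFontName(size_t index) const;
    // ...
};
For full layout safety this is usually combined with the PIMPL sketch shown above, so the map does not appear in the exported header at all.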
Alternatively, do what Neil suggests and create a static library instead of a DLL. You lose the ability to load the library at runtime, and consumers of your library must relink any time you change your library. If these are compromises you can live with, a static library would at least get you past this problem. I'll still argue you're exposing implementation details unnecessarily, but it might make sense for your particular library.
There are other issues.
Some STL containers are "safe" to export (such as vector), and some aren't (e.g. map).
Map for instance is unsafe because it (in the MS STL distribution anyway) contains a static member called _Nil, the value of which is compared in iteration to test for the end. Every module compiled with STL has a different value for _Nil, and so a map created in one module will not be iterable from another module (it never detects the end and blows up).
This would apply even if you statically link to a lib, since you can never guarantee what the value of _Nil will be (it's uninitialised).
I believe STLPort doesn't do this.
One alternative that few people seem to consider is not to use a DLL at all but to link statically against a static .LIB library. If you do that, all the issues of exporting/importing go away (though you will still have name-mangling issues if you use different compilers). You do of course lose the features of the DLL architecture, such as run-time loading of functions, but this can be a small price to pay in many cases.
The best way I found to handle this scenario is:
create your library, naming it with the compiler and STL versions included in the library name, exactly as the Boost libraries do.
examples:
- FontManager-msvc10-mt.dll for the DLL version, specific to the MSVC10 compiler, with the default STL.
- FontManager-msvc10_stlport-mt.dll for the DLL version, specific to the MSVC10 compiler, with STLport.
- FontManager-msvc9-mt.dll for the DLL version, specific to the MSVC 2008 compiler, with the default STL.
- libFontManager-msvc10-mt.lib for the static lib version, specific to the MSVC10 compiler, with the default STL.
Following this pattern, you will avoid problems related to different STL implementations. Remember, the STL implementation in VC2008 differs from the STL implementation in VC2010.
Here is your example using the Boost.Config library:
#include <boost/config.hpp>
#include <map>
#include <string>

#ifdef BOOST_MSVC
# pragma warning( push )
# pragma warning( disable: 4251 )
#endif

class DLL_EXPORT FontManager
{
public:
    std::map<int, std::string> int2string_map;
};

#ifdef BOOST_MSVC
# pragma warning( pop )
#endif
Found this article. In short, Aaron has the 'real' answer above: don't expose standard containers across library boundaries.
Though this thread is pretty old, I found a problem recently, which made me think again about having templates in my exported classes:
I wrote a class which had a private member of type std::map. Everything worked quite well until it got compiled in release mode, even when using a build system which ensures that all compiler settings are the same for all targets. The map was completely hidden and nothing was directly exposed to the clients.
As a result the code was just crashing in release mode. I guess because different binary std::map instances were created for the implementation and the client code.
I guess the C++ standard does not say anything about how this shall be handled for exported classes, as this is pretty much compiler specific. So I guess the biggest portability rule is to just expose interfaces and use the PIMPL idiom as much as possible.
Thanks for any enlightenment
In such cases, consider using the PIMPL idiom: hide all the complex types behind a single void* (or opaque pointer). The compiler then has nothing to warn about, since your members are private and all the methods that touch them are compiled into the DLL.
None of the workarounds above are acceptable with MSVC, because of the static data members inside template classes such as the STL containers.
Each module (DLL/EXE) gets its own copy of each static definition... wow! This will lead to terrible things if you somehow 'export' such data (as pointed out above), so don't try this at home.
See http://support.microsoft.com/kb/172396/en-us
Exporting classes containing std:: objects (vector, map, etc) from a dll
Also see Microsoft's KB 168958 article How to export an instantiation of a Standard Template Library (STL) class and a class that contains a data member that is an STL object. From the article:
To Export an STL Class
1) In both the DLL and the .exe file, link with the same DLL version of the C run time. Either link both with Msvcrt.lib (release build) or link both with Msvcrtd.lib (debug build).
2) In the DLL, provide the __declspec specifier in the template instantiation declaration to export the STL class instantiation from the DLL.
3) In the .exe file, provide the extern and __declspec specifiers in the template instantiation declaration to import the class from the DLL. This results in a warning C4231 "nonstandard extension used : 'extern' before template explicit instantiation." You can ignore this warning.
And:
To Export a Class Containing a Data Member that Is an STL Object
1) In both the DLL and the .exe file, link with the same DLL version of the C run time. Either link both with Msvcrt.lib (release build) or link both with Msvcrtd.lib (debug build).
2) In the DLL, provide the __declspec specifier in the template instantiation declaration to export the STL class instantiation from the DLL. NOTE: You cannot skip step 2. You must export the instantiation of the STL class that you use to create the data member.
3) In the DLL, provide the __declspec specifier in the declaration of the class to export the class from the DLL.
4) In the .exe file, provide the __declspec specifier in the declaration of the class to import the class from the DLL. If the class you are exporting has one or more base classes, then you must export the base classes as well. If the class you are exporting contains data members that are of class type, then you must export the classes of the data members as well.
If you use a DLL, initialize all objects at the DLL_PROCESS_ATTACH event and export pointers to those classes/objects.
You may provide specific functions to create and destroy the objects, and functions to obtain pointers to the objects created, so that you can encapsulate these calls in an access wrapper class in the include file.
The best approach to use in such scenarios is to use the PIMPL design pattern.

Is it safe to use strings as private data members in a class used across a DLL boundary?

My understanding is that exposing functions that take or return STL containers (such as std::string) across DLL boundaries can cause problems due to differences in the STL implementations of those containers in the two binaries. But is it safe to export a class like:
#include <string>

class Customer
{
public:
    wchar_t * getName() const;
private:
    std::wstring mName;
};
Without some sort of hack, mName is not going to be usable by the executable, so it won't be able to execute methods on mName, nor construct/destruct this object.
My gut feeling is "don't do this, it's unsafe", but I can't figure out a good reason.
It is not a problem, because it is trumped by the bigger problem: you cannot create an object of that class in code that lives in a module other than the one that contains the code for the class. Code in another module cannot accurately know the required object size; its implementation of the std::string class may well be different, which, as declared, also affects the size of the Customer object. Even the same compiler cannot guarantee this, when mixing optimized and debugging builds of these modules for example. Albeit that this is usually pretty easy to avoid.
So you must create a class factory for Customer objects, a factory that lives in that same module. Which then automatically implies that any code that touches the "mName" member also lives in the same module. And is therefore safe.
The next step then is not to expose Customer at all but to expose a pure abstract base class (aka interface). Now you can prevent the client code from creating an instance of Customer and shooting their leg off. And you'll trivially hide the std::string as well. Interface-based programming techniques are common in module interop scenarios; it is also the approach taken by COM.
As long as the allocation and deallocation of instances of the class happen with the same settings, you should be OK, but you are right to avoid this.
Differences between the .exe and .dll in debug/release or code generation settings (Multi-threaded DLL vs. Single-threaded) could cause problems in some scenarios.
I would recommend using abstract classes in the DLL interface with creation and deletion done solely inside the DLL.
Interfaces like:
class A {
protected:
    virtual ~A() {}
public:
    virtual void func() = 0;
};

// exported create/delete functions
A* create_A();
void destroy_A(A*);
DLL Implementation like:
class A_Impl : public A {
public:
    ~A_Impl() {}
    void func() { do_something(); }
};

A* create_A() { return new A_Impl; }

void destroy_A(A* a) {
    A_Impl* ai = static_cast<A_Impl*>(a);
    delete ai;
}
Should be ok.
Even if your class has no data members, you cannot expect it to be usable from code compiled with a different compiler. There is no common ABI for C++ classes. You can expect differences in name mangling just for starters.
If you are prepared to constrain clients to use the same compiler as you, or provide source to allow clients to compile your code with their compiler, then you can do pretty much anything across your interface. Otherwise you should stick to C style interfaces.
If you want to provide an object oriented interface in a DLL that is truly safe, I would suggest building it on top of the COM object model. That's what it was designed for.
Any other attempt to share classes between code that is compiled by different compilers has the potential to fail. You may be able to get something that seems to work most of the time, but it can't be guaranteed to work.
The chances are that at some point you're going to be relying on undefined behaviour in terms of calling conventions or class structure or memory allocation.
The C++ standard does not say anything about the ABI provided by implementations. Even on a single platform changing the compiler options may change binary layout or function interfaces.
Thus, to ensure that standard types can be used across DLL boundaries, it is your responsibility to ensure that resource acquisition/release for standard types is done by the same DLL. (Note: you can have multiple CRTs in a process, but a resource acquired by crt1.dll must be released by crt1.dll.)
This is not specific to C++. In C, for example, the malloc/free and fopen/fclose call pairs must each go to a single C runtime.
This can be done in either of the two ways below:
1) By explicitly exporting acquisition/release functions (Photon's answer). In this case you are forced to use a factory pattern and abstract types. Basically COM, or a COM clone.
2) By forcing a group of DLLs to link against the same dynamic CRT. In this case you can safely export any kind of functions/classes.
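A minimal sketch of option 1 (the function names here are invented):
#include <cstddef>

// The module that acquires a resource is the module that releases it; the caller
// only holds the pointer in between and always hands it back to the same DLL.
extern "C" __declspec(dllexport) char* Dll_CreateBuffer(std::size_t size);
extern "C" __declspec(dllexport) void  Dll_ReleaseBuffer(char* buffer);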
There are also two "potential bugs" (among others) you must take care of, since they relate to what is "under" the language.
The first is that std::string is a template, and hence it is instantiated in every translation unit. If they are all linked into the same module (EXE or DLL), the linker will resolve identical functions to the same code, and inconsistent code (the same function with a different body) is treated as an error.
But if they are linked into different modules (an EXE and a DLL), the compiler and linker have nothing in common. So, depending on how the modules were compiled, you may have different implementations of the same class with different members and memory layout (for example, one may have some debugging or profiling features added that the other has not). Accessing an object created on one side with methods compiled on the other side, if you have no other way to guarantee implementation consistency, may end in tears.
The second problem (more subtle) relates to allocation/deallocation of memory: because of the way Windows works, every module can have a distinct heap, but standard C++ does not specify how new and delete keep track of which heap an object comes from. If the string buffer is allocated in one module, then moved to a string instance in another module, you risk (upon destruction) giving the memory back to the wrong heap. (It depends on how new/delete and malloc/free are implemented with respect to HeapAlloc/HeapFree; this merely relates to the level of "awareness" the STL implementation has of the underlying OS. The operation is not itself destructive, it just fails, but it leaks memory in the originating heap.)
All that said, it is not impossible to pass a container. It is just up to you to guarantee a consistent implementation on both sides, since the compiler and linker have no way to cross-check.

How to implement an adapter framework in C++ that works in both Linux and Windows

Here is what I am trying to do:
I am developing a cross-platform IDE (Linux and Windows) that supports plug-ins. I need to support extensibility using an adapter framework similar to the one that Eclipse provides. See here for more details, but basically I need the following:
Let Adaptee and Adapted be completely unrelated classes which already exist and which we are not allowed to change in any way. I want to create an AdapterManager class which has a method
template <class Adaptee, class Adapted> Adapted* adapt( Adaptee* object);
which will create an instance of Adapted given an instance of Adaptee. How exactly the instance is created depends on an adapter function which will have to be registered with AdapterManager. Each new plug-in should be able to contribute adapter functions for arbitrary types.
Here are my thoughts about a possible solution and why it does not work:
C++11's RTTI functions and the type_info class provide a hash_code() method which returns a unique integer for each type in the program. See here. Thus AdapterManager could simply contain a map that given the hash codes for the Adaptee and Adapter classes returns a function pointer to the adapter function. This makes the implementation of the adapt() function above trivial:
template <class Adaptee, class Adapted> Adapted* AdapterManager::adapt( Adaptee* object)
{
    AdapterMapKey mk( typeid(Adapted).hash_code(), typeid(Adaptee).hash_code());
    AdapterFunction af = adapterMap.get(mk);
    if (!af) return nullptr;
    return (Adapted*) af(object);
}
Any plug-in can easily extend the framework by simply inserting an additional function into the map. Also note that any plug-in can try to adapt any class to any other class and succeed if there exists a corresponding adapter function registered with AdapterManager regardless of who registered it.
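Registration would then be the mirror image of adapt(); a rough sketch (registerAdapter is a hypothetical name, and this assumes adapterMap behaves like a std::map<AdapterMapKey, AdapterFunction>):
// Hypothetical registration entry point a plug-in would call once, at load time,
// for each adapter function it contributes.
template <class Adaptee, class Adapted>
void AdapterManager::registerAdapter(AdapterFunction af)
{
    AdapterMapKey mk(typeid(Adapted).hash_code(), typeid(Adaptee).hash_code());
    adapterMap[mk] = af;
}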
A problem with this is the combination of templates and plug-ins (shared objects / DLLs). Since two plug-ins can instantiate a template class with the same parameters, this could potentially lead to two separate instances of the corresponding type_info structures and potentially different hash_code() results, which will break the mechanism above. Adapter functions registered from one plug-in might not always work in another plug-in.
In Linux, the dynamic linker seems to be able to deal with multiple declarations of types in different shared libraries under some conditions according to this (point 4.2). However the real problem is in Windows, where it seems that each DLL will get its own version of a template instantiation regardless of whether it is also defined in other loaded DLLs or the main executable. The dynamic linker seems quite inflexible compared to the one used in Linux.
I have considered using explicit template instantiations which seems to reduce the problem, but still does not solve it as two different plug-ins might still instantiate the same template in the same way.
Questions:
Does anyone know of a way to achieve this in Windows? If you were allowed to modify existing classes, would this help?
Do you know of another approach to achieve this functionality in C++, while still preserving all the desired properties: no change to existing classes, works with templates, supports plug-ins and is cross-platform?
Update 1:
This project uses the Qt framework for many things including the plug-in infrastructure. Qt really helps with cross platform development. If you know of a Qt specific solution to the problem, that's also welcome.
Update 2:
n.m.'s comment made me realize that I only know about the problem in theory and have not actually tested it. So I did some testing in both Windows and Linux using the following definition:
template <class T>
class TypeIdTest {
public:
    virtual ~TypeIdTest() {}
    static int data;
};

template <class T> int TypeIdTest<T>::data;
This class is instantiated in two different shared libraries/DLLs with T=int. Both libraries are explicitly loaded at run-time. Here is what I found:
In Linux everything just works:
The two instantiations used the same vtable.
The object returned by typeid was at the same address.
Even the static data member was the same.
So the fact that the template was instantiated in multiple dynamically loaded shared libraries made absolutely no difference. The linker seems to simply use the first loaded instantiation and ignore the rest.
In Windows the two instantiations are 'somewhat' distinct:
The typeid for the different instances returns type_info objects at different addresses. These objects however are equal when tested with ==. The corresponding hash codes are also equal. It seems like on Windows equality between types is established using the type's name - which makes sense. So far so good.
However the vtables for the two instances were different. I'm not sure how much of a problem this is. In my tests I was able to use dynamic_cast to downcast an instance of TypeIdTest to a derived type across shared library boundaries.
What's also a problem is that each instantiation used its own copy of the static field data. That can cause a lot of problems and basically disallows static fields in template classes.
Overall, it seems that even in Windows things are not as bad as I thought, but I'm still reluctant to use this approach given that template instantiations still use distinct vtables and static storage. Does anyone know how to avoid this problem? I did not find any solution.
I think Boost Extension deals with exactly this problem domain:
http://boost-extension.redshoelace.com/docs/boost/extension/index.html
(in preparation for this library's submission to Boost for review)
In particular, you'd be interested in what the author wrote in this blog post, "Resource Management Across DLL Boundaries":
RTTI does not always function as expected across DLL boundaries. Check out the type_info classes to see how I deal with that.
I'm not sure whether his solution is actually robust, but he sure gave this some thought before. In fact, there are some samples using Boost Extension that you can give a go; you might want to use it.

Using C++ DLLs with different compiler versions

This question is related to "How to make consistent dll binaries across VS versions ?"
We have applications and DLLs built with VC6 and a new application built with VC9. The VC9-app has to use DLLs compiled with VC6, most of which are written in C and one in C++.
The C++ lib is problematic due to name decoration/mangling issues.
Compiling everything with VC9 is currently not an option as there appear to be some side effects. Resolving these would be quite time consuming.
I can modify the C++ library, however it must be compiled with VC6.
The C++ lib is essentially an OO-wrapper for another C library. The VC9-app uses some static functions as well as some non-static.
While the static functions can be handled with something like
// Header file
class DLL_API Foo
{
public:
    static int init();
};

extern "C"
{
    int DLL_API Foo_init();
}

// Implementation file
int Foo_init()
{
    return Foo::init();
}
it's not that easy with the non-static methods.
As I understand it, Chris Becke's suggestion of using a COM-like interface won't help me because the interface member names will still be decorated and thus inaccessible from a binary created with a different compiler. Am I right there?
Would the only solution be to write a C-style DLL interface using handles to the objects, or am I missing something?
In that case, I guess, it would probably be less effort to use the wrapped C library directly.
The biggest problem to consider when using a DLL compiled with a different C++ compiler than the calling EXE is memory allocation and object lifetime.
I'm assuming that you can get past the name mangling (and calling convention), which isn't difficult if you use a compiler with compatible mangling (I think VC6 is broadly compatible with VS2008), or if you use extern "C".
Where you'll run into problems is when you allocate something using new (or malloc) from the DLL, and then you return this to the caller. The caller's delete (or free) will attempt to free the object from a different heap. This will go horribly wrong.
You can either do a COM-style IFoo::Release thing, or a MyDllFree() thing. Both of these, because they call back into the DLL, will use the correct implementation of delete (or free()), so they'll delete the correct object.
Or, you can make sure that you use LocalAlloc (for example), so that the EXE and the DLL are using the same heap.
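A sketch of that last option, with both modules going through the Win32 LocalAlloc/LocalFree pair instead of their own CRT heaps (the exported function is made up):
#include <windows.h>

// Inside the DLL: allocate with LocalAlloc (error handling omitted) ...
extern "C" __declspec(dllexport) wchar_t* Dll_CopyName()
{
    wchar_t* buffer = static_cast<wchar_t*>(LocalAlloc(LMEM_FIXED, 64 * sizeof(wchar_t)));
    // ... fill buffer ...
    return buffer;
}

// ... and in the EXE, release the very same block with LocalFree(buffer);
// it then never matters which CRT either side was linked against.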
Interface member names will not be decorated -- they're just offsets in a vtable. You can define an interface (using a C struct, rather than a COM "interface") in a header file, thusly:
struct IFoo {
    virtual int Init() = 0;
};
Then, you can export a function from the DLL, with no mangling:
class CFoo : public IFoo { /* ... */ };
extern "C" IFoo * __stdcall GetFoo() { return new CFoo(); }
This will work fine, provided that you're using a compiler that generates compatible vtables. Microsoft C++ has generated the same format vtable since (at least, I think) MSVC6.1 for DOS, where the vtable is a simple list of pointers to functions (with thunking in the multiple-inheritance case). GNU C++ (if I recall correctly) generates vtables with function pointers and relative offsets. These are not compatible with each other.
Well, I think Chris Becke's suggestion is just fine. I would not use Roger's first solution, which uses an interface in name only and, as he mentions, can run into problems of incompatible compiler-handling of abstract classes and virtual methods. Roger points to the attractive COM-consistent case in his follow-on.
The pain point: you need to learn to make COM interface requests and deal properly with IUnknown, relying on at least IUnknown::AddRef and IUnknown::Release. If the implementations of interfaces can support more than one interface, or if methods can also return interfaces, you may also need to become comfortable with IUnknown::QueryInterface.
Here's the key idea. All of the programs that use the implementation of the interface (but don't implement it) use a common #include "*.h" file that defines the interface as a struct (C) or a C/C++ class (VC++) or struct (non-VC++ but C++). The *.h file automatically adapts appropriately depending on whether you are compiling a C language program or a C++ language program. You don't have to know about that part simply to use the *.h file. What the *.h file does is define the interface struct or type, let's say IFoo, with its virtual member functions (and only functions, no direct visibility to data members in this approach).
The header file is constructed to honor the COM binary standard in a way that works for C and that works for C++ regardless of the C++ compiler that is used. (The Java JNI folk figured this one out.) This means that it works between separately-compiled modules of any origin so long as a struct consisting entirely of function-entry pointers (a vtable) is mapped to memory the same by all of them (so they have to be all x86 32-bit, or all x64, for example).
In the DLL that implements the COM interface via a wrapper class of some sort, you only need a factory entry point. Something like an
extern "C" HRESULT MkIFooImplementation(void **ppv);
which returns an HRESULT (you'll need to learn about those too) and will also store the IFoo interface pointer in the location you provide for receiving it. (I am skimming and there are more careful details that you'll need here. Don't trust my syntax.) The actual function prototype that you use for this is also declared in the *.h file.
The point is that the factory entry, which is always an undecorated extern "C" function, does all of the necessary wrapper class creation and then delivers an IFoo interface pointer to the location that you specify. This means that all memory management for creation of the class, and all memory management for finalizing it, etc., will happen in the DLL where you build the wrapper. This is the only place where you have to deal with those details.
When you get an OK result from the factory function, you have been issued an interface pointer and it has already been reserved for you (there is an implicit IFoo::AddRef operation already performed on behalf of the interface pointer you were delivered).
When you are done with the interface, you release it with a call on the IFoo::Release method of the interface. It is the final Release implementation (in case you made more AddRef'd copies) that will tear down the class and its interface support in the factory DLL. This is what gets you correct reliance on consistent dynamic storage allocation and release behind the interface, whether or not the DLL containing the factory function uses the same libraries as the calling code.
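From the caller's side, the whole flow described above boils down to something like this (a sketch; MkIFooImplementation and IFoo are the hypothetical names used earlier, and HRESULT/SUCCEEDED come from <windows.h>):
IFoo* foo = 0;
HRESULT hr = MkIFooImplementation(reinterpret_cast<void**>(&foo));
if (SUCCEEDED(hr))
{
    // Use the interface; the factory has already done the initial AddRef for us.
    foo->Init();
    foo->Release();   // the final Release tears the object down inside the factory DLL
}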
You should probably implement IUnknown::QueryInterface (as method IFoo::QueryInterface) too, even if it always fails. If you want to be more sophisticated in using the COM binary interface model as you gain more experience, you can learn to provide full QueryInterface implementations.
This is probably too much information, but I wanted to point out that a lot of the problems you are facing about heterogeneous implementations of DLLs are resolved in the definition of the COM binary interface and even if you don't need all of it, the fact that it provides worked solutions is valuable. In my experience, once you get the hang of this, you will never forget how powerful this can be in C++ and C++ interop situations.
I haven't sketched the resources you might need to consult for examples and what you have to learn in order to make *.h files and to actually implement factory-function wrappers of the libraries you want to share. If you want to dig deeper, holler.
There are other things you need to consider too, such as which run-times are being used by the various libraries. If no objects are being shared that's fine, but that seems quite unlikely at first glance.
Chris Becke's suggestions are pretty accurate - using an actual COM interface may help you get the binary compatibility you need. Your mileage may vary :)
Not fun, man. You are in for a lot of frustration. You should probably give this:
"Would the only solution be to write a C-style DLL interface using handles to the objects, or am I missing something? In that case, I guess, it would probably be less effort to use the wrapped C library directly."
a really close look. Good luck.