I have VS2010 and I need to build an application. I also have a .dll (with its .lib and .h) that was built with VS2005. This library depends on log4cxx.dll (I built it with 2010 and also downloaded the 2005 binary).
When I call a library interface method that returns a reference to a built object, it throws an access violation exception. I can't build my app with another VS version, and I've already tried changing my app's runtime library to Multithreaded Debug.
It is likely that the object you are getting has a different memory layout.
If you are passing a C++ object across runtime boundaries, you must make sure the receiving side expects the same layout. For example, the VS2005 compiler may have laid out or padded its fields one way and VS2010 another, or one of the classes you use (e.g. std::string) may have changed between versions. Read up on how objects are returned from COM methods.
There is also the problem of an object being allocated in one runtime and deallocated in another...
As a solution you can try installing VS2005, but there is no guarantee you will end up with the same layout.
Recently I have been using the Assimp library for my hobby 3D graphics engine project in order to load 3D models.
The library comes in the form of a DLL accompanied by an import library (LIB) and an EXP file (only when building from the latest sources; I understand it holds information about the exported functions, similar to the LIB).
Now, the surprise part, which made me ask this question, is the following:
I was able to use the C++ interface of two different versions of the library correctly, without any build/link errors or run-time errors:
The pre-built library, release 3.1.1, which was built against and depends on an older runtime (MSVCP110.DLL, MSVCR110.DLL) (I later decided to switch to my own build, since there was a linking error in that binary build)
My own source build made with the VS 2013 compiler, depending on MSVCP120.DLL and VCRUNTIME120.DLL
I was able to use both of these library binaries with my executable binary, when built with either:
VS 2013 compiler
VS 2015 compiler
My surprise of the success of using the library like described above, without any error, is caused by:
reading all over the internet about how C++ libraries and interfaces are not binary (ABI) compatible between different compiler versions, and thus cannot be used together with binaries built by other compilers. Exceptions: usage through C interfaces, which maintain ABI compatibility, and COM libraries (COM interfaces were designed to overcome exactly this issue).
Using binaries built with different compilers can raise problems because of function name mangling (maybe solved by the import library?!) and because dynamic memory may be allocated and freed by different versions of the allocators/destructors.
Can anyone provide some insight on this use case, on why I successfully managed to use a VS 2013 built dynamic library together with a VS 2015 toolset application without any of the above described problems? Also, the same applies when using the pre-built binary both with a VS 2013 build and a VS 2015 build of my 3D engine application.
Is it an isolated case or does the Assimp library itself take care of the problems and is implemented in such a way to ensure compatibility?
The actual usage of the library was pretty straightforward: I used a local stack variable of an Importer object (class) and from there on manipulated and read information from a bunch of structs; no other classes (I think)
As long as all the memory used in the DLL is allocated, maintained, and freed inside the DLL, and you don't use standard library container objects across the interface boundary, you are completely safe to do it.
Even the use of std::string isn't allowed across DLL boundaries if the modules were built with different compilers, or even with different CRT settings.
In fact, using plain interfaces (pure virtual classes) is safe across all VS C++ compilers, as is using extern "C"; see the sketch below.
So if the structures you exchange are just PODs or contain only plain data members, and as long as allocation and destruction are both done inside the DLL, you are free to mix and use DLLs built with different compilers.
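For illustration, here is a minimal sketch of that pattern; the names (ILogger, CreateLogger, ConsoleLogger) are invented for the example and are not part of any real API:

// Shared header, used by both the DLL and the host application.
// Only a pure virtual class and an extern "C" factory cross the boundary.
struct ILogger {
    virtual void Log(const char* message) = 0;
    virtual void Release() = 0;  // destruction happens inside the DLL
protected:
    ~ILogger() {}                // host calls Release(), never delete
};

extern "C" __declspec(dllexport) ILogger* CreateLogger();

// Inside the DLL:
#include <cstdio>

class ConsoleLogger : public ILogger {
public:
    virtual void Log(const char* message) { std::printf("%s\n", message); }
    virtual void Release() { delete this; }  // same CRT that did the new
};

ILogger* CreateLogger() { return new ConsoleLogger(); }

The host only ever sees a vtable pointer and calls through it, so no CRT objects or class layouts beyond the interface itself are shared.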
The best examples are the DLLs of the Windows OS itself. They use a clearly defined interface of simple data structures. Memory management and allocation (for windows, menus, GDI objects) is done via opaque handles. The DLLs are just black boxes for you.
Hope that covers all the points.
The OpenSSL 1.0.2g package's INSTALL.W32 documentation has the following warning text:
One final comment about compiling applications linked to the OpenSSL library.
If you don't use the multithreaded DLL runtime library (/MD option) your
program will almost certainly crash because malloc gets confused -- the
OpenSSL DLLs are statically linked to one version, the application must
not use a different one.
I don't fully understand this or the repercussions of it. Are they saying that statically linking libeay32mt.lib is not supported?
We are experiencing random crashes in our app, and the stack trace sometimes points to free calls in OpenSSL functions. Would that be the expected symptom of what this warning refers to?
C and C++ memory management is implemented in the CRT (C Runtime Library). Each physical copy of the CRT mapped into a process (either statically compiled into a module, or a referenced DLL) uses a distinct heap. Memory allocations and deallocations must be performed on the same heap, i.e. the same physical CRT copy (see Potential Errors Passing CRT Objects Across DLL Boundaries for more details).
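As a quick illustration of the failure mode (the module and function names here are hypothetical):

#include <cstdlib>  // malloc / free

// Inside some_library.dll, statically linked against CRT copy "A":
extern "C" __declspec(dllexport) char* MakeBuffer()
{
    return (char*)malloc(64);  // allocated on CRT A's heap
}

// Inside the application, statically linked against CRT copy "B":
int main()
{
    char* p = MakeBuffer();
    free(p);  // frees on CRT B's heap: heap corruption or a crash
    return 0;
}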
In your specific scenario you need to make sure to do the following:
Dynamically link your application against the OpenSSL DLLs.
Dynamically link your application against the CRT.
Verify that both your application and the OpenSSL DLLs link against identical versions of the CRT (version and configuration).
It is possible to statically link against OpenSSL, but only if you can ensure that your final binary contains a single CRT implementation and doesn't (directly or indirectly) dynamically link against the CRT as well. This is difficult to maintain, and it breaks as soon as you link against a library that doesn't come in a static variant.
I am confused about the binary compatibility of compiled libraries between VS2010 and VS2012. I would like to migrate to VS2012, however many closed-source binary-only SDKs are only out for VS2010, for example SDKs for interfacing hardware devices.
Traditionally, as far as I know, Visual Studio was extremely picky about compiler versions, and in VS2010 you could not link against libraries which had been compiled for VS2008.
The reason I'm confused now is that I'm in the process of migrating to VS2012, and I have tried a few projects, and to my great surprise, many of them work across versions with no problems.
Note: I'm not talking about the v100 platform toolset mode, which as far as I know is just the VS2012 GUI running over the VS2010 compiler toolset.
I'm talking about opening a VS2010 solution in VS2012, click update, and seeing what happens.
When linking to some bigger libraries, like Boost, the compile didn't work, as there are checks for the compiler version which raise an error and abort compilation. Some other libraries just fail at link time on missing functions. This is the behaviour I was expecting.
On the other hand, many libraries work fine with no errors or additional warnings.
How is it possible? Was VS2012 made in a special way to keep binary compatibility with VS2010 libraries? Does it depend on dynamic vs. static linking?
And the most important question: even though there is no error raised at compile time, can I trust the compiler that there won't be any bugs when linking a VS2012 project to VS2010 compiled libraries?
“Many libraries work fine. How is it possible?”
1) The lib is compiled to use the static RTL, so the code won't pull in a second RTL DLL that clashes.
2) The code only calls functions (and uses structures, etc.) that are either defined completely in the header files, so they don't cause a linker error, or still present in the new RTL, so they don't cause a linker error either.
3) It doesn't call anything involving structures that have changed layout or meaning, so it doesn't crash.
#3 is the one to worry about. You can inspect the library's imports (e.g. with dumpbin /imports) to see what it uses and make a complete list, but there is no documentation or guarantee as to which ones are compatible. Just because it seems to run doesn't mean it doesn't have a latent bug.
There is also the possibility that
4) the driver SDK, or other code that's fairly low-level, was written to avoid using standard library calls completely.
Also (not your situation, I think): DLLs can be isolated, have their own RTL, and not pass things (like memory that is allocated in one place and freed in another) back and forth between the different regimes. In-proc COM servers work this way. DLLs can do this in general if you're careful about what you pass and return and what you do with things like pointers. Crypto++, for example, is initialized with the memory routines from the enclosing program and so doesn't expose malloc'ed memory from the RTL version it was compiled with; a sketch of that pattern follows.
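A hedged sketch of that initialization pattern (these names are invented, and this is not Crypto++'s actual API):

#include <cstddef>

typedef void* (*AllocFn)(std::size_t);
typedef void  (*FreeFn)(void*);

// Set once by the host program; from then on the DLL allocates
// everything it hands back to the host through the host's routines.
static AllocFn g_alloc = 0;
static FreeFn  g_free  = 0;

extern "C" __declspec(dllexport) void InitMemoryRoutines(AllocFn a, FreeFn f)
{
    g_alloc = a;
    g_free  = f;
}

extern "C" __declspec(dllexport) void* MakeBuffer(std::size_t n)
{
    return g_alloc(n);   // allocated with the host's allocator
}

extern "C" __declspec(dllexport) void DestroyBuffer(void* p)
{
    g_free(p);           // freed with the matching routine
}

// The host calls this early in startup, e.g.:
//     InitMemoryRoutines(&std::malloc, &std::free);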
I'm working on a cross-platform FireBreath plugin which is crashing on Windows. I use a static library containing classes which reference boost.asio. When I link this library against the plugin DLL, I observe crashes when interacting with the io_service subsystem (i.e., during socket construction). When I link the static library against an ordinary executable, the problem does not occur. When I compile the contents of the static library directly into the plugin DLL project, the crash does not occur either.
I have gone to great lengths to ensure that all aspects of my build environment on Windows are consistent (build modes, version of Visual Studio, etc.). In addition, I've firewalled the boost.asio headers, so that the plugin DLL code has no visibility of the boost.asio subsystem (to no effect, unfortunately, on VS2008 and VS2010). As far as I can tell, I've done everything possible to ensure that the build environment is well behaved, but the problem persists.
Can the community offer any advice on potential risks or approaches which could expose or solve the problem?
Two things that are dramatically different between linking a static library vs loading a DLL:
global initialization: In a DLL, they all run. In a static library, the linker only brings in an object file if it satisfies some unresolved external, so systems that rely on components registering themselves via the constructor or initialization expression of a global object fail (see the sketch after this list).
shared CRT: In a static library, all calls to the Standard library are resolved during link time, and the main application and library function all call the same copy of the Standard library. In a DLL, you risk having two copies of the Standard library, which can be ok if you're careful never to share any Standard library objects (like FILE*, std::string, or even malloc/free pairs) between library and application.
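To illustrate the first point, here is a generic sketch of the self-registration pattern (names invented; this is not what boost.asio does internally):

#include <map>
#include <string>

typedef void (*Handler)();

// Central registry that components add themselves to during startup.
static std::map<std::string, Handler>& registry()
{
    static std::map<std::string, Handler> m;
    return m;
}

struct AutoRegister {
    AutoRegister(const char* name, Handler fn) { registry()[name] = fn; }
};

static void myHandler() { /* ... */ }

// This global's only job is its constructor's side effect. In a DLL it
// always runs; in a static library nothing references this object file,
// so the linker drops it and the registration silently never happens.
static AutoRegister g_reg("myHandler", &myHandler);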
The second thing is most likely what's biting you. There's a lazy way to fix it: use the Standard library DLL. And there's a better way: figure out memory and object lifetime, and don't try to allocate on one side and free on the other, or share C++ class layout across the boundary. Virtual functions work fine across the boundary.
The advantage of the "better" way is that plugins can be built in any version of the compiler, which becomes a big deal for maintenance later on in the development cycle.
In the FireBreath plugin prep script, try turning on the WITH_DYNAMIC_MSVC_RUNTIME flag.
Due to how Microsoft implements the heap in their non-DLL versions of the runtime, returning a C++ object from a DLL can cause problems:
// dll.h
#include <string>

// Returns a std::string by value across the DLL boundary.
DLL_EXPORT std::string somefunc();
and:
// app.cpp - not part of the DLL but in the main executable
#include <string>
#include "dll.h"

void doit()
{
    // The string's buffer is allocated inside the DLL, but it is
    // destroyed here in the executable when str goes out of scope.
    std::string str(somefunc());
}
The above code runs fine provided both the DLL and the EXE are built with the Multi-threaded DLL runtime library.
But if the DLL and EXE are built without the DLL runtime library (either the single or multi-threaded versions), the code above fails (with a debug runtime, the code aborts immediately due to the assertion _CrtIsValidHeapPointer(pUserData) failing; with a non-debug runtime the heap gets corrupted and the program eventually fails elsewhere).
Two questions:
Is there a way to solve this other than requiring that all code use the DLL runtime?
For people who distribute their libraries to third parties, how do you handle this? Do you not use C++ objects in your API? Do you require users of your library to use the DLL runtime? Something else?
Is there a way to solve this other than requiring that all code use the DLL runtime?
Not that I know of.
For people who distribute their libraries to third parties, how do you handle this? Do you not use C++ objects in your API? Do you require users of your library to use the DLL runtime? Something else?
In the past I distributed an SDK with DLLs, but it was COM based. With COM, all the marshalling of parameters and IPC is done for you automatically. Users can also integrate with any language that way.
Your code has two potential problems. You addressed the first one, the CRT runtime. You have another problem here: std::string could change between VC++ versions. In fact, it did change in the past.
The safe way to deal with this is to export only C basic types, and to export both create and release functions from the DLL. Instead of exporting a std::string, export a pointer.
extern "C" __declspec(dllexport) void* createObject()
{
    // The string is constructed inside the DLL...
    std::string* p = __impl_createObject();
    return (void*)p;
}
extern "C" __declspec(dllexport) void releasePSTRING(void* pObj)
{
    // ...and destroyed inside the DLL, by the same CRT that created it.
    delete ((std::string*)(pObj));
}
There is a way to deal with this, but it's somewhat non-trivial. Like most of the rest of the library, std::string doesn't allocate memory directly with new -- instead, it uses an allocator (std::allocator<char>, by default).
You can provide your own allocator that uses your own heap allocation routines that are common to the DLL and the executable, such as by using HeapAlloc to obtain memory, and suballocate blocks from there.
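A minimal sketch of such an allocator, drawing from the default process heap (GetProcessHeap), which is shared by every module in the process regardless of CRT. For brevity this uses the C++11 minimal-allocator shape; older compilers need the full C++03 allocator boilerplate (rebind and friends):

#include <windows.h>
#include <cstddef>
#include <new>
#include <string>

// All memory comes from the one heap both modules can see.
template <class T>
struct ProcessHeapAllocator {
    typedef T value_type;

    ProcessHeapAllocator() {}
    template <class U> ProcessHeapAllocator(const ProcessHeapAllocator<U>&) {}

    T* allocate(std::size_t n)
    {
        void* p = HeapAlloc(GetProcessHeap(), 0, n * sizeof(T));
        if (!p) throw std::bad_alloc();
        return static_cast<T*>(p);
    }

    void deallocate(T* p, std::size_t)
    {
        HeapFree(GetProcessHeap(), 0, p);
    }
};

template <class T, class U>
bool operator==(const ProcessHeapAllocator<T>&, const ProcessHeapAllocator<U>&) { return true; }
template <class T, class U>
bool operator!=(const ProcessHeapAllocator<T>&, const ProcessHeapAllocator<U>&) { return false; }

// A string type whose storage always lives on the shared process heap:
typedef std::basic_string<char, std::char_traits<char>, ProcessHeapAllocator<char> > shared_string;

Note that this only fixes the heap mismatch; the layout of std::basic_string itself can still differ between compiler versions, as other answers here point out.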
If you have a DLL that you want to distribute and you don't want to bind your callers to a specific version to the C-Runtime, do any of the following:
I. Link the DLL to the static version of the C-Runtime library. From the Visual Studio project properties page, select Configuration Properties -> C/C++ -> Code Generation. There is an option for selecting the "Runtime Library": select "Multi-threaded" or "Multi-threaded Debug" instead of the DLL versions. (The command-line equivalent is /MT or /MTd.)
There are a couple of different drawbacks to this approach:
a. If Microsoft ever releases a security patch for the CRT, your shipped components may be vulnerable until you recompile and redistribute your binary.
b. Heap pointers allocated by "malloc" or "new" in the DLL cannot be "free"d or "delete"d by the EXE or another binary; you'll crash otherwise. The same holds true for FILE handles created by fopen: you can't call fopen in the DLL and expect the EXE to be able to fclose it. Again, it crashes if you do. You will need to build the interface to your DLL to be resilient to all of these issues. For starters, a function that returns an instance of std::string is likely going to be an issue. Provide functions exported by your DLL to handle the releasing of resources as needed.
Other options:
II. Ship with no C-runtime dependency. This is a bit harder: you first have to remove all calls to the CRT from your code, provide some stub functions to get the DLL to link, and specify the "no default libraries" (/NODEFAULTLIB) linking option. It can be done.
III. C++ classes can be exported cleanly from a DLL by using COM interface pointers. You'll still need to address the issues in I.a above, but ATL classes are a great way to get the overhead of COM out of the way.
The simple fact here is that, Microsoft's implementation aside, C++ is NOT an ABI. You cannot export C++ objects, on any platform, from a dynamic module and expect them to work with a different compiler or language.
Exporting C++ classes from DLLs is a largely pointless exercise: because of name mangling, and the lack of support in C++ for dynamically loaded classes, the DLLs have to be loaded statically, so you lose the biggest benefit of splitting a project into DLLs, the ability to load functionality only as needed.