I have a shared DLL that was last compiled in 1997 using Visual Studio 6. We're now using this application and shared DLL on MS Server 2008 and it seems less stable.
I'm assuming that if I recompiled it using VS 2005 or newer, it would also pick up improvements in the bundled Microsoft libraries, right? Is it common to have to recompile to get MS bug fixes?
Is there a general practice when it comes to using old compiled code in newer environments?
I can't really speak from a MS/VS-specific vantage point, but my experiences with other compilers have been the following:
1) The ABI (i.e. calling conventions or layout of class information) may change between compilers or even compiler versions. So you may get weird crashes if you compile the app and the library with different compiler versions. (That's why there are things like COM or NSObject -- they define a stable way for different modules to talk to each other.)
2) Some OSes change their behaviour depending on the compiler version that generated a binary, or the system libraries it was linked against. That way they can fix bugs without breaking workarounds. If you use a newer compiler or build again with the newer libraries, it is assumed that you test again, so they expect you to notice your workaround is no longer needed and remove it. (This usually applies to the entire application, though, so an older library in a newer app generally gets the new behavior, and its workarounds have already broken.)
3) The new compiler may be better. It may have a better optimizer and generate faster code, it may have bugs fixed, it may support new CPUs.
4) A new compiler/new libraries may have newer versions of templates and other stub, glue and library code that gets compiled into your application (e.g. C++ template classes). This may be a variant of #3, or of #1 above. E.g. if you have an older implementation of a std::vector that you pass to the newer app, it might crash trying to use parts of it that have changed. Or it might just be less efficient or have fewer features (see the sketch below).
So in general it's a good idea to use a new compiler, but it also means you should be careful and test it thoroughly to make sure you don't miss any changes.
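To make point 4 concrete, here is a purely hypothetical illustration (these are not real MSVC internals): two versions of a library describe "the same" container with different layouts, so code built against one version misreads objects created by the other.

```cpp
// Hypothetical layouts, not real library internals: the "same" vector as two
// different library versions might define it.
#include <cstddef>
#include <cstdio>

namespace v1 {
    struct Vec { int* data; std::size_t size; std::size_t capacity; };
}
namespace v2 {
    struct Vec {
        void* debug_proxy;   // member added in a later, debug-checked version
        int* data;
        std::size_t size;
        std::size_t capacity;
    };
}

int main() {
    // Modules built against v1 and v2 disagree on both the object's size and
    // on where 'data' lives, so passing such an object across a DLL boundary
    // between them misreads memory.
    std::printf("sizeof:          v1=%zu  v2=%zu\n", sizeof(v1::Vec), sizeof(v2::Vec));
    std::printf("offsetof(data):  v1=%zu  v2=%zu\n",
                offsetof(v1::Vec, data), offsetof(v2::Vec, data));
}
```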
You tagged this with "C++" and "MFC". If you actually have a C++ interface, you really should compile the DLL with the same compiler that you build the clients with. MS doesn't keep the C++ ABI completely stable across compiler versions (especially the standard library), so incompatibilities could lead to subtle errors.
In addition, newer compilers are generally better at optimizing.
If the old DLL seems more stable, in my experience that's only because VC6 obscures bugs better. (Have you enabled the extensive runtime checks available in the new version?)
The main benefit is that you can debug the DLL seamlessly while interacting with the main application. There are other improvements you won't want to miss, e.g. CTime being able to hold dates past the year 2037.
We have a lot of prebuilt libraries (via CMake mostly), built using Visual Studio 2017 (v141). When we try to use these against a project using Visual Studio 2019 (v142) we see errors like:
Error C1047: The object or library file 'boost_chrono-vc141-mt-gd-x32-1_68.lib' was created by a different version of the compiler than other objects...
On the other hand, we also use pre-compiled .libs from 3rd-party vendors which are over a decade old and these have worked just fine when linked against our codebase.
What determines whether a library needs to be rebuilt, and why can some ancient libraries still be used when others that are only one version behind cannot?
ABI incompatibilities could cause some issues. Even though the C++ standard specifies classes such as std::vector and std::mutex and requires them to have certain public/protected members, how these classes are implemented is left to the implementation.
In practice, that means nothing prevents the GNU standard library from laying out its data fields in a different order than the LLVM standard library, or from having completely different private members.
As such, if you call a function from a library built with LLVM's libc++ and pass it a GNU libstdc++ vector, you get undefined behavior. Even on the same standard library, different versions could have changed something, and that could be a problem.
To avoid these issues, popular C++ libraries only use C data structures in their ABIs since (at least for now) every compiler produces the same memory layout for a char*, an int or a struct.
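As a rough sketch of what "only C data structures in the ABI" looks like in practice (the names below are invented for illustration, not from any real library): the exported surface uses extern "C" and plain pointers/integers, while all C++ objects stay inside the library.

```cpp
// imagelib.h -- hypothetical public header: only C types cross the boundary.
#include <stddef.h>

#ifdef __cplusplus
extern "C" {
#endif

typedef struct image image;                  /* opaque handle */

image* image_load(const char* path);         /* allocated inside the library  */
void   image_free(image* img);               /* freed by the same library/CRT */
int    image_width(const image* img);
int    image_height(const image* img);

/* Pixel data is exposed as a raw pointer + size rather than a std::vector,
   so the caller's standard library version is irrelevant. */
const unsigned char* image_pixels(const image* img, size_t* out_size);

#ifdef __cplusplus
}
#endif
```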
These ABI issues can appear in two places:
When you use dynamic libraries (.so and .dll files) your compiler probably won't say anything and you'll get undefined behavior when you call a function of the library using incompatible C++ objects.
When you use static libraries (.a and .lib files) I'm not really sure: I'm guessing the linker could either print an error if it sees there's going to be a problem, or successfully produce some Frankenstein monster of a binary that will behave like the point above.
I will try to answer some integral parts, but be aware this answer could be incomplete. With more information from peers we may be able to construct a full answer!
The simplest kind of linking is linking against a C library. Since there is no concept of classes and overloaded function names, compiler creators are able to create entry points to functions by their plain name. This seems to be pretty much quasi-standardized; I myself haven't encountered a pure C library that wasn't at least linkable to my projects. You can select this behaviour in C++ code by prepending a function declaration with extern "C". (This also makes it easy to link against a library from C# code.) Here is a detailed explanation about extern "C". But as far as I am aware this behaviour is not standardized; it is just so simple that, it seems, there is only one sane solution.
Going into C++ we start to encounter repeating function, variable and struct names. Let's just talk about overloaded functions here. For that, compiler creators have to come up with some kind of mapping between void a(); void a(int x); void a(char x); ... and their respective library representations. Since this process is also not standardized (see this thread) and is far more complex than the one-to-one mapping of C, the ABIs of different compilers or even compiler versions can differ in any number of ways.
Now suppose two compilers (or linkers; I couldn't find a resource which specifies which one exactly is responsible for the mangling, but since this process is not standardized it could also be outsourced to Cthulhu) with different name-mangling schemes create the following function entry points (simplified):
compiler1
_a_
_a_int_
_a_char_
compiler2
_a_NULL_
_a_++INT++_
_a_++CHAR++_
Different linkers will not understand the output of your particular process; linker1 will try to search for _a_int_ in a library containing only _a_++INT++_. Since linkers can't use fuzzy string comparison (that could lead to an apocalypse, imho), it won't find your function in the library. Also don't be fooled by the simplicity of this example: for every feature like namespaces, classes, methods etc. there has to be a scheme for mapping a name to an entry point or memory structure.
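For comparison with the simplified names above, here is roughly what real toolchains produce; the mangled names in the comments are approximate and meant only as an illustration, and extern "C" is the escape hatch that yields one stable, unmangled name.

```cpp
// Approximate mangled names (actual output depends on the compiler):
void a();          // GCC/Clang (Itanium ABI): _Z1av    MSVC: ?a@@YAXXZ
void a(int x);     // GCC/Clang:               _Z1ai    MSVC: ?a@@YAXH@Z
void a(char x);    // GCC/Clang:               _Z1ac    MSVC: ?a@@YAXD@Z

// extern "C" disables mangling (and overloading), so the symbol is just "b"
// (possibly with a leading underscore), which is why C interfaces link
// reliably across compilers.
extern "C" void b(int x);
```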
Given your example, you are lucky: you use libraries from the same publisher, who added some logic to detect old libraries. Usually you will get something along the lines of <something> could not be resolved, or some other convoluted, irritating and/or unhelpful error message.
Some info and experience dump regarding Visual Studio and libraries in general:
In general the Visual C++ suite doesn't support cross-linking libs between different versions, but you could get lucky and it might work. Don't rely on it.
Since VC++ 2015 the ABI of the libraries is guaranteed by Microsoft to be compatible, as drescherjm commented: link to microsoft documentation
In general, when using libraries from different suites you should always be cautious, as n. 1.8e9-where's-my-share m. commented here (here is your share btw), about dependencies on other libraries and runtimes. Generally speaking, not having control over how libraries are built is a huge PITA.
Edit, addressing memory layout incompatibilities in addition to Tzig's answer: different name-mangling schemes seem to be partially intentional, to protect users against linking against incompatible libraries. This answer goes into detail about it. The relevant passage from the GCC docs:
G++ does not do name mangling in the same way as other C++ compilers. This means that object files compiled with one compiler cannot be used with another.
This effect is intentional [...].
Error C1047
This is caused by /GL (whole-program optimization) or /LTCG (link-time code generation).
These use information in the .obj to perform global optimizations. When present, VS looks at the compiler which generated the original .lib and, if the versions differ, emits the error. These compilation switches are for code from a single compiler and are not intended for cross-version usage.
The other builds which work don't have the switches, so they are compatible.
Visual Studio has started to use a new #pragma detect_mismatch.
This causes an old build to identify that it is incompatible with a new build, by detecting the version change.
Very old builds didn't have / support the pragma, so they had no checking.
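For reference, this is the mechanism in a nutshell. The pragma itself is real (the VC++ runtime headers use it for things like "_MSC_VER" and "_ITERATOR_DEBUG_LEVEL"); the library name and version value below are made up for the example.

```cpp
// mylib_abi.h -- hypothetical header shipped with a static library.
// Every .obj that includes it records the tag below; if two objects in the
// same link carry different values, the linker stops with error LNK2038
// instead of silently producing a mixed binary.
#pragma once
#ifdef _MSC_VER
#pragma detect_mismatch("mylib_abi_version", "2")
#endif
```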
When you build a lib, its dependencies are loaded and satisfied by the linker, but this is not a guarantee that the result works. The one-definition rule signs the developer up to a contract that, within a compiled binary, all implementations of the same-named function are the same. If they came from different compilers, that may not be true, and so the linker can choose any of them, causing latent bugs where mixtures of old and new code are linked into the binary.
If the definition or implementation of std::string has changed, it may still link, but the resulting code is flawed.
This new compiler check causes an early failure, which I thoroughly approve of.
I am confused about the binary compatibility of compiled libraries between VS2010 and VS2012. I would like to migrate to VS2012, however many closed-source binary-only SDKs are only out for VS2010, for example SDKs for interfacing hardware devices.
Traditionally, as far as I know, Visual Studio has been extremely picky about compiler versions, and in VS2010 you could not link against libraries which had been compiled for VS2008.
The reason I'm confused now is that I'm in the process of migrating to VS2012, I have tried a few projects, and to my biggest surprise, many of them work across versions with no problems.
Note: I'm not talking about the v100 mode, which as far as I know is just a VS2012 GUI over the VS2010 compiling engine.
I'm talking about opening a VS2010 solution in VS2012, click update, and seeing what happens.
When linking to some bigger libraries, like Boost, the compilation didn't work, as there are checks for the compiler version which raise an error and abort compilation. Some other libraries just fail at missing functions. This is the behaviour I was expecting.
On the other hand, many libraries work fine with no errors or additional warnings.
How is it possible? Was VS2012 made in a special way to keep binary compatibility with VS2010 libraries? Does it depend on dynamic vs. static linking?
And the most important question: even though there is no error raised at compile time, can I trust the compiler that there won't be any bugs when linking a VS2012 project to VS2010 compiled libraries?
“Many libraries work fine. How is it possible?”
1) the lib is compiled to use static RTL, so the code won't pull in a second RTL DLL that clashes.
2) the code only calls functions (and uses structures, etc.) that are completely defined in the header files, so it doesn't cause a linker error; or it calls functions that are still present in the new RTL, so it doesn't cause a linker error,
3) doesn't call anything with structures that have changed layout or meaning so it doesn't crash.
#3 is the one to worry about. You can use imports to see what it uses and make a complete list, but there is no documentation or guarantee as to which ones are compatible. Just because it seems to run doesn't mean it doesn't have a latent bug.
There is also the possibility of
4) the driver SDK or other code that's fairly low level was written to avoid using standard library calls completely.
Also (not your situation, I think) DLLs can be isolated, have their own RTL, and not pass stuff back and forth (like memory allocation and freeing) between different regimes. In-proc COM servers work this way. DLLs can do this in general if you're careful about what you pass and return and what you do with things like pointers. Crypto++, for example, is initialized with the memory routines from the enclosing program and doesn't expose malloc'ed memory from the RTL version it was compiled with.
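A hedged sketch of that last idea, with invented names (this is not Crypto++'s actual API): the host hands its own allocation routines to the DLL at initialization, so no memory ever has to be freed by "the other side's" runtime.

```cpp
// plugin.h -- hypothetical interface: the host injects its allocator once and
// the DLL never hands out memory from its own CRT heap.
#include <stddef.h>

extern "C" {
    typedef void* (*plugin_alloc_fn)(size_t);
    typedef void  (*plugin_free_fn)(void*);

    /* The host calls this first, typically passing its own malloc/free. */
    void plugin_init(plugin_alloc_fn alloc_cb, plugin_free_fn free_cb);

    /* Any buffer returned to the host was obtained via alloc_cb, so the host
       releases it with the matching free_cb -- never with the DLL's CRT. */
    char* plugin_make_greeting(const char* name);
    void  plugin_release(char* buffer);
}
```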
The question says it all.
I understand that VC11 is currently only in beta, but what I'm asking is:
experience with trying to link with a closed source (widely used if possible) library compiled with vc10
specifications from Microsoft saying explicitly whether or not VC11 will be able to link with VC10 libraries.
I'm talking about C++ case only.
You may want to read this answer for the case of dynamic linking.
Regarding static linking, I think you can't safely link C++ libraries written with VCx with code compiled with VCy. For example, STL containers implementations change from version to version (and even within the same version, there are changes between debug and release mode, and settings like _HAS_ITERATOR_DEBUGGING, etc.).
Quoting VC++ STL maintainer:
The STL never has and never will guarantee binary compatibility
between different major versions. We're enforcing this with linker
errors when mixing object files/static libraries compiled with
different major versions that are both VC10+ [...]
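A small probe of the _HAS_ITERATOR_DEBUGGING point above (assuming MSVC): the very same std::vector<int> can have a different size, and therefore a different layout, depending on how the translation unit is built, which is exactly why debug and release objects must not be mixed either.

```cpp
// Build this twice with MSVC -- once with /MDd (debug) and once with /MD
// (release) -- and compare the output; the sizes typically differ because the
// debug configuration adds iterator-checking bookkeeping to the container.
#include <cstdio>
#include <vector>

int main() {
#ifdef _ITERATOR_DEBUG_LEVEL
    std::printf("_ITERATOR_DEBUG_LEVEL = %d\n", _ITERATOR_DEBUG_LEVEL);
#endif
    std::printf("sizeof(std::vector<int>) = %zu\n", sizeof(std::vector<int>));
}
```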
That's a resounding no! Every major release of VS has a new version of the dynamic CRT; the names are msvcr90.dll for VS2008, msvcr100.dll for VS2010, msvcr110.dll for VS11.
Using the dynamic CRT (/MD compile option) is important when you return C++ objects like std::string from an exported function, or otherwise return any pointer that needs to be deleted by the client code. That can only work properly when the client code is using the exact same version of the CRT as the DLL. Implicit in this is that it won't be the case when these chunks of code each have their own dependency on a different msvcrXXX.dll version: they'll inevitably have incompatible CRT versions that don't share the same heap allocator.
You can write DLLs that are safe to use with any CRT version but that requires carefully crafting the API so that these dependencies do not exist. The COM Automation model is an example of that.
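A sketch of the contrast, with invented names: the first export has exactly the cross-CRT dependency described above, while the second avoids it by keeping all ownership on the caller's side and using only plain C types in the signature.

```cpp
#include <string>

// Risky: the std::string is allocated by the DLL's CRT but destroyed by the
// client's CRT -- only safe when both sides use the exact same CRT DLL.
__declspec(dllexport) std::string get_version_risky();

// Safer: the caller owns the buffer, so no allocation ever crosses the DLL
// boundary; returns the number of characters written.
extern "C" __declspec(dllexport)
int get_version_safe(char* buffer, int buffer_size);
```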
For dynamic libraries, there should be no problem, as they follow well-defined ABIs. You can link to DLLs from any compiler, any time.
Static libraries are trickier. As far as I know, Microsoft has never guaranteed cross-compiler compatibility for those. In particular, features such as link-time code generation have been known to break compatibility between earlier releases. .lib files do not have a single well-defined format like DLLs do.
It might work, because Microsoft rarely breaks compatibility unless they have to, but as far as I know, it is not guaranteed.
Of course, if the actual functions and types exposed by the DLLs don't match up, you'll run into problems.
In VC11, the sizes of almost all standard library data structures have been changed (Microsoft finally employs the empty base class optimization, effectively reducing the size of all containers which use the default allocator), so trying to pass a std::string from a DLL compiled with VC10 into a module compiled by VC11 will certainly break.
I don't see any reason why they would be incompatible, no matter what C++ compiler you used to produce the LIB files, as long as they follow the format specification. You could check this question if you are interested in the details of the format.
Our project uses VC++9 with VS2008, and we want to make the switch to VC++10 with VS2010 to use the new features. Unfortunately, some of our dependencies were built with VC++9, and recompiling them with VC++10 is not possible at the moment for various reasons. Since we really want to make the switch, is there a way to simply link with those libraries, or is there no compatibility between VC++10 and VC++9 binaries?
EDIT: The actual dependencies are BWAPI and BWTA. In the case of BWAPI, it's not a problem, but BWTA depends on CGAL, and that's what's giving us trouble. Trying to link with it yields a bunch of linking errors.
In general you are out of luck unless the dependencies are COM modules or dlls that export only "pure" C functions.
Visual Studio releases are allowed to break ABI compatibility. This means the exported and internal signature of C++ classes is different, and passing for example a std::string from a binary compiled with one version to a binary compiled with a different version might not have the expected result. In short: do not rely on this working. If it does, you're lucky, but in "undefined behavior" territory at the runtime level. Just fix your code to build with VS2010. It's probably broken to start with.
Well, in the case of a 3rd-party lib that you cannot change, the typical answer is to wrap it with a simple DLL that is built with VC2008 and calls the 3rd party for you. You then have control over what is exposed, so you can fall back to a 'standardised' mechanism that works with both linkers. This is almost always plain C function calls, as C is very standardised.
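A sketch of that wrapper idea with invented names (the vendor API below is a stand-in, not a real library): the shim is the only module built with the older compiler, it uses the vendor's C++ classes internally, and it re-exports a flat C interface that the newer-compiler application can link against.

```cpp
// shim.cpp -- built with the same compiler as the 3rd-party lib (VC2008 here).
// In reality you would #include the vendor's C++ header; this tiny namespace
// stands in for it so the example is self-contained.
namespace vendor {
    struct Analyzer {
        int score(const char* s) const {
            int n = 0;
            while (s && s[n] != '\0') ++n;   // pretend this is real analysis
            return n;
        }
    };
}

// The flat C surface the VC2010 application links against: the vendor's C++
// objects live and die entirely inside the shim, and only plain C types cross
// the DLL boundary.
extern "C" __declspec(dllexport) int shim_score(const char* text)
{
    vendor::Analyzer analyzer;
    return analyzer.score(text);
}
```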
The problem is MS changing the ABI of compiled C++ and, I guess, the standards committee not providing a standard way of calling C++ binaries.
Looking at CGAL, this doesn't seem to be a good answer for you; the best you can do in such cases is to contact CGAL and wait for a rebuilt binary.
But I just checked - it's open source, so rebuild it yourself. Not only that, it already supports VS2010, so the rebuild should be easy.
I know that if I link my c++ program to a dynamic library (DLL) that was built with a different version of Visual Studio, it won't work because of the binary compatibility issue.
(I have experienced this with Boost library and VS 2005 and 2008)
But my question is: is this true for all versions of MSVS? Does this apply to static libraries (LIB) as well? Is this an issue with GCC & Linux as well? And finally, how about linking in VS to a DLL built with MinGW?
By the way, aside from cross-platform or cross-compiler issues, why can't two versions of the same compiler (VS) be compatible?
Hi. I know that if I link my c++ program to a dynamic library (DLL) that was built with a different version of Visual Studio, it won't work because of the binary compatibility issue. (I have experienced this with Boost library and VS 2005 and 2008)
I do not remember ever seeing MS change the ABI, so technically different versions of the compiler will produce the same output (given the same flags; see below).
Therefore I don't think this is an incompatibility in Dev Studio, but rather changes in Boost.
Different versions of Boost are not backward compatible in binary (in source they are backward compatible).
But my question is: is this true for all versions of MSVS?
I don't believe there is a problem. Now if you use different flags you can make the object files incompatible. This is why debug/release binaries are built into separate directories and linked against different versions of the standard run-time.
Does this apply to static libraries(LIB) as well?
You must link against the correct static library. But once the static library is in your code it is stuck there; all resolved names will not be re-resolved at a later date.
Is this an issue with GCC & Linux as well?
Yes. GCC has broken backwards compatibility in the ABI a couple of times (several times on purpose, some by mistake). This is not a real issue, as on Linux code is usually distributed as source; you compile it on your platform and it will work.
and finally how about linking in VS to a DLL built with MinGW?
Sorry I don't know.
By the way, aside from cross-platform or cross-compiler issues, why can't two versions of the same compiler (VS) be compatible?
Well, fully optimized code objects may be packed more tightly, thus the alignment is different. Other compiler flags may affect the way code is generated in ways that are incompatible with other binary objects (e.g. changing the way functions are called: all parameters on the stack versus some parameters in registers). Technically only objects compiled with exactly the same flags should be linked together (in practice it is a bit looser than that, as a lot of flags don't affect binary compatibility).
Note some libraries are released with multiple versions of the same library that are compiled in different ways. You usually distinguish the libraries by a suffix on the name. At my last job we used the following convention.
libASR.dll // A Single threaded Release version lib
libASD.dll // A Single threaded Debug version
libAMR.dll // A Multi threaded Release version
libAMD.dll // A Multi threaded Debug version
If properly built, DLLs should be programming-language and version neutral. You can link to DLLs built with VB, C, C++, etc.
You can use dependency walker to examine the exported functions in the dll.
To answer part of your question, GCC/Linux does not have this problem. At least, not as often. libstdc++ and glibc are the standard C++/C libraries on GNU systems, and the authors of those libraries go to efforts to avoid breaking compatibility. glibc is pretty much always backward compatible, but libstdc++ has broken ABI several times in the past and probably will again in the future.
It is very difficult to write stable ABIs in C++ compared to C, because the automatic features in C++ take away some of the control you need to maintain an ABI. Especially once you get into templates and inline functions, where some of the code gets embedded in your application rather than staying contained in the shared library. That means that the object's structure can't ever change without requiring a recompilation of the application.
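A small, generic example of why that is (not tied to any particular library): anything defined in a header is compiled into the application itself, so the library cannot change it later without the application being rebuilt.

```cpp
// widget.h -- shipped with a hypothetical shared library.
struct widget {
    int x;
    int y;      // adding a member here later changes sizeof(widget)...
};

// ...and because this function is inline, every application that called it has
// the old size and offsets baked into its own binary. Updating the shared
// library does not update those inlined copies; only recompiling the
// application does.
inline int widget_area(const widget& w) {
    return w.x * w.y;
}
```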
In practice, it isn't a huge deal on Windows. It would be fantastic if Microsoft just made the MSI installer know how to grab Microsoft-provided DLLs from Windows Update when an app is installed that needs them, but simply adding the redistributable to an InnoSetup-generated installer works well enough.