I would like some information on the runtime libraries for Visual Studio 2008. Specifically, when should I consider the DLL versions and when should I consider the static versions?
The Visual Studio documentation delineates the technical differences in terms of DLL dependencies and linked libraries, but I'm left wondering why I should want to use one over the other. More importantly, why should I want to use the multi-threaded DLL runtime when this obviously forces a DLL dependency on my application, whereas the static runtime imposes no such requirement on my users' machines?
Linking dynamically to the runtime libraries complicates deployment slightly due to the DLL dependency, but also allows your application to take advantage of updates (bug fixes or more likely performance improvements) to the MS runtime libraries without being recompiled.
Statically linking simplifies deployment, but means that your application must be recompiled against newer versions of the runtime in order to use them.
Larry Osterman feels that you should always use the multi-threaded DLL for application programming. To summarize:
Your app will be smaller
Your app will load faster
Your app will support multiple threads without changing the library dependency
Your app can be split into multiple DLLs more easily (since there will only be one instance of the runtime library loaded)
Your app will automagically stay up to date with security fixes shipped by Microsoft
Please read his whole blog post for full details.
On the downside, you need to redistribute the runtime library, but that's commonly done and you can find documentation on how to include it in your installer.
Dynamically linking the runtime library can give you faster program start up times and smaller system memory usage since the dll can be shared between processes and won't need to be loaded again if it's already used by another process.
I think the main difference is how exceptions will be processed. Microsoft doesn't recommend linking statically to the CRT in a DLL unless the consequences of this are specifically desired and understood:
For example, if you call _set_se_translator in an executable that loads the DLL linked to its own static CRT, any hardware exceptions generated by the code in the DLL will not be caught by the translator, but hardware exceptions generated by code in the main executable will be caught.
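As a minimal sketch of that scenario (assuming the executable is compiled with /EHa, which _set_se_translator requires):

    // The translator below is registered in this module's CRT only. A
    // hardware fault raised here is translated into a C++ exception; the
    // same fault inside a DLL that statically links its own CRT would
    // bypass this translator, as the documentation above warns.
    #include <eh.h>
    #include <windows.h>
    #include <stdexcept>

    void seTranslator( unsigned int code, EXCEPTION_POINTERS* )
    {
        throw std::runtime_error( "hardware exception" );
    }

    int main()
    {
        _set_se_translator( seTranslator );
        try {
            volatile int* p = 0;
            *p = 42;                        // access violation, gets translated
        } catch ( const std::runtime_error& ) {
            // caught: the faulting code lives in this module
        }
        return 0;
    }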
Related
I'm trying to get a better grasp on the CRT library options in Visual Studio 2013 (C++ -> Code Generation -> Runtime Library) and how to know which option to select (and when to change the default).
From MSDN:
A reusable library and all of its users should use the same CRT library types and therefore the same compiler switch.
So, my understanding is that if you are linking with a third party library, you should use the same CRT version that was used to build the library. Whoever built the library should specify what CRT option was used in the build.
Is there a way to determine which CRT version was used just by looking at the .lib file?
More importantly, how would you decide which option to use if you aren't linking with any third-party libraries? When would you consider changing the default?
Short answer:
So, my understanding is that if you are linking with a third party library, you should use the same CRT version that was used to build the library. Whoever built the library should specify what CRT option was used in the build.
That is the least error-prone option. It is possible to mix runtimes, but you can run into unexpected bugs if you do so.
Is there a way to determine which CRT version was used just by looking at the .lib file?
I don't know about the .lib on its own, but if the third party code has a DLL or EXE, you can see what CRT DLL(s) it depends on using the Windows Dependency Walker tool.
If it's a static library, and your code's CRT choice is mismatched, you'll see warnings when you build.
More importantly, how would you decide which option to use if you aren't linking with any third-party libraries? When would you consider changing the default?
For the simplest possible deployment, statically linking is best; you can just ship the executable on its own and it will run. For larger projects, where you have multiple EXEs and DLLs, statically linking makes your code size bigger, and if you have multiple modules (an EXE plus one or more of your own DLLs) in the same process it is preferable for them to share the same CRT code.
More detail (based on a blog post I wrote previously):
The Library Variants
There are four variants of C/C++ runtime library you can build your code against:
Multi-threaded Debug DLL
Multi-threaded DLL
Multi-threaded Debug
Multi-threaded
You can select which library you’re using by right-clicking on your project in Visual Studio and selecting Properties, clicking the Code Generation option under C/C++ on the dialog that pops up, and going to the Runtime Library property.
Remember that this setting is per-configuration, as you’ll want to choose a Debug runtime library for a Debug configuration, and Release runtime library for a Release configuration.
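These four options correspond to the /MDd, /MD, /MTd and /MT compiler switches respectively, and the compiler defines the _DLL and _DEBUG macros to match. As a quick sketch (not something you'd normally need), a translation unit can report which runtime it is being built against:

    #include <cstdio>

    int main()
    {
    #if defined(_DLL) && defined(_DEBUG)
        std::puts( "Multi-threaded Debug DLL (/MDd)" );
    #elif defined(_DLL)
        std::puts( "Multi-threaded DLL (/MD)" );
    #elif defined(_DEBUG)
        std::puts( "Multi-threaded Debug (/MTd)" );
    #else
        std::puts( "Multi-threaded (/MT)" );
    #endif
        return 0;
    }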
What’s the difference?
The DLL runtime library options mean that you link dynamically against the C/C++ runtime, and for your program to run, that DLL will need to be somewhere your program can find it (more on that later).
The options not mentioning DLL (Multi-threaded Debug and Multi-threaded) cause your program to be statically linked against the runtime. This means that you don't need an external DLL for the program to run, but your program will be bigger because of the extra code, and there are other reasons why you might not want to choose it.
By default, when you create a new project in Visual Studio, it will use the DLL runtime.
The multi-threaded in the runtime names is a legacy of the days when there were both single-threaded (non-thread-safe) and multi-threaded C/C++ runtimes. You'll always be using a multi-threaded runtime with modern Visual Studio, even if your own application is single-threaded.
Deploying DLL Runtimes
If you’re linking against the DLL runtimes, then you’ll have to think about how to deploy them when releasing your program (to Test, and to your customers).
If you deliver Debug builds to your Test team, make sure you provide the Debug variant of the DLL runtimes too.
Also, don’t forget to get the right architecture (e.g. x86 vs x64).
Redistributable Installers
Microsoft provide redistributable packages that install the Release (but not the Debug) DLLs. These can be found easily enough by searching for something like Visual C++ Redistributable 2013 (substituting the Visual Studio version for what you need). You can also go straight to Latest Supported Visual C++ Downloads on Microsoft’s website.
These redistributable packages are executables, and you can call them from your program's installer (or you could run them manually when setting up a test environment). Note that there are separate redistributables for x86 and x64.
Merge Modules
If you’re building an MSI installer, you can merge the Merge Module for the C/C++ runtime you’re using into your installer package. These merge modules are typically found in C:\Program Files (x86)\Common Files\Merge Modules. For example, the x86 C/C++ runtime’s merge module for Visual Studio 2013 is called Microsoft_VC120_CRT_x86.msm.
Note that the version numbers in these merge module names are not the year-based product versions, but the internal version numbers; this table on Wikipedia shows the mapping between the two, for the avoidance of confusion.
Unlike the stand-alone executable redistributable installers above, the merge modules also come in Debug variants.
Some versions of Visual Studio support a Visual Studio Installer project type (under the Setup and Deployment category in Other Project Types). If you include the output from your program's projects in one of these installers, the merge modules for the runtime will be included automatically. You can also use this as a trick to get an installer that installs the Debug runtimes for internal testing purposes (any dummy C/C++ project will do; it doesn't have to actually install your program).
Copy From Redist Folder
You can also just copy the DLLs from the redist folder in your Visual C++ installation into where your program is installed. For example, for Visual Studio 2013, you’ll find the x64 C/C++ runtime libraries somewhere like C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\redist\x64\Microsoft.VC120.CRT\.
The debug variants can be found somewhere like C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\redist\Debug_NonRedist.
(Adjust paths as appropriate for your Visual Studio version and installation location).
Mixing C/C++ Runtimes
In an ideal world, you would use the same C/C++ runtime library variant (Debug vs Release, DLL vs statically-linked), and all from the same version of Visual Studio, for all libraries being linked into your program.
If you do mix runtimes, you may get linker errors, your program may simply fail to run at all, or worse still, it may seem to be working but crash or give the wrong result only in some cases.
Ignoring Default Libraries
A common scenario is that you only have the Release version of some third-party library, but you still want to be able to build the Debug variant of your own code using this library. Another scenario is that you have a library that uses the static C/C++ runtime and you want the DLL version in your program, or the reverse.
If the third-party library is C code then you’ll probably be able to get away with this and have it actually work, using the /NODEFAULTLIB linker option.
If the library is C++ code, you are probably out of luck as too much of the generated code is tied to symbols in a particular runtime.
These are the different library names:
LIBCMT.LIB: Statically-linked Release runtime (a.k.a. Multi-threaded)
LIBCMTD.LIB: Statically-linked Debug runtime (a.k.a. Multi-threaded Debug)
MSVCRT.LIB: Dynamically-linked Release runtime (a.k.a. Multi-threaded DLL)
MSVCRTD.LIB: Dynamically-linked Debug runtime (a.k.a. Multi-threaded Debug DLL)
Remember, the runtime libraries you want to ignore are the ones that the third-party code is using, i.e. they will be different from the library that your own program is using. If you look at your build output, the linker warning will tell you which library to ignore, e.g.:
LINK : warning LNK4098: defaultlib 'LIBCMTD' conflicts with use of other libs; use /NODEFAULTLIB:library
You can specify the library to ignore by right-clicking on your project, selecting Properties, clicking the Input entry under Linker and adding the runtime library name into the entry.
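For example, if your Debug build pulls in a third-party static library built against the Release static runtime, the equivalent command-line fix (with hypothetical file names) would be:

    link /OUT:myapp.exe main.obj thirdparty.lib /NODEFAULTLIB:LIBCMT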
Crossing Module Boundaries
You can also run into C/C++ runtime mismatch problems in more subtle ways across module boundaries (between an EXE and a DLL it loads for example).
For example, data structures in the C library may be defined differently by different runtimes. I’ve seen this cause crashes in programs that used a DLL that made use of a FILE* in its API. A file is opened in one module using one C runtime, and interacted with by another module which has a different, incompatible implementation. Safer options would include passing Windows API HANDLE objects for things like this, or else wrapping up the file interactions in a way that is runtime-agnostic.
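A sketch of what wrapping up the file interactions might look like, so that a FILE* never crosses the module boundary (the log* names are hypothetical):

    #include <cstdio>

    // Exported from the DLL; only the DLL's own CRT ever touches the FILE*,
    // which is handed to callers as an opaque void*.
    extern "C" __declspec(dllexport) void* logOpen( const char* path )
    {
        return std::fopen( path, "w" );
    }

    extern "C" __declspec(dllexport) void logWrite( void* f, const char* msg )
    {
        std::fprintf( static_cast<std::FILE*>( f ), "%s\n", msg );
    }

    extern "C" __declspec(dllexport) void logClose( void* f )
    {
        std::fclose( static_cast<std::FILE*>( f ) );
    }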
Different runtimes also use their own memory heap for allocations. If an object is allocated in one module on one heap, but deallocated in another module, a crash is likely to occur if the C/C++ runtimes are mismatched. This only applies of course to allocations using the C or C++ runtime such as malloc or new. If the default Windows process heap is being used everywhere, for example, everything is fine.
Note that if the modules statically link against the C/C++ runtime, they will have this problem even if it was the same variant of the runtime they linked against, because there will still be two different runtimes with their own memory heaps in play. Therefore, you will want to use the DLL C/C++ runtime in such circumstances.
It would also be wise to use the (same version of) DLL runtimes in each module if runtime-implemented functionality like exceptions or classes with vtables cross the module boundaries, though some combinations of mismatch may work in practice.
I'm working on an application that needs to be compatible back to Windows XP (yea... I know...). My colleagues are arguing that they don't want to use std::string because it might load some DLLs that might change the code's behavior.
I'm not sure whether they are right or wrong here. At some point, there has to be some common ground on which the application can be loaded.
So, given a context where an application has to be as self-contained as possible, would this application be required to load a DLL in order to use the STL, std::string, or anything else from the standard libraries?
Also, assuming I use the -static-libstdc++ flag, by which order of magnitude will the executable be bigger?
On Windows, the STL is supported by the CRT libraries, and these libraries require the runtime DLLs to be deployed before running the application. Compiling code with a given version of Visual Studio creates a dependency on that version of the CRT: code compiled with VS2013 needs a quite different version of the CRT than code compiled with VS2010. So you cannot pass STL objects from one version of the CRT to a DLL consuming a different version. Please go through the Microsoft articles before consuming the CRT libraries; the article below is for VS2013:
http://msdn.microsoft.com/en-us/library/abx4dbyh.aspx
I would suggest using ATL::CString, which is much easier to work with, and the static version of its library is also much more compact than the CRT libraries.
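To illustrate the point about not passing STL objects between modules, compare these two hypothetical DLL exports; the first ties the caller to the exact CRT/STL build the DLL was compiled against, the second does not:

    #include <cstddef>
    #include <cstring>
    #include <string>

    // Risky: std::string's layout and allocator belong to one specific CRT build.
    __declspec(dllexport) std::string getNameRisky() { return "example"; }

    // Safer: a plain C-style interface with caller-provided storage.
    extern "C" __declspec(dllexport) std::size_t getName( char* buf, std::size_t size )
    {
        const char* name = "example";
        std::size_t needed = std::strlen( name ) + 1;
        if ( buf != 0 && size >= needed )
            std::memcpy( buf, name, needed );
        return needed;  // caller can retry with a larger buffer
    }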
I'm working on a cross-platform Firebreath plugin which is crashing on Windows. I use a static library containing classes which reference boost.asio. When I link this library against the plugin dll, I observe crashes when interacting with the io_service subsystem (ie, during socket construction). When I link the static library against an ordinary executable, the problem does not occur. When I compile the contents of the static library directly into the plugin dll project, the crash does not occur. I have gone to great lengths to ensure that all aspects of my build environment on Windows are consistent (build modes, version of Visual Studio, etc). In addition, I've firewalled the boost.asio headers, so that the plugin dll code has no visibility of the boost.asio sub-system (to no effect, unfortunately, vs2008 and vs2010). As far as I can tell, I've done everything possible to ensure that the build environment is well behaved but the problem persists.
Can the community offer any advice on potential risks or approaches which could expose or solve the problem?
Two things that are dramatically different between linking a static library vs loading a DLL:
global initialization: In a DLL, they all run. In a static library, the linker only brings in an object file if it satisfies some unresolved external, so systems which rely on components registering themselves using the constructor or initialization expression of a global object fail (see the sketch after this list).
shared CRT: In a static library, all calls to the Standard library are resolved during link time, and the main application and library function all call the same copy of the Standard library. In a DLL, you risk having two copies of the Standard library, which can be ok if you're careful never to share any Standard library objects (like FILE*, std::string, or even malloc/free pairs) between library and application.
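Here is a sketch of the self-registration pattern from the first point (registerComponent is a hypothetical registry hook):

    #include <cstdio>

    void registerComponent( const char* name ) { std::printf( "registered: %s\n", name ); }

    namespace {
        struct AutoRegister
        {
            AutoRegister() { registerComponent( "MyPlugin" ); }
        };
        // In a DLL this constructor always runs at load time. In a static
        // library, if nothing references a symbol in this object file, the
        // linker never pulls it in and the registration silently never runs.
        AutoRegister g_autoRegister;
    }

    int main() { return 0; }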
The second thing is most likely what's biting you. There's a lazy way to fix it: use the Standard library DLL. And there's a better way: figure out memory and object lifetimes, and don't try to allocate on one side and free on the other, or share C++ class layout across the boundary. Virtual functions work fine across the boundary.
The advantage of the "better" way is that plugins can be built in any version of the compiler, which becomes a big deal for maintenance later on in the development cycle.
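A sketch of that "better" way, with hypothetical names: hand objects out through an abstract interface and route destruction back to the module that allocated them, so only virtual calls cross the boundary:

    struct IPlugin
    {
        virtual void doWork() = 0;
        virtual void release() = 0;   // deletes on the creating module's heap
    protected:
        ~IPlugin() {}                 // prevent 'delete' through the interface
    };

    // Inside the plugin DLL:
    struct Plugin : public IPlugin
    {
        virtual void doWork() { /* ... */ }
        virtual void release() { delete this; }   // same CRT that allocated it
    };

    extern "C" __declspec(dllexport) IPlugin* createPlugin() { return new Plugin(); }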
In the FireBreath plugin prep script, try turning on the WITH_DYNAMIC_MSVC_RUNTIME flag.
I'm not using any features of MSVCR100.dll (I don't even know if there are new features).
Well, you should be able to statically link it. Your .dll will be way bigger, but it would not require the MSVCRT DLL. This is controlled by Code Generation -> Runtime Library (choose /MT).
Most applications use the C/C++ runtimes. You may be using the runtime in ways you aren't aware of yet ... do you call fopen() somewhere? Then you use it.
However, as pointed out by BarsMonster, you can link statically to the runtime. Your binary size grows, but you have no external dependencies. In fact, this is the method you would choose if you don't want to use installer software to deploy your application.
It's almost certainly the best choice for things like external libraries that are not bound to a particular application and could be reused several times. If you release your DLL to somebody in an SDK, I'd recommend providing libs and DLLs for both static and dynamic linkage to the runtime.
Keep in mind, however, that static linkage has one serious disadvantage: heap memory is not shared across DLL boundaries. A memory block must be freed by the module (DLL) which allocated it in the first place. If you can't fulfil this requirement, do not use static linkage; deploying with the runtime can't be avoided then.
You can use VS2010 and still target older versions of the runtime. This can be configured in your project properties, here's an image:
(Image from the VC++ blog: http://blogs.msdn.com/photos/vcblog/images/9934271/original.aspx)
Obviously you still need the toolset you're targeting installed.
For more info you can look at this VC++ team blog post.
Unfortunately, yes. You'll need the VC10 runtime for your platform, x86 or x64. Keep in mind that the runtime may change, though that is highly unlikely since Visual Studio has been in its final phases for a while now.
It is the core runtime library; you can find out more about your dependencies using Dependency Walker (http://www.dependencywalker.com).
Or alternatively, try it :-)
I have written some code that makes use of an open source library to do some of the heavy lifting. This work was done in linux, with unit tests and cmake to help with porting it to windows. There is a requirement to have it run on both platforms.
I like Linux and I like cmake and I like that I can get Visual Studio files automatically generated. As it is now, on Windows everything will compile and it will link and it will generate the test executables.
However, to get to this point I had to fight with Windows for several days, learning all about manifest files and redistributable packages.
As far as my understanding goes:
With VS 2005, Microsoft created Side By Side dlls. The motivation for this is that before, multiple applications would install different versions of the same dll, causing previously installed and working applications to crash (ie "Dll Hell"). Side by Side dlls fix this, as there is now a "manifest file" appended to each executable/dll that specifies which version should be executed.
This is all well and good. Applications should no longer crash mysteriously. However...
Microsoft seems to release a new set of system DLLs with every release of Visual Studio. Also, as I mentioned earlier, I am a developer trying to link to a third-party library. Often, these things come distributed as a "precompiled dll". Now, what happens when a precompiled DLL compiled with one version of Visual Studio is linked to an application using another version of Visual Studio?
From what I have read on the internet, bad stuff happens. Luckily, I never got that far - I kept running into the "MSVCR80.dll not found" problem when running the executable and thus began my foray into this whole manifest issue.
I finally came to the conclusion that the only way to get this to work (besides statically linking everything) is that all third-party libraries must be compiled using the same version of Visual Studio - ie don't use precompiled dlls - download the source, build a new dll and use that instead.
Is this in fact true? Did I miss something?
Furthermore, if this seems to be the case, then I can't help but think that Microsoft did this on purpose for nefarious reasons.
Not only does it break all precompiled binaries, making it unnecessarily difficult to use them, but if you happen to work for a software company that makes use of third-party proprietary libraries, then whenever those vendors upgrade to the latest version of Visual Studio, your company must do the same or the code will no longer run.
As an aside, how does linux avoid this? Although I said I preferred developing on it and I understand the mechanics of linking, I haven't maintained any application long enough to run into this sort of low level shared libraries versioning problem.
Finally, to sum up: Is it possible to use precompiled binaries with this new manifest scheme? If it is, what was my mistake? If it isn't, does Microsoft honestly think this makes application development easier?
Update - A more concise question: How does Linux avoid the use of Manifest files?
All components in your application must share the same runtime. When this is not the case, you run into strange problems like asserting on delete statements.
This is the same on all platforms. It is not something Microsoft invented.
You may get around this 'only one runtime' problem by being aware of where the runtimes may bite back.
This is mostly in cases where you allocate memory in one module, and free it in another.
a.dll

    #include <stdlib.h>
    __declspec(dllexport) void* createBla() { return malloc( 100 ); }  /* allocated on a.dll's heap */

b.dll

    __declspec(dllimport) void* createBla();
    void consumeBla() { void* p = createBla(); free( p ); }  /* freed on b.dll's heap */

When a.dll and b.dll are linked against different runtimes, this crashes, because each runtime implements its own heap.
You can easily avoid this problem by providing a destroyBla function which must be called to free the memory.
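For example (a sketch, exported from a.dll alongside createBla):

    __declspec(dllexport) void destroyBla( void* p ) { free( p ); }  /* same heap it came from */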
There are several points where you may run into problems with the runtime, but most can be avoided by wrapping these constructs.
For reference:
don't allocate/free memory/objects across module boundaries
don't use complex objects in your dll interface. (e.g. std::string, ...)
don't use elaborate C++ mechanisms across dll boundaries. (typeinfo, C++ exceptions, ...)
...
But this is not a problem with manifests.
A manifest contains the version info of the runtime used by the module and gets embedded into the binary (exe/dll) by the linker. When an application is loaded and its dependencies are to be resolved, the loader looks at the manifest information embedded in the exe file and uses the according version of the runtime dlls from the WinSxS folder. You cannot just copy the runtime or other modules to the WinSxS folder. You have to install the runtime offered by Microsoft. There are MSI packages supplied by Microsoft which can be executed when you install your software on a test/end-user machine.
So install your runtime before using your application, and you won't get a 'missing dependency' error.
(Updated to the "How does Linux avoid the use of Manifest files" question)
What is a manifest file?
Manifest files were introduced to place disambiguation information next to an existing executable/dynamic link library, or to embed that information directly into the file.
This is done by specifying the specific version of dlls which are to be loaded when starting the app/loading dependencies.
(There are several other things you can do with manifest files, e.g. some meta-data may be put here)
Why is this done?
The version is not part of the dll name for historical reasons. So "comctl32.dll" is named this way in all versions of it (and the comctl32 under Win2k is thus different from the one in XP or Vista). To specify which version you really want (and have tested against), you place the version information in the "appname.exe.manifest" file (or embed this file/information).
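As a concrete illustration, the widely circulated snippet below requests version 6 of the common controls by embedding the dependency directly from C++ source instead of a separate .manifest file (the version and publicKeyToken values are the standard published ones for comctl32 v6):

    #pragma comment(linker, "\"/manifestdependency:type='win32' \
    name='Microsoft.Windows.Common-Controls' version='6.0.0.0' \
    processorArchitecture='*' publicKeyToken='6595b64144ccf1df' language='*'\"")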
Why was it done this way?
Many programs installed their dlls into the system32 directory under the system root. This was done to allow bugfixes to shared libraries to be deployed easily for all dependent applications, and in the days of limited memory, shared libraries reduced the memory footprint when several applications used the same libraries.
This concept was abused by many programmers, when they installed all their dlls into this directory; sometimes overwriting newer versions of shared libraries with older ones. Sometimes libraries changed silently in their behaviour, so that dependent applications crashed.
This led to the approach of "distribute all dlls in the application directory".
Why was this bad?
When bugs appeared, all the dlls scattered across several directories had to be updated (gdiplus.dll being a notorious example). In other cases this was not even possible (Windows components).
The manifest approach
This approach solves all problems above. You can install the dlls in a central place, where the programmer may not interfere. Here the dlls can be updated (by updating the dll in the WinSxS folder) and the loader loads the 'right' dll. (version matching is done by the dll-loader).
Why doesn't Linux have this mechanism?
I have several guesses. (This is really just guessing ...)
Most things are open-source, so recompiling for a bugfix is a non-issue for the target audience
Because there is only one 'runtime' (the gcc runtime), the problem with runtime sharing/library boundaries does not occur so often
Many components use C at the interface level, where these problems just don't occur if done right
The version of a library is in most cases embedded in the name of its file.
Most applications are statically bound to their libraries, so no dll hell can occur.
The GCC runtime was kept very ABI stable so that these problems could not occur.
If a third-party DLL allocates memory and you need to free it, you need the same runtime libraries. If the DLL has matching allocate and deallocate functions, it can be OK.
If the third-party DLL uses std containers, such as vector, you could have issues, as the layout of the objects may be completely different.
It is possible to get things to work, but there are some limitations. I've run into both of the problems I've listed above.
If a third-party DLL allocates memory that you need to free, then the DLL has broken one of the major rules of shipping precompiled DLLs, exactly for this reason.
If a DLL ships in binary form only, then it should also ship all of the redistributable components that it is linked against and its entry points should isolate the caller from any potential runtime library version issues, such as different allocators. If they follow those rules then you shouldn't suffer. If they don't then you are either going to have pain and suffering or you need to complain to the third party authors.
I finally came to the conclusion that the only way to get this to work (besides statically linking everything) is that all third-party libraries must be compiled using the same version of Visual Studio - ie don't use precompiled dlls - download the source, build a new dll and use that instead.
Alternatively (and this is the solution we have to use where I work), if the third-party libraries you need are all built (or available as built) with the same compiler version, you can "just" use that version. It can be a drag to "have to" use VC6, for example, but if there's a library you must use and its source is not available and that's how it comes, your options are sadly limited otherwise.
...as I understand it. :)
(My line of work is not in Windows although we do battle with DLLs on Windows from a user perspective from time to time, however we do have to use specific versions of compilers and get versions of 3rd-party software that are all built with the same compiler. Thankfully all of the vendors tend to stay fairly up-to-date, since they've been doing this sort of support for many years.)