Advantages of msvcr100 over msvcrt - c++

I'd like to ask whether there is an advantage to msvcr100 over msvcrt, and if so, what that advantage would be.
When compiling with the msvc compiler, the executable I get is linked to msvcr100, so it requires MS Visual C++ Redistributable to be installed. If I compile it with g++ (mingw), then there's no such requirement. I'm guessing it's linked to msvcrt, instead.
I prefer to keep dependencies to a minimum, so I want to know if there's any point in using a compiler that links to msvcr100.
Thank you.

Msvcrt.dll is a private DLL, intended to be used only by Windows itself. Different versions of Windows have different versions of msvcrt.dll.
You'll have a serious headache when you discover that the way you use the CRT causes a failure on a particular Windows version. That includes the kind of failure that requires a time machine: a new version of Windows may ship a new copy of msvcrt.dll that makes your program fail. A problem otherwise known as DLL Hell.
The advantage of using msvcr100.dll is that the odds of that happening are much smaller. You work with a known version of the CRT. Even if there's a breaking change in Windows itself that breaks msvcr100.dll, there's still a solution: you can update it. That is not possible with msvcrt.dll; it is a DLL that's covered by the File System Protection feature. Overwriting it with an installer would normally be pretty disastrous, because that could break Windows itself. But it cannot cause a failure: Windows automatically restores it. That is also the reason you should not follow Voith's advice.

If you use an MS compiler later than version 6, then you will have to link to the runtime specific to that compiler. You do not have any choice in the matter. Since the MSVC runtimes are not system DLLs, you will need to distribute them with your application.
If you use MSVC6, or a compiler that can link against msvcrt.dll, then you can link against msvcrt.dll.
The mingw compiler is quite configurable. However, I believe that it usually will link against msvcrt.dll. Since msvcrt.dll is a system DLL (since Windows 2000 IIRC) you do not need to distribute it.
I'm assuming in all of this that you link to the runtime dynamically. That is the preferred option, but it is always possible to link to the runtime statically. When you do that, you make your application standalone.
It all boils down to which compiler you prefer to use. If you prefer to use a modern MSVC, then you'll need to accept the runtime distribution, or link statically.
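To make the two options concrete, here is a minimal sketch. The source file is trivial; the comments show the cl.exe switches involved (/MD and /MT are real MSVC switches, the file name is made up):

// hello.cpp - any program that touches the CRT will do
#include <cstdio>
int main() { std::printf("hello\n"); }

// cl /MD hello.cpp -> hello.exe imports the runtime DLL (msvcr100.dll for
//                     VS2010), so you must ship the VC++ Redistributable
// cl /MT hello.cpp -> the CRT is linked in statically; the exe is bigger
//                     but has no dependency on a VC++ runtime DLL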

Related

Is my application loading a dll to use std::string?

I'm working on an application that needs to be compatible down to Windows XP (yea... I know...). My colleagues are arguing that they don't want to use std::string because it might load some DLLs that might change the code's behavior.
I'm not sure whether they are right or wrong here. At some point, there has to be some common ground on which the application can be loaded.
So, given a context where the application has to be as self-contained as possible, would it be required to load a DLL in order to use the STL, std::string, or anything else from the standard libraries?
Also, assuming I use the -static-libstdc++ flag, by what order of magnitude will the executable be bigger?
On Windows, the STL is supported through the CRT libraries, and those libraries require the runtime DLLs to be deployed before you run the application. Compiling code with a different version of Visual Studio creates a dependency on that particular version of the CRT: code compiled with VS2013 needs a very different version of the CRT than code compiled with VS2010. So you cannot use/pass STL objects from one version of the CRT to a DLL consuming a different version. Please go through the Microsoft articles before consuming the CRT libraries. The article below is for VS2013:
http://msdn.microsoft.com/en-us/library/abx4dbyh.aspx
I would suggest using ATL::CString, which is much easier to work with; the static version of that library is also much more compact than the CRT libraries.
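The safest general pattern when a DLL and its client might use different CRTs is to keep the boundary C-style and rebuild C++ objects on your own side of it. A minimal sketch (the function name and message are made up for illustration):

// CRT-neutral DLL interface: only plain C types cross the boundary,
// and the caller owns the buffer, so each CRT frees only its own memory.
#include <cstring>

extern "C" __declspec(dllexport)
int GetGreeting(char* buffer, int bufferSize)
{
    const char* msg = "hello from the DLL";
    int needed = (int)std::strlen(msg) + 1;
    if (buffer != 0 && bufferSize >= needed)
        std::memcpy(buffer, msg, needed);
    return needed;  // caller can retry with a larger buffer
}

// client side: rebuild the string with your own runtime
//   char buf[64];
//   GetGreeting(buf, sizeof buf);
//   std::string s(buf);  // constructed by the client's CRT, with its layout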

C++ Dynamic Library Compiling/Linking

I know that if I link my c++ program to a dynamic library (DLL) that was built with a different version of Visual Studio, it won't work because of the binary compatibility issue.
(I have experienced this with Boost library and VS 2005 and 2008)
But my question is: is this true for all versions of MSVS? Does this apply to static libraries(LIB) as well? Is this an issue with GCC & Linux as well? and finally how about linking in VS to a DLL built with MinGW?
By the way, aside from cross-platform or cross-compiler issues, why can't two versions of the same compiler (VS) be compatible?
I know that if I link my c++ program to a dynamic library (DLL) that was built with a different version of Visual Studio, it won't work because of the binary compatibility issue. (I have experienced this with the Boost library and VS 2005 and 2008.)
I do not remember MS ever changing the ABI, so technically different versions of the compiler will produce the same output (given the same flags; see below).
Therefore I don't think this is an incompatibility in Dev Studio, but rather a change in Boost.
Different versions of Boost are not backward compatible in binary form (at the source level they are backward compatible).
But my question is: is this true for all versions of MSVS?
I don't believe there is a problem. Now, if you use different flags, you can make the object files incompatible. This is why debug/release binaries are built into separate directories and linked against different versions of the standard runtime.
Does this apply to static libraries(LIB) as well?
You must link against the correct static library. But once the static library is in your code it is stuck there; all resolved names will not be re-resolved at a later date.
Is this an issue with GCC & Linux as well?
Yes. GCC has broken backward compatibility in the ABI a couple of times (several on purpose, some by mistake). This is not a real issue, as on Linux code is usually distributed as source: you compile it on your platform and it will work.
and finally how about linking in VS to a DLL built with MinGW?
Sorry I don't know.
By the way aside from cross-platform or cross-compiler, why can't two version of the same compiler(VS) be compatibile?
Well, fully optimized code objects may be packed more tightly, so the alignment is different. Other compiler flags may affect the way code is generated in ways that are incompatible with other binary objects (changing the way functions are called: all parameters on the stack versus some parameters in registers). Technically, only objects compiled with exactly the same flags should be linked together (in practice it is a bit looser than that, as a lot of flags don't affect binary compatibility).
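As a concrete illustration of a flag changing binary layout, consider structure packing (a sketch; /Zp1 is MSVC's pack-structures-to-1-byte switch):

struct S { char c; int i; };
// default packing: sizeof(S) == 8 (three padding bytes after 'c')
// with /Zp1:       sizeof(S) == 5 (no padding)
// An object file built one way and a library built the other disagree
// about where S::i lives, yet they will still link without complaint.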
Note some libraries are released with multiple versions of the same library that are compiled in different ways. You usually distinguish them by a suffix on the name. At my last job we used the following convention.
libASR.dll // A Single threaded Release version
libASD.dll // A Single threaded Debug version
libAMR.dll // A Multi threaded Release version
libAMD.dll // A Multi threaded Debug version
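With a scheme like that, MSVC even lets you select the matching import library from source. A small sketch using the (made-up) names above; #pragma comment(lib, ...) and _DEBUG are standard MSVC facilities:

#ifdef _DEBUG
#pragma comment(lib, "libAMD.lib") // multi-threaded debug build
#else
#pragma comment(lib, "libAMR.lib") // multi-threaded release build
#endif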
If properly built, DLLs should be programming-language and version neutral. You can link to DLLs built with VB, C, C++, etc.
You can use Dependency Walker to examine the exported functions in the DLL.
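If you have Visual Studio installed, dumpbin does much the same from the command line (MyApp.exe and MyLib.dll are placeholder names):

dumpbin /dependents MyApp.exe   (lists the DLLs the binary imports)
dumpbin /exports MyLib.dll      (lists the functions the DLL exports)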
To answer part of your question, GCC/Linux does not have this problem. At least, not as often. libstdc++ and glibc are the standard C++/C libraries on GNU systems, and the authors of those libraries go to great lengths to avoid breaking compatibility. glibc is pretty much always backward compatible, but libstdc++ has broken the ABI several times in the past and probably will again in the future.
It is very difficult to write stable ABIs in C++ compared to C, because the automatic features in C++ take away some of the control you need to maintain an ABI. Especially once you get into templates and inline functions, where some of the code gets embedded in your application rather than staying contained in the shared library. That means that the object's structure can't ever change without requiring a recompilation of the application.
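A tiny sketch of why inlining pins the ABI (the type and function are made up):

// in the shared library's public header
struct Widget { int id; };                             // v1 layout
inline int widgetId(const Widget& w) { return w.id; }  // inlined into clients

// If v2 of the library adds a member before 'id', every application that
// inlined widgetId() still reads the old offset and sizeof(Widget);
// updating the shared library alone cannot fix it - clients must recompile.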
In practice, it isn't a huge deal on Windows. It would be fantastic if Microsoft just made the MSI installer know how to grab Microsoft-provided DLLs from Windows Update when an app is installed that needs them, but simply adding the redistributable to an InnoSetup-generated installer works well enough.

Standalone VS 2010 C++ Program

it's been a long while since I've used VS 2010 and C++, and as I'm getting back to using it, I'm running into the same problems that plagued me last year: the exes that I compile do not run on older machines that do not have the correct C++ runtimes. I did not even know what link to give people, and I told them to install this after they got an error that said "The program can't start because MSVCR100.dll is missing from your computer. Try reinstalling the program to fix this problem. Click OK to close the application." So I went in and set the code generation to /MT, disabled quite a few options, and tried messing around with lots of settings, but still got the same result.
My question is: Is there a complete list of VS 2010 C++ redistributables that I can just hand out and tell people to install, so that all of the C++ programs I compile on my VS 2010 will work on Windows XP, or, even better, a way to generate a standalone exe that contains all it needs to work and does not rely on DLLs? I'm thinking of something like linking to a library that has everything the exe references. If it helps, I'm building for both x64 and x86.
P.S. What's up with the manifest file, should I include it or not?
The easiest thing to do is to just install the VC++ Redistributable Package. It has both x86 and x64 versions.
First, before I actually give you the details:
Warning
If you do this, things will be bad for two reasons:
If there are security or other bugs in the MSVC runtimes and you take this approach, the bugs are baked into your app, which means you need to redistribute it to fix them. DLLs are preferred because, in theory, people run system updates, which means any errors get fixed.
Everything else you compile into your exe also needs to do this. If you don't, you end up with two versions of the runtime code and whatever you're using won't link.
One possible solution is to bake the MSVC runtime into your application by using the cl.exe option /MT (in the C/C++ compiler settings), which links the multi-threaded version of the C/C++ runtime statically. As I said, if you try to link against something that is itself linked dynamically to the runtime, you're going to end up in a mess. Also, as I said, this represents an additional security risk, so bear that in mind.
The other options are to write an installer that can either download the appropriate runtime, or include the DLL needed.
If you're using some feature of the runtime that requires a certain version of Windows (a generic statement, but it does happen), then you should be able to use the Windows SDK to target various versions of Windows using the appropriate C runtimes.
http://www.microsoft.com/downloads/en/details.aspx?FamilyID=a7b7a05e-6de6-4d3a-a423-37bf0912db84
google text: visual studio c++ redist
Do not statically link to the runtime; specifically, don't do so if you're using any kind of DLL for other purposes. It introduces all kinds of bogus problems with regard to heap management that you probably don't want to mess with.
Open the properties dialog for your project and select Configuration Properties | C/C++ | Code Generation. The default setting is Multi-threaded DLL. Change that to Multi-threaded and you'll be building an .EXE with the run-time statically linked in. Don't forget to do the same for the debug version.
If you're using MFC or ATL, you will need to navigate to Configuration Properties | General and set "Use of MFC" or "Use of ATL" to link statically as well.
NB: If you link the runtime statically, you must make sure that any other library you're linking in also links it in statically. Otherwise you'll wind up with two copies of the runtime in memory, each with its own heap and bad things will happen when code using one runtime tries to free an object allocated by the other runtime.
This previous answer should hold true for VS2010. I still build with VS2005, but all my apps use the static CRT for the sole reason of being able to run across old and newer machines alike.

If I Develop a C++ (native) DLL with VS2010 will I need MSVCRT100.dll to be also deployed?

I'm not using any features of the MSVCRT100.dll (I don't even know if there are new features).
Well, you should be able to statically link it. Your .dll will be way bigger, but it would not require the runtime DLL. This is controlled by Code Generation -> Runtime Library (choose /MT).
Most applications use the C/C++ runtimes. You may be using the runtime in a fashion you don't know of yet ... do you call fopen() somewhere? Then you use it.
However, as pointed out by BarsMonster, you can link statically to the runtime. Your binary size grows, but you have no external dependencies. In fact, this is the method you would choose if you don't want to use installer software to deploy your application.
It's almost certainly the best choice for things like external libraries that are not bound to a particular application and could be reused several times. If you release your DLL to somebody in an SDK, I'd recommend providing libs and DLLs for both static and dynamic linkage to the runtime.
Keep in mind, however, that static linkage has one serious disadvantage: heap memory is not shared across DLL boundaries. A memory block must be freed by the module (DLL) which allocated it in the first place. If you can't fulfil this requirement, do not use static linkage; deploying with the runtime can't be avoided then.
You can use VS2010 and still target older versions of the runtime. This can be configured in your project properties, here's an image:
(image from the VC++ team blog: http://blogs.msdn.com/photos/vcblog/images/9934271/original.aspx)
Obviously you still need the toolset you're targeting installed.
For more info you can look at this VC++ team blog post.
Unfortunately, yes. You'll need the VC10 runtime for your platform, (x86) or (x64). Keep in mind, though, that the runtime may change, although that is highly unlikely since Visual Studio has been in its final phases for a while now.
It is the core runtime library; you can find out more about your dependencies using Dependency Walker (http://www.dependencywalker.com).
Or alternatively, try it :-)

Working with Visual Studios C++ manifest files

I have written some code that makes use of an open source library to do some of the heavy lifting. This work was done in linux, with unit tests and cmake to help with porting it to windows. There is a requirement to have it run on both platforms.
I like Linux and I like cmake, and I like that I can get Visual Studio files generated automatically. As it is now, on windows everything will compile and it will link and it will generate the test executables.
However, to get to this point I had to fight with windows for several days, learning all about manifest files and redistributable packages.
As far as my understanding goes:
With VS 2005, Microsoft created side-by-side DLLs. The motivation for this is that before, multiple applications would install different versions of the same DLL, causing previously installed and working applications to crash (i.e. "DLL Hell"). Side-by-side DLLs fix this, as there is now a "manifest file" appended to each executable/DLL that specifies which version should be loaded.
This is all well and good. Applications should no longer crash mysteriously. However...
Microsoft seems to release a new set of system DLLs with every release of Visual Studio. Also, as I mentioned earlier, I am a developer trying to link to a third-party library. Often, these things come distributed as a "precompiled dll". Now, what happens when a precompiled DLL compiled with one version of Visual Studio is linked to an application using another version of Visual Studio?
From what I have read on the internet, bad stuff happens. Luckily, I never got that far - I kept running into the "MSVCR80.dll not found" problem when running the executable, and thus began my foray into this whole manifest issue.
I finally came to the conclusion that the only way to get this to work (besides statically linking everything) is that all third-party libraries must be compiled using the same version of Visual Studio - i.e. don't use precompiled DLLs: download the source, build a new DLL and use that instead.
Is this in fact true? Did I miss something?
Furthermore, if this seems to be the case, then I can't help but think that Microsoft did this on purpose for nefarious reasons.
Not only does it break all precompiled binaries, making it unnecessarily difficult to use them, but if you happen to work for a software company that makes use of third-party proprietary libraries, then whenever they upgrade to the latest version of Visual Studio your company must now do the same thing, or the code will no longer run.
As an aside, how does linux avoid this? Although I said I preferred developing on it and I understand the mechanics of linking, I haven't maintained any application long enough to run into this sort of low level shared libraries versioning problem.
Finally, to sum up: Is it possible to use precompiled binaries with this new manifest scheme? If it is, what was my mistake? If it isn't, does Microsoft honestly think this makes application development easier?
Update - A more concise question: How does Linux avoid the use of Manifest files?
All components in your application must share the same runtime. When this is not the case, you run into strange problems, like asserts firing on delete statements.
This is the same on all platforms. It is not something Microsoft invented.
You may get around this 'only one runtime' problem by being aware of where the runtimes may bite back.
This is mostly in cases where you allocate memory in one module, and free it in another.
a.dll:
__declspec(dllexport) void* createBla() { return malloc( 100 ); } // allocates on a.dll's heap
b.dll:
void consumeBla() { void* p = createBla(); free( p ); } // frees on b.dll's heap
When a.dll and b.dll are linked to different runtimes, this crashes, because each runtime implements its own heap.
You can easily avoid this problem by providing a destroyBla function which must be called to free the memory.
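A sketch of that fix, continuing the example above:

// a.dll - allocation and deallocation live in the same module and runtime
__declspec(dllexport) void* createBla() { return malloc( 100 ); }
__declspec(dllexport) void destroyBla( void* p ) { free( p ); }

// b.dll - hands the pointer back instead of calling free() itself
void consumeBla() { void* p = createBla(); destroyBla( p ); }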
There are several points where you may run into problems with the runtime, but most can be avoided by wrapping these constructs.
For reference :
don't allocate/free memory/objects across module boundaries
don't use complex objects in your dll interface. (e.g. std::string, ...)
don't use elaborate C++ mechanisms across dll boundaries. (typeinfo, C++ exceptions, ...)
...
But this is not a problem with manifests.
A manifest contains the version info of the runtime used by the module and gets embedded into the binary (exe/dll) by the linker. When an application is loaded and its dependencies are to be resolved, the loader looks at the manifest information embedded in the exe file and uses the corresponding version of the runtime DLLs from the WinSxS folder. You cannot just copy the runtime or other modules to the WinSxS folder; you have to install the runtime offered by Microsoft. There are MSI packages supplied by Microsoft which can be executed when you install your software on a test/end-user machine.
So install your runtime before using your application, and you won't get a 'missing dependency' error.
(Updated to the "How does Linux avoid the use of Manifest files" question)
What is a manifest file?
Manifest files were introduced to place disambiguation information next to an existing executable/dynamic link library, or to embed that information directly into the file.
This is done by specifying the specific version of dlls which are to be loaded when starting the app/loading dependencies.
(There are several other things you can do with manifest files, e.g. some meta-data may be put here)
Why is this done?
The version is not part of the DLL name for historical reasons, so "comctl32.dll" is named this way in all versions of it. (Thus the comctl32 under Win2k is different from the one in XP or Vista.) To specify which version you really want (and have tested against), you place the version information in the "appname.exe.manifest" file (or embed this file/information).
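To give an idea of what that looks like, here is roughly the fragment an application embeds to request a specific CRT (a sketch modelled on the VC8 runtime the question mentions; the exact version and publicKeyToken come from the redistributable actually installed):

<dependency>
  <dependentAssembly>
    <assemblyIdentity type="win32" name="Microsoft.VC80.CRT"
                      version="8.0.50608.0" processorArchitecture="x86"
                      publicKeyToken="1fc8b3b9a1e18e3b" />
  </dependentAssembly>
</dependency>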
Why was it done this way?
Many programs installed their DLLs into the system32 directory under the system root. This was done to allow bugfixes to shared libraries to be deployed easily for all dependent applications. And in the days of limited memory, shared libraries reduced the memory footprint when several applications used the same libraries.
This concept was abused by many programmers, who installed all their DLLs into this directory, sometimes overwriting newer versions of shared libraries with older ones. Sometimes libraries changed silently in their behaviour, so that dependent applications crashed.
This led to the approach of "distribute all DLLs in the application directory".
Why was this bad?
When bugs appeared, all the DLLs scattered across several directories had to be updated (gdiplus.dll, for example). In other cases this was not even possible (Windows components).
The manifest approach
This approach solves all problems above. You can install the dlls in a central place, where the programmer may not interfere. Here the dlls can be updated (by updating the dll in the WinSxS folder) and the loader loads the 'right' dll. (version matching is done by the dll-loader).
Why doesn't Linux have this mechanic?
I have several guesses. (This is really just guessing ...)
Most things are open-source, so recompiling for a bugfix is a non-issue for the target audience
Because there is only one 'runtime' (the gcc runtime), the problem with runtime sharing/library boundaries does not occur so often
Many components use C at the interface level, where these problems just don't occur if done right
The version of a library is in most cases embedded in the name of its file.
Most applications are statically bound to their libraries, so no DLL hell can occur.
The GCC runtime was kept very ABI stable so that these problems could not occur.
If a third-party DLL allocates memory and you need to free it, you need the same runtime libraries. If the DLL has its own allocate and deallocate functions, it can be OK.
If the third-party DLL uses std containers, such as vector, you could have issues, as the layout of the objects may be completely different.
It is possible to get things to work, but there are some limitations. I've run into both of the problems I've listed above.
If a third party DLL allocates memory that you need to free, then the DLL has broken one of the major rules of shipping precompiled DLL's. Exactly for this reason.
If a DLL ships in binary form only, then it should also ship all of the redistributable components that it is linked against and its entry points should isolate the caller from any potential runtime library version issues, such as different allocators. If they follow those rules then you shouldn't suffer. If they don't then you are either going to have pain and suffering or you need to complain to the third party authors.
I finally came to the conclusion that the only way to get this to work (besides statically linking everything) is that all third party libraries must be compiled using the same version of Visual Studios - ie don't use precompiled dlls - download the source, build a new dll and use that instead.
Alternatively (and this is the solution we have to use where I work), if the third-party libraries that you need to use are all built (or available as built) with the same compiler version, you can "just" use that version. It can be a drag to "have to" use VC6, for example, but if there's a library you must use, and its source is not available, and that's how it comes, your options are sadly limited otherwise.
...as I understand it. :)
(My line of work is not in Windows although we do battle with DLLs on Windows from a user perspective from time to time, however we do have to use specific versions of compilers and get versions of 3rd-party software that are all built with the same compiler. Thankfully all of the vendors tend to stay fairly up-to-date, since they've been doing this sort of support for many years.)