Where should I be using a static library in C++?

What are the use cases for static libraries in C++? I have seen that some people create DLLs instead, while others use static libraries only. What's your recommendation?

I'm a big fan of static libraries pretty much everywhere. The one big thing that DLLs give you that static libs cannot is the ability to dynamically load and unload library functionality. So if your application is going to support some sort of hot-swapping of plugins, you need to use dynamic libs. Otherwise you can probably use static libs.
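For the plugin case, here is a minimal sketch of runtime loading on POSIX systems using dlopen/dlsym (on Windows the equivalents are LoadLibrary/GetProcAddress; the library name and the exported "plugin_entry" symbol are hypothetical):

    // plugin_host.cpp -- load, call into, and unload a plugin at runtime
    #include <dlfcn.h>
    #include <cstdio>

    int main() {
        void* handle = dlopen("./libplugin.so", RTLD_NOW);  // load at runtime
        if (!handle) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

        // Look up an exported C function and call it.
        auto entry = reinterpret_cast<void (*)()>(dlsym(handle, "plugin_entry"));
        if (entry) entry();

        dlclose(handle);  // unload -- something a static lib can never do
        return 0;
    }

Build with g++ plugin_host.cpp -ldl on most Linux systems.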
Static libs open the door to a lot of optimizations that you can't do with dynamic libs, because they are performed at link time. In the Microsoft world, Link Time Code Generation (LTCG) gives you the ability to do whole-program optimization and dead-code stripping through not only your application, but also your libraries (in GCC this is called Link Time Optimization [LTO]).
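As an illustration, a sketch of enabling LTO with GCC via the -flto flag (MSVC's rough equivalent is /GL at compile time plus /LTCG at link time; file names are hypothetical):

    // lib.cpp -- imagine this is part of a static library
    int unused_helper() { return 42; }  // a candidate for dead-code stripping
    int used_helper()   { return 7; }

    // main.cpp
    int used_helper();
    int main() { return used_helper(); }

    // Compile and link everything with LTO so the optimizer can see across
    // the library boundary at link time:
    //   g++ -O2 -flto -c lib.cpp main.cpp
    //   g++ -O2 -flto lib.o main.o -o app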
Additionally, static libs tend to make your program easier to distribute because you aren't forced to pass around a lot of library files, and you can completely avoid DLL hell if you ever version your library.

You should use shared libraries (DLLs) if you have significant functionality that needs to be shared between applications, AND this functionality can be improved independently of the applications, with updates shipped separately.
The 'AND' part is the hardest to fulfill: usually you ship your application with any new functionality added, and you rarely update the library without updating the application at the same time (I am not saying that never happens); usually the two ship in lockstep.
Otherwise it is easier to just build normal libs and ship the application.
An example of a good shared library (I use the term 'good' loosely, for example purposes) is DirectX. When a new version of DirectX ships (and the interface has not changed), you just need to update the DLL and all applications that use DirectX get the benefit of the new version of the library. In reality it is not quite that simple, but you get the idea.

In general, although there are always exceptions to the rule, I would say:
Advantages of DLLs
Less physical memory usage when running multiple instances of an application. (Copy on write optimisation of memory usage.)
Faster link times.
Smaller executables.
Better modularity.
Advantages of static libraries
Less virtual memory usage (and probably less physical memory usage) when running a single instance of an application.
Performance. Approximately 10% (more or less) improvement over DLLs, depending on your application.
Reliability. You tested your application against a specific version (or specific versions) of a library. An upgrade to a DLL could potentially break your application.

There is the advantage of not having to recompile your entire program if you make a change to a dynamically linked library. @Chris makes a good point about DLL hell, but if it's a minor bug fix that doesn't affect the API, this can save you the recompilation.
There is an SO post that talks about Windows not being able to apply updates to your program if you statically link its libraries (link to come). Although I think you are more talking about statically linking your own modules.

Use the static versions of your libraries where you can. Use dynamic libraries where you need to (licensing, availability, or a plugin system).

I use static libraries to implement UML's "package" concept. All modules belonging to a package get put into their own subdirectory, and I create an IDE subproject or makefile for that directory which builds a static library *.a file. Modern IDEs make it possible to work with your top-level package along with sub-packages within the same "workspace".
If a package (or a group of packages) can be deployed separately from the main executable, then I compile it into a shared library (*.so or *.dll) instead and consider it a "component" in UML jargon.

A static library is also good for holding huge libraries, and for what I like to call multi-OS code, so that the same code can be built to run on Linux, Windows, and so on.

Related

How do you properly use dynamic linking with Boost on Windows?

If I dynamically link against Boost libraries, I still have to copy the respective Boost DLLs into the folder of the executable for the program to work.
I installed Boost into the recommended path C:\local\boost_1_59_0
Also, taking redistribution into consideration, probably very few people will have Boost installed, and there isn't really a user-friendly redistributable package, like with the Visual C++ libraries.
So does it make more sense to just statically link Boost in order to save some time? I don't really see the benefit of dynamic linking for Boost (on Windows that is!).
Thank you for your tips.
You are correct in saying that you'll have to distribute the dynamic library too, since few people will have it. But a dynamic library is not made for this purpose alone.
It can be useful in case multiple applications have to use that library simultaneously.
For example, if your client has multiple applications using the Boost dynamic libraries, it makes sense to send the dynamic library just once, install it into a commonly accessible location, and let all those applications use it. This way, the individual sizes of those applications remain small.
Another use case could be your client simultaneously running multiple instances of a single executable which requires the library.

C++ import library by source

I'm new to C++ and wonder if it is good practice to include a library by source code. If it is, what would be the best way to achieve this? Just copying it into a subfolder and using #include?
In my special case, I have written a small library and I'm going to use it on two different microprocessors. Compiling the library separately, copying all headers and using this "package" seems to be overkill for me.
Compiling the library separately is what should be done.
It's not that much overkill either: you're just compiling the .o files for your library, then wrapping them in an archive and handing that archive around.
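For instance, a minimal sketch with a GCC toolchain (file names are hypothetical):

    // mathlib.h -- the library's public interface
    int add(int a, int b);

    // mathlib.cpp -- the library's implementation
    #include "mathlib.h"
    int add(int a, int b) { return a + b; }

    // Build the archive, then link an application against it:
    //   g++ -c mathlib.cpp -o mathlib.o     (compile the object file)
    //   ar rcs libmathlib.a mathlib.o       (wrap it in a static archive)
    //   g++ main.cpp -L. -lmathlib -o app   (hand the archive around)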
Normally libraries are used as libraries because it is much easier and more comfortable that way. If you are using dynamic libraries (.dll or .so), things get even better because you can replace libraries on the fly and things should continue to work smoothly.
You decided to use code repositories instead of libraries, which probably means more work for you. If you are happy that way, that's OK, but just make sure you do not break any license; some LGPL packages (like Qt) clearly require their libraries to be linked dynamically.
The best way to do this: hard to say, but in your place I would probably use git and include the libraries as submodules.
Just #includeing source code is a bad idea, since it means copying the code into your own, and things can go wrong that way. For example, if there is a static variable somewhere in the library code and a static variable with the same name in your code, you will have a conflict.
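A minimal sketch of that conflict (all names hypothetical): once the library source is pulled into your translation unit, its internal-linkage names collide with yours.

    // library.cpp -- the library's source, with an internal helper variable
    static int counter = 0;
    int next_id() { return ++counter; }

    // main.cpp
    #include "library.cpp"   // including the source instead of linking it

    static int counter = 5;  // error: redefinition of 'counter' -- both
                             // definitions now live in the same translation
                             // unit, which a separate build would have avoided

    int main() { return next_id(); }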
Instead you should probably compile the library separately and link it, possibly the same way as you would do anyway (i.e. you build the library and then you link against it). The lightweight alternative would be to just compile the additional C++ files and then link the object files together into an executable. The details of how you do that are compiler specific.
There are valid reasons for including the library source this way; for example, if your project needs to modify the library during development, it is easier to do so if rebuilding the library is part of the project's build process. With a well designed build process, the library shouldn't have to be rebuilt unless there are actual changes to it.
The value of a library is in part that you link it more often than you compile it, leading to a net saving.
If you control all the source, then whatever build process works best for you is fine.
I agree with πάντα ῥεῖ, but I'll also add that the reason it is bad practice is that the compiled library can be stored on your computer in a common location and used by tons of different programs, thereby reducing the amount of data your computer has to store, on disk as well as in RAM (if more than one running program uses the same library). An example is OpenGL, which is a library that many games use and is probably already on your system somewhere. On Windows, software installers link these libraries up to their programs and add them if you don't have them. On Linux, you will be notified if libraries are missing and prompted to install them. All of that aside, you can technically use uncompiled libraries, but that introduces a number of potential licensing problems as well as additional problems with THEIR dependencies.
Copying source code to other projects and "mixing" it with other source code stops this library from being a "library". Later on you will be tempted to make a small change in one copy (for one CPU) or fix a bug and forget to do the same in the other copy.
There might be additional considerations, but you should try to keep the code in one place. Do not Repeat Yourself (DRY) is a very strong and fundamental principle of software engineering with many benefits.

Linux: C/C++ standard library static vs dynamic linking [duplicate]

On probably any OS it is possible to link the C/C++ standard library statically or dynamically. On Windows I always prefer static builds, because it helps avoid the "DLL hell" problem of different versions of libraries installed or not installed on a specific Windows version, edition, service pack, etc. Static linking makes software more portable and less dependent on what the end user did to his operating system (I have even seen cases where an end user hit SHIFT+DEL on some DLLs in system32 and couldn't explain why, or where users claimed my app contained a virus because it tried to download dynamically linked prerequisites from the official Microsoft website...). So, on Windows, static linking is usually better than dynamic in my experience.
However, I am new to Linux, so can anyone share their experience? My question is: what kind of linking (dynamic or static) is preferred on Linux, if we ignore the fact that dynamic linking saves memory and hard drive space, and if we plan to distribute the software with an automated installer? (Hard drive space and memory are cheap enough now that there is no reason to sacrifice the hours of work required to create a really good and portable installer just to save some megabytes of RAM or disk space.) Are there any Linux-specific issues with dynamic/static linking?
On Linux you normally have a package manager that ensures you only have one version of each library installed. So there normally is no DLL hell and no problem with linking dynamically. Linking dynamically is the standard way on Linux.
I'd say the answer depends on how you distribute the software.
If you package the software for a specific Linux distribution and version dynamic linking is usually preferred. You know which libraries to find on the system and you can specify dependencies.
However, if you want to distribute the software as a Linux binary that runs on "any" system (such as various games, or software like Matlab, for example), you will end up with the same DLL (or .so) hell problem as on Windows. You don't know which versions of which libraries are on the system. Thus, you will have to provide your own .so files or link statically.
See, the whole point of using dynamic linking is to reduce the size of executables and memory usage. If you neglect that, there is not much left to talk about.
On the other hand, you mentioned saving memory and disk space. Saving disk space matters because when you want to distribute your app/program, you can't put a 2 GB app on the internet for download (for example, the OpenCV library is about 2.1 GB). The solution is to link dynamically and load only those modules which are necessary to you. This also enables efficient multitasking (just one copy of the module is created and the whole program uses that same copy).
In particular:
For example, a media player application might originally be shipped with a codec that supports the mp3 file format. If the media player were statically linked it would not be possible to dynamically update it to support a different file format without replacing the entire application. Dynamic linking means that a new version of the shared library containing a more up-to-date codec, which includes some enhancements and bug fixes, could be dynamically loaded by a dynamic linker into memory at run-time to replace the original shared library. A shared library can also be shared by more than one application. For example, two different media players could both use the same shared library containing the same codec. This potentially means that the device running the applications requires less physical memory, depending on the size of the dynamic linker.
Third, on Linux almost everything is dynamically linked (an exception is /bin/ash.static, which also has a dynamic version, /bin/ash), but this shouldn't stop you from linking statically on Linux.
When using gcc, linking is dynamic by default. I guess you should use the "-static" flag to statically link the libraries.
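A quick sketch of the difference (GCC toolchain; the file name is hypothetical):

    // hello.cpp -- trivial program to compare the two link modes
    #include <cstdio>
    int main() { std::puts("hello"); return 0; }

    // Dynamic (the default):  g++ hello.cpp -o hello_dyn
    // Fully static:           g++ -static hello.cpp -o hello_static
    // Inspect the result:     ldd hello_static
    //                         -> "not a dynamic executable"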
@Vitaliy, good that you brought this up. The important thing to note here is that smart linking and the creation of shared (or dynamic) libraries are mutually exclusive; that is, if you turn on smart linking, then the creation of shared libraries is turned off.
Smart linking breaks the code into small code blocks, and only their dependencies are loaded.
So if you are calling a dependency multiple times, it gets loaded multiple times.
This gives a very good execution time but a very high compilation time, especially for large units. So there is a certain trade-off.

Potential memory risks from linking a statically built library to a shared library using Visual Studio

I'm working on a cross-platform FireBreath plugin which is crashing on Windows. I use a static library containing classes which reference boost.asio. When I link this library against the plugin DLL, I observe crashes when interacting with the io_service subsystem (i.e., during socket construction). When I link the static library against an ordinary executable, the problem does not occur. When I compile the contents of the static library directly into the plugin DLL project, the crash does not occur. I have gone to great lengths to ensure that all aspects of my build environment on Windows are consistent (build modes, version of Visual Studio, etc.). In addition, I've firewalled the boost.asio headers so that the plugin DLL code has no visibility of the boost.asio subsystem (to no effect, unfortunately, on VS2008 and VS2010). As far as I can tell, I've done everything possible to ensure that the build environment is well behaved, but the problem persists.
Can the community offer any advice on potential risks or approaches which could expose or solve the problem?
Two things that are dramatically different between linking a static library vs loading a DLL:
Global initialization: in a DLL, all global initializers run. In a static library, the linker only brings in an object file if it satisfies some unresolved external, so systems which rely on components registering themselves from the constructor or initialization expression of a global object fail (see the sketch after this list).
Shared CRT: in a static library, all calls to the Standard library are resolved at link time, and the main application and the library functions all call the same copy of the Standard library. With a DLL, you risk having two copies of the Standard library, which can be OK if you're careful never to share any Standard library objects (like FILE*, std::string, or even malloc/free pairs) between library and application.
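A minimal sketch of the self-registration pattern from the first point (all names hypothetical):

    // registry.h -- a simple name-to-factory registry
    #include <functional>
    #include <map>
    #include <string>

    std::map<std::string, std::function<void()>>& registry();

    // registry.cpp -- shared definition of the registry singleton
    #include "registry.h"
    std::map<std::string, std::function<void()>>& registry() {
        static std::map<std::string, std::function<void()>> r;
        return r;
    }

    // widget.cpp -- lives inside the static library
    #include "registry.h"

    namespace {
    struct Registrar {
        Registrar() { registry()["widget"] = [] { /* create a widget */ }; }
    };
    Registrar g_registrar;  // nothing in the application references this
                            // object file, so a static link may drop it
                            // entirely and the registration never happens;
                            // in a DLL, the constructor always runs at load
    }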
The second thing is most likely what's biting you. There's a lazy way to fix it: use the Standard library DLL. And there's a better way: figure out memory and object lifetimes, and don't try to allocate on one side and free on the other, or share C++ class layouts across the boundary. Virtual functions work fine across the boundary.
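A sketch of that classic failure mode (Windows-flavoured; all names hypothetical):

    // inside the DLL -- allocates with the DLL's copy of the CRT heap
    extern "C" __declspec(dllexport) char* make_buffer() {
        return new char[64];
    }
    // the safe companion: free on the same side that allocated
    extern "C" __declspec(dllexport) void free_buffer(char* p) {
        delete[] p;
    }

    // inside the application:
    //   char* p = make_buffer();
    //   delete[] p;       // WRONG if app and DLL use different static CRTs:
    //                     // two heaps, undefined behaviour
    //   free_buffer(p);   // RIGHT: the DLL's own CRT releases the memory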
The advantage of the "better" way is that plugins can be built in any version of the compiler, which becomes a big deal for maintenance later on in the development cycle.
In the FireBreath plugin prep script, try turning on the WITH_DYNAMIC_MSVC_RUNTIME flag.

C++ application - should I use static or dynamic linking for the libraries?

I am going to start a new C++ project that will rely on a series of libraries, including parts of the Boost libraries, log4cxx or the Google logging library, and, as the project evolves, other ones as well (which I cannot yet anticipate).
It will have to run on both 32 and 64 bit systems, most probably in a quite diverse Linux environment where I do not expect to have all the required libraries available nor su privileges.
My question is, should I build my application by dynamically or statically linking to all these libraries?
Notes:
(1) I am aware that static linking might be a pain during development (longer compile times, cross-compiling for both 32 and 64 bit, going down dependency chains to include all libraries, etc.), but it's a lot easier during testing: just move the file and run.
(2) On the other hand, dynamic linking seems easier during the development phase: short compile times, no hassle with dependency chains (though I don't really know how to handle dynamic linking to 64-bit libraries from my 32-bit dev environment). Deployment of new versions, on the other hand, can be ugly, especially when new libraries are required (see the condition above of not having su rights on the targeted machines, nor these libraries available).
(3) I've read the related questions regarding this topic but couldn't really figure out which approach would best fit my scenario.
Conclusions:
Thank you all for your input!
I will probably go with static linking because:
Easier deployment
Predictable performance and more consistent results during perf. testing (look at this paper: http://www.inf.usi.ch/faculty/hauswirth/publications/CU-CS-1042-08.pdf)
As pointed out, the size and duration of compilation of static vs. dynamic does not seem to be such a huge difference
Easier and faster test cycles
I can keep all the dev. cycle on my dev. machine
Static linking has a bad rap. We have huge hard drives these days, and extraordinarily fat pipes. Many of the old arguments in favor of dynamic linking are way less important now.
Plus, there is one really good reason to prefer static linking on Linux: The plethora of platform configurations out there make it almost impossible to guarantee your executable will work across even a small fraction of them without static linking.
I suspect this will not be a popular opinion. Fine. But I have 11 years of experience deploying applications on Linux, and until something like LSB really takes off and really extends its reach, Linux will continue to be much more difficult to deploy applications on. Until then, statically link your application if you have to run across a wide range of platforms.
I would probably use dynamic linking during (most of) development, and then change over to static linking for the final phases of development and (all of) deployment. Fortunately, there's little need for extra testing when switching from dynamic to static linkage of the libraries.
This is another vote for static linking. I haven't noticed significantly longer linking times for our application. The app in question is a ~50K-line console app, with multiple libraries, that is compiled for a bunch of out-of-the-ordinary machines, mostly supercomputers with 100-10,000 cores. With static linking, you know exactly which libraries you are using and can easily test out new versions of them.
In general, this is the way that most Mac apps are built. It is what allows installation to be simply copying a directory onto the system.
Best is to leave that up to the packager and provide both options in the configure/make scripts. Usually dynamic linking is preferred, since it is then easy to upgrade the libraries when necessary, i.e. when security vulnerabilities etc. are discovered.
Note that if you do not have root privileges to install the libraries in the system directories, you can compile the program so that it will first look elsewhere for any needed dynamic libraries; this is accomplished by setting the runpath directive in ELF binaries. You can specify such a directory with the -rpath option of the linker ld.
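For example, a sketch of embedding a runpath with the GNU toolchain (the paths and library name are hypothetical):

    // main.cpp -- depends on a library installed under $HOME/lib
    #include "foo.h"   // hypothetical header of libfoo
    int main() { return foo(); }

    // Pass -rpath through the compiler driver to the linker:
    //   g++ main.cpp -L$HOME/lib -lfoo -Wl,-rpath,$HOME/lib -o app
    // At startup the dynamic loader searches $HOME/lib before the
    // system directories. Verify the embedded entry with:
    //   readelf -d app | grep -i 'r.*path'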