Can C/C++ build all dynamic libraries into one binary file?

When I develop across different OSes, I find that a program built on one Linux system cannot run on another, due to differing libc versions.
How can I build all the shared libraries into the executable in C/C++, the way Go does?
Including libc and libcxx.

If you want to run on multiple Linux systems, all you really need is to build using the oldest glibc of any of them. The easiest way is simply to download a virtual machine image of an old system such as CentOS 5 and build there. You don't need to worry about static linking; building against an old glibc means you are mostly compatible with newer versions.
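As a quick sanity check (a sketch; objdump is part of GNU binutils), you can list the versioned glibc symbols a binary actually references after building it on the old system:

    // hello.cpp -- build on the oldest system you target, then inspect
    // the versioned glibc symbols the binary references:
    //
    //   g++ -o hello hello.cpp
    //   objdump -T hello | grep GLIBC    (prints e.g. puts@GLIBC_2.2.5)
    //
    // The binary runs on any distro whose glibc provides at least the
    // highest GLIBC_x.y version listed.
    #include <cstdio>

    int main() {
        std::puts("built against the oldest glibc we support");
        return 0;
    }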

Related

How to set up a C++ toolchain for cross-compilation?

I have a target system, Astra Linux Smolensk, which only has access to very outdated packages, GCC 6 among them. It does not support C++17, which is required by some very useful third-party libraries and language features. At some point I am also considering the possibility of using concepts from the C++20 standard. But there is no easy way to achieve that.
Basically, I see 2 ways:
Compile the needed version of GCC myself and keep a custom version of Astra Linux with the additional packages for building (not a good option, since we have restrictions on system modification). I am about to try this option, but it is not the subject of this question.
Cross-compile with the latest GCC on Ubuntu using a toolchain.
So, what do I need in order to create a toolchain for a custom version of Linux? The only thing I am sure of is the Linux kernel version. Can I use an existing Linux toolchain, or do I have to export the system libraries and create a custom toolchain?
I have found some tools that seem helpful, for instance:
Buildroot
Buildroot is a complete build system based on the Linux kernel configuration system and supports a wide range of target architectures. It generates root file system images ready to be written to flash. In addition to having a huge number of packages which can be compiled into the image, it also generates a cross toolchain to build those packages from source. Even if you don't want to use buildroot for your root filesystem, it is a useful tool for generating a toolchain. Buildroot supports uClibc-ng, glibc and musl.
I wonder whether it does what I need, and whether I can use the latest GCC with the toolchains it generates.
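For what it's worth, a minimal sketch of what using a Buildroot-generated toolchain looks like (the compiler prefix and output path below are typical Buildroot examples, not guaranteed names):

    // main.cpp -- a C++17 feature compiled by the cross toolchain's GCC
    // while linking against the target's (older) system libraries:
    //
    //   output/host/bin/x86_64-buildroot-linux-gnu-g++ -std=c++17 main.cpp -o app
    #include <optional>
    #include <cstdio>

    int main() {
        std::optional<int> answer = 42;   // std::optional requires C++17
        if (answer) std::printf("%d\n", *answer);
        return 0;
    }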
I found similar questions:
How to build C++17 application in old linux distro with old stdlib and libc?
https://askubuntu.com/questions/162465/are-gcc-versions-tied-to-kernel-versions
How can I link to a specific glibc version?
Some clarification is needed:
The project relies heavily on a lot of third-party dependencies from the target Linux's package repository. Moreover, I use dynamic .so modules that may be loaded both implicitly and explicitly.
Today, with Docker and container-based CI/CD pipelines, we no longer depend on the build host the way we did in the old days.
With the help of musl, we can even create universal Linux binaries with static linkage: we link all libraries statically rather than dynamically, then ship a single executable file.
According to musl's documentation, it needs:
Linux kernel >= 2.6.39
This is a very old version, released around 2011, so even old Linux distros can run our binaries.
Musl is widely used in many projects, especially Rust projects, where musl builds are commonly provided to users as a convenience.
Note that we may need to adjust our codebase when using musl: there are slight differences from GNU libc that we should be aware of.
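A minimal sketch of such a build (hedged: musl-gcc, shipped with musl, covers C; for C++ you typically need a musl-targeted cross compiler, e.g. one built with musl-cross-make, whose prefix below is only an example):

    // app.cpp -- a fully static musl build:
    //
    //   x86_64-linux-musl-g++ -static app.cpp -o app
    //   file app    (should report "statically linked")
    //
    // The result runs on any Linux with kernel >= 2.6.39, regardless of
    // which libc the distro ships.
    #include <iostream>

    int main() {
        std::cout << "portable static binary\n";
        return 0;
    }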

Checking OS compatibility

I have written an application in C++ without using system-specific libraries. I know that if I want executable binaries for, say, Windows, I need to build my code on that platform. But I am looking for a way to check whether my executable is compatible with all Windows versions or all Linux distributions. Is there any automatic way to check this, or am I obliged to check it on my own?
Short answer: it depends on the version of the libc/libstdc++ you are building with.
Unless you link it statically, your executable will actually be linked against several system libraries, and the versions of those libraries determine which systems your application will be compatible with.
So unfortunately there is no way to tell without testing on several systems directly…
And to do so, some services exist, for example:
https://distrotest.net/
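You can at least enumerate the requirements up front before testing. A small sketch (gnu_get_libc_version() is a GNU extension; ldd and objdump are standard tools):

    // version_probe.cpp -- print the glibc version found at run time:
    #include <gnu/libc-version.h>
    #include <cstdio>

    int main() {
        std::printf("running against glibc %s\n", gnu_get_libc_version());
        return 0;
    }
    // Complementary static checks on the shipped binary:
    //   ldd ./version_probe                        (lists required .so files)
    //   objdump -T ./version_probe | grep GLIBC    (minimum symbol versions)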

How to deploy C++ app on Linux

My C++ app depends on GCC, the MongoDB C++ driver, and Boost. My current approach is to keep the OS consistent: I develop on Ubuntu 12.04 64-bit Desktop and deploy on Ubuntu 12.04 64-bit Server, installing the same versions of the dependencies on the target server.
But if I want to develop my app on Ubuntu 13.04 with the newest Boost, MongoDB driver, and GCC 4.8.1, what is the easier way to deploy it on an Ubuntu 12.04 server:
static linking?
dynamic linking, deploying all dependencies to the target server as well?
Which way is simpler? Sometimes I cannot compile libraries on the target server.
If the dependencies are small, the easiest way is to compile everything statically. It is fairly easy during the build step, and nothing fancy is needed. However, with bigger libraries and a bigger project this can get inconvenient.
I think the best practice is to compile the dependencies into shared objects, ship them alongside the binaries, and launch the program so that ld looks in your directory first. This is possible with, for example, LD_LIBRARY_PATH, e.g. LD_LIBRARY_PATH=/where/did/i/ship/lib:$LD_LIBRARY_PATH my_binary (see the sketch below for an rpath-based variant).
It can be somewhat cumbersome, as you need to set up your build system to compile things as shared objects and package everything properly.
I'm fairly sure some of the pre-compiled programs shipped for Linux work this way; oddly, I can't point to a specific one at the moment.
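A sketch of the same idea using an rpath instead of LD_LIBRARY_PATH (hedged: GNU toolchain assumed, and libfoo/foo_init are placeholder names):

    // main.cpp -- link with a relative rpath so the dynamic loader
    // searches the lib/ directory next to the executable first:
    //
    //   g++ main.cpp -o my_binary -L./lib -lfoo -Wl,-rpath,'$ORIGIN/lib'
    //
    // Shipped layout:   my_binary
    //                   lib/libfoo.so
    extern "C" int foo_init();   // hypothetical symbol from the bundled libfoo.so

    int main() {
        return foo_init();       // resolved from $ORIGIN/lib at load time
    }

With $ORIGIN embedded, no environment variable needs to be set at run time.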
It depends on your application. If your application consists of only one specific binary, then statically linking all C++ libraries is in order. You can safely link all C libraries dynamically, as the C ABI is unchanging; this just leaves you with version dependencies. However, in most cases the major SO-name versions are mostly compatible, and libraries with differing major SO-names can be installed in parallel, so I'd rely on the package manager to install those. C++ libraries are tricky due to the lack of a common ABI: even a mere compiler version bump can make them incompatible *sigh*.
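A common middle ground along these lines is to bake in the C++ runtime while leaving the C libraries dynamic (a sketch; -static-libstdc++ and -static-libgcc are standard GCC flags):

    // app.cpp:
    //
    //   g++ app.cpp -o app -static-libstdc++ -static-libgcc
    //   ldd app    (libstdc++.so should no longer be listed)
    #include <string>
    #include <cstdio>

    int main() {
        std::string s = "C++ runtime linked statically";  // needs libstdc++
        std::printf("%s\n", s.c_str());
        return 0;
    }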

Problem with different Linux distributions with a C++ executable

I have a C++ program that runs perfectly on my Linux machine (Ubuntu Karmic).
When I try to run it on another version, all sorts of shared libraries are missing.
Is there any way to merge all the shared libraries into a single executable?
Edit:
I think I asked the wrong question. I should have asked for a way to statically link my executable after it has already been built.
I found the answer in Ermine & Statifier.
There are 3 possible reasons you have shared libraries missing:
you are using shared libraries which do not exist by default on the other distribution, or which you have installed on your host but not on the other machine, e.g. libDBI.so
you have over-specified the version at link time, e.g. libz.so.1.2.3, and the other machine has an API-compatible library (same major version 1) but a different minor version, which would probably work with your program if only it would link
the major version of the library has changed, which means it is incompatible, e.g. libc.so.2 vs. libc.so.1
The fixes are:
don't link libraries you don't need that may be absent on other distros; OR install the additional libraries on the other machines, either manually or as dependencies of your installer package (e.g. an RPM)
don't specify the versions so tightly on the command line: link libz.so.1 instead of libz.so.1.2.3 (see the sketch after this list)
compile multiple versions of your program against different libc versions
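A sketch of the second fix, with zlib as the example (paths and versions vary by distro):

    // main.cpp -- linking via -lz records the major-version SONAME
    // (libz.so.1) rather than the fully qualified libz.so.1.2.3:
    //
    //   g++ main.cpp -o app -lz
    //   ldd app | grep libz    (-> libz.so.1 => ...)
    //
    // Any target that ships some libz.so.1, whatever the minor version,
    // satisfies the dependency.
    #include <zlib.h>
    #include <cstdio>

    int main() {
        std::printf("zlib %s\n", zlibVersion());  // symbol from libz.so.1
        return 0;
    }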
What you are describing is the use of static libraries instead of shared libraries.
There have been several technical solutions to the original problem noted here, e.g.
compile multiple versions against different libc versions
or
install the additional libraries on the other machines
but if you're in the position of an ISV, there is really just one sane solution:
Get a clean install of an older system (e.g. Ubuntu 6.x if you're targeting desktops, perhaps as far back as Red Hat 9 if you're targeting servers) and build your software on that. Libraries (and definitely libc) are generally backwards compatible, so you won't have problems running on newer systems.
Of course, if you have non-standard or recent-version library dependencies, this doesn't completely solve the problem. In that case, as others have suggested, if you want to be robust it's better to dlopen() them and report the problems (or run with reduced functionality).
I am not too sure, but you may want to create your executable by statically linking all the libraries.
One alternative is to load the shared libraries dynamically using dlopen(), and if one fails to load, exit gracefully with a message that the library is required for the executable to work.
The user can then install the appropriate library.
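A minimal sketch of that approach (the library name libplugin.so and the symbol plugin_entry are hypothetical):

    #include <dlfcn.h>
    #include <cstdio>
    #include <cstdlib>

    int main() {
        void* handle = dlopen("libplugin.so", RTLD_NOW);
        if (!handle) {
            // Exit gracefully, telling the user what to install.
            std::fprintf(stderr,
                         "libplugin.so is required but could not be loaded: %s\n",
                         dlerror());
            return EXIT_FAILURE;
        }
        auto entry = reinterpret_cast<void (*)()>(dlsym(handle, "plugin_entry"));
        if (entry) entry();   // call into the library only if it resolved
        dlclose(handle);
        return EXIT_SUCCESS;
    }
    // Build (GNU toolchain): g++ main.cpp -o app -ldl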
Another possible solution is using Statifier (http://statifier.sf.net) or Ermine (http://magicErmine.com).
Both of them are able to pack a dynamic executable and all of its needed libraries into one self-contained executable.

Boost - "static" vs "shared" libraries

I am building "boost" libraries from boost source code and I have two options: to build it "static" or to build it "shared" (e.g. dynamic). Which is better idea?
I prefer dynamic (shared) linking but when I tried to build boost shared libraries (on Ubuntu Linux), I got lots of errors or warnings (why there are always errors, warning, notes and other stuff when compiling, grrrrrrrr), so I don't know if it was compiled alright?
Thanks.
Better is subjective. Shared cuts down on size, at the risk of dependency issues; static solves the dependency issues but increases the size.
For your purposes, I'd say building it in whichever way gets you coding faster is the better solution.
You will almost always want to use shared libraries over static libraries. A key advantage of shared libraries is that if the library is updated, you can replace the shared libraries with the newer version (assuming binary compatibility) and reap the benefits of the improved implementation without recompiling your application. Additionally, using shared libraries saves space if multiple programs are using them.
As for the dependency issue, it is possible to link against a specific version of a shared library, or to place your shared libraries in a special location specific to your program -- which doesn't save you space, but does give you the flexibility associated with shared libraries -- so that should not be a reason to choose static libraries over shared libraries. I am actually hard pressed to come up with a single instance, on a typical desktop, laptop, or server machine, where using static libraries is better than using shared libraries.
P.S. If you are trying to install Boost on Ubuntu Linux, just run "sudo apt-get install libboost1.37-dev". You were probably getting errors because you did not install all of Boost's dependencies; these are automatically downloaded and installed when you use Ubuntu's apt-get package manager. Also, it is generally better to use an OS's package manager for installing software packages than to build from source. For example, using the package system's version of Boost makes it more likely that your software will run smoothly on other Ubuntu Linux deployments which use the package manager's version of Boost.
P.P.S. Boost uses some very advanced features of C++. It kind of pushes C++ to the limit. It is not uncommon to see warnings when compiling Boost. In fact, I have built Boost quite a number of times on various operating systems, and I don't recall a time when there were no warnings.
Static libraries are used when you don't need to dynamically load a component into the program; they are compiled into the executable.
A shared library is loaded at runtime and is usually used for plugins or extensions.
IMO a static library is better here, since you will probably load the Boost shared library at the program's startup anyway.
Why do you prefer a shared library?
The recommended way to use the Boost C++ libraries on Linux is via shared linking. On an Ubuntu Linux box already configured for development you should not get any errors at all. Compilation warnings are expected -- for various mindset, technical, and time-constraint reasons a few are produced. Since regular release testing covers Ubuntu, I would not worry about the functionality of the created libraries: if there's a .so, it should work.
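For completeness, a minimal shared-linked Boost example (hedged: the library name follows common distro packaging):

    // main.cpp:
    //
    //   g++ main.cpp -o app -lboost_regex
    //   ldd app | grep boost    (libboost_regex.so.* resolved at run time)
    #include <boost/regex.hpp>
    #include <iostream>

    int main() {
        boost::regex re("b.*t");
        std::cout << std::boolalpha
                  << boost::regex_match("boost", re) << '\n';  // true
        return 0;
    }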