Is static linking in Linux portable?

Is static linking in Linux portable? I mean, can I use the -static option in gcc, link every dependency statically so that I get a clean output from ldd, and expect the resulting executable to run portably on another computer with Linux installed? Of course, given that the CPU architecture and the kernel version are compatible.

The short answer: Pretty much.
This will make a binary that runs on a kernel that is the same as, or compatible with, the one the software was designed for.
It may not take into account directory structure, and if the binary expects to be able to load any external dependencies dynamically, that might not work.
Assuming there's nothing too fancy going on though, it will work fine.
This is approximately what Go's compiler does to enable shipping binaries roughly anywhere. It is also a method for making a build forward compatible if you expect to be making OS upgrades that will be disruptive.
Additionally, these static binaries can even be run on a FreeBSD kernel with Linux compatibility. As long as the kernel and user space are compatible, the binary should work.
As always, test.
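A minimal sketch of the whole idea (file names are mine, not from the question): build a trivial program fully statically, then confirm with ldd that nothing dynamic is left.

    // hello.cpp - trivial program to exercise fully static linking.
    #include <iostream>

    int main() {
        std::cout << "hello from a static binary\n";
        return 0;
    }

    // Build and verify (commands, for reference):
    //   g++ -static -o hello hello.cpp
    //   ldd ./hello    # should print "not a dynamic executable"
    //   file ./hello   # should report "statically linked"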

Yes. Static linking means the executable won't depend on any other libraries.

Maybe. You won't need to worry about dynamic library dependencies. However, your statically linked libraries might use system calls or other kernel interfaces that older kernels don't have, so you'll only be forward compatible (the Linux kernel has a pretty strong backwards-compatibility policy). The only thing you might need to worry about is external files that your statically linked libraries depend on (like localization databases and such).
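To make the kernel-interface point concrete, here is a hedged sketch: getrandom(2) only exists on kernels >= 3.17, so the raw syscall is guarded and falls back to /dev/urandom on older kernels.

    // getrandom_fallback.cpp - guarding a newer kernel interface.
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <cerrno>
    #include <cstdio>

    // Fills buf with len random bytes; returns bytes obtained or -1.
    static ssize_t get_random_bytes(void* buf, size_t len) {
    #ifdef SYS_getrandom
        ssize_t n = syscall(SYS_getrandom, buf, len, 0);
        if (n >= 0 || errno != ENOSYS)
            return n;          // kernel supports the syscall (or a real error)
    #endif
        // Older kernel: fall back to the device node instead.
        int fd = open("/dev/urandom", O_RDONLY);
        if (fd < 0)
            return -1;
        ssize_t r = read(fd, buf, len);
        close(fd);
        return r;
    }

    int main() {
        unsigned char buf[16];
        if (get_random_bytes(buf, sizeof buf) == (ssize_t)sizeof buf)
            std::printf("got %zu random bytes\n", sizeof buf);
        return 0;
    }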

Related

Checking OS compatibility

I have written an application in C++ without using system-specific libraries. I know that if I want to obtain executable binaries, for instance for Windows, I need to build my code on that platform. But I am looking for a way to check whether my executable is compatible with all Windows versions or with all Linux distributions. Is there any automatic way to check this? Or am I obliged to check it on my own?
Short answer: it depends on the version of the libc/libstdc++ you are building with.
If you create an executable, unless you link it statically, it will actually be linked against several system libraries. The versions of those libraries will determine which systems your application will be compatible with.
So unfortunately, there is no way to tell, except testing on several systems directly…
And to do that, some testing services exist, e.g.:
https://distrotest.net/
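One practical check is to see which glibc a machine actually runs, and which glibc symbol versions your binary demands. A small sketch (the function is glibc-specific):

    // glibc_version.cpp - report the glibc the process is running against.
    #include <gnu/libc-version.h>   // glibc-specific header
    #include <cstdio>

    int main() {
        std::printf("running against glibc %s\n", gnu_get_libc_version());
        return 0;
    }

    // To see which symbol versions an existing binary requires:
    //   objdump -T ./myapp | grep GLIBC_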

Build a C++ program that runs on different Linux versions

Some Linux programs, for example the MongoDB binaries, can run on different Linux versions regardless of the host machine's gcc and glibc versions.
How is that done? By statically linking all the libs? But I have heard that glibc is not supposed to be statically linked.
To make an executable that is independent of the installed libraries, you must statically link it.
However, if the application isn't very large/complex to build, it's often better to either distribute the source and build on/for the target system, or pre-build for the most popular variants.
The reason that you don't want to statically link glibc (and all the other libs that the application may use) is that even the simplest application becomes about 700 KB-1 MB. Given that my distribution has 1900 entries in /usr/bin, that would make it around 2 GB minimum, where now it is 400 MB (and that includes beasts like clang, emacs and skype, all weighing in at over 7 MB in non-statically linked form - they probably have more than a dozen library dependencies each; clang, for example, grows from under 10 MB to around 100-120 MB if you compile it with static linking).
And of course, with static linkage, all the code for each application needs to be loaded into memory as a separate copy. So the overall memory usage goes up quite dramatically.

Is it possible to compile a C/C++ source code that executes in all Linux distributions without recompilation?

Is it possible to compile a C/C++ source code that executes in all Linux distributions without recompilation?
If the answer is yes, can I use any external (non-standard C/C++) libraries?
I want to distribute my binary application instead of distributing the source code.
No, you can't compile an executable that executes in all Linux distributions. However, you can compile an executable that works on most distributions that people will tend to care about.
1. Compile 32-bit. Compile for the minimum CPU level you're willing to support.
2. Build your own version of glibc. Use the --enable-kernel option to set the minimum kernel version you're willing to support.
3. Compile all other libraries you plan to use yourself. Use the headers from your glibc build and your chosen CPU/compiler flags.
4. Link statically.
5. For anything you couldn't link to statically (for example, if you need access to the system's default name resolution or you need PAM), you have to design your own helper process and API; see the sketch after these steps. Release the source to the helper process and let them (or your installer) compile it.
6. Test thoroughly on all the platforms you need to support.
You may need to tweak some libraries if they call functions that cannot work with this mechanism. That includes dlopen, gethostbyname, iconv_open, and so on. (These kinds of functions fundamentally rely on dynamic linking. See step 5 above. You will get a warning when you link for these.)
Also, time zones tend to break if you're not careful because your code may not understand the system's zone format or zone file locations. (You will get no warning for these. It just won't work.)
Most people who do this are building with the minimum supported CPU being a Pentium 4 and the minimum supported kernel version being 2.6.0.
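A hedged sketch of step 5's helper-process idea: rather than calling gethostbyname() (which drags the host's NSS shared objects into even a static build), shell out to the system's own getent(1) and parse its output. The names and the use of popen() are mine; a production helper would use fork/exec and never pass untrusted input through a shell.

    // resolve_helper.cpp - name resolution via an external helper process.
    #include <cstdio>
    #include <string>

    // Returns the first address getent prints for `host`, or "" on failure.
    static std::string resolve(const std::string& host) {
        // NB: sketch only - never interpolate untrusted input into a shell.
        std::string cmd = "getent hosts " + host;
        FILE* p = popen(cmd.c_str(), "r");
        if (!p)
            return "";
        char addr[128] = {0};
        if (std::fscanf(p, "%127s", addr) != 1)
            addr[0] = '\0';
        pclose(p);
        return addr;
    }

    int main() {
        std::printf("example.com -> %s\n", resolve("example.com").c_str());
        return 0;
    }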
There are two things that differ among installations: architecture and libraries.
Having one binary for different architectures is not directly possible; there was an attempt to have a binary for multiple archs in one file (FatELF), but it is not widely used and unlikely to gain momentum. So at the least you have to distribute separate binaries for ia32, amd64, arm, ... (most if not all amd64 distros have a kernel compiled with support for running ia32 code, though).
Distributions contain different versions of libraries. You're fine as long as the API does not change; then you can link to that library. Some libs ensure binary backwards-compatibility within a major number (so a GTK 2.2 app will run fine with a GTK 2.30 lib, but not necessarily vice versa). If you want to be sure, you have to link statically with all the libs that you use, except the most basic ones (probably only libc6, which is binary-compatible across distros AFAIK). This can increase the size of the binary, and it is one of the reasons why e.g. Acrobat Reader is a relatively big download, although the app itself is not especially rich functionality-wise.
There was a transitional period for the C++ ABI, which changed between gcc 2.9 and 3 (IIRC), but the old ABI would really only be found on ancient installations. This should no longer be an issue for you, and if you link statically, it is irrelevant anyway.
Generally no.
There are several barriers.
Different architectures
While a 32-bit binary will run on an x86_64 system, it won't work vice versa. Plus there are a lot of ARM systems.
Kernel ABI
Kernel ABI changes very slowly, but it does change, therefore you can't really support all possible versions. Note that in some places kernel 2.2 is still in use.
What you can do is create a statically linked binary. Such a binary will include all the libraries your app depends on, and it will work on all systems with the same architecture and a reasonably similar kernel version.
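The two compatibility variables this answer names (architecture and kernel version) can be read at runtime with uname(2); a quick sketch:

    // sysinfo.cpp - print the architecture and kernel release in use.
    #include <sys/utsname.h>
    #include <cstdio>

    int main() {
        struct utsname u;
        if (uname(&u) != 0) {
            std::perror("uname");
            return 1;
        }
        std::printf("architecture: %s\nkernel: %s\n", u.machine, u.release);
        return 0;
    }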

Problems with different Linux distributions with a C++ executable

I have C++ code that runs perfectly on my Linux machine (Ubuntu Karmic).
When I try to run it on another version, I have all sorts of shared libraries missing.
Is there any way to merge all shared libraries into single executable?
Edit:
I think I've asked the wrong question. I should have asked for a way to statically link my executable when it is already built.
I found the answer in Ermine & statifier.
There are 3 possible reasons you have shared libraries missing:
you are using shared libraries which do not exist by default on the other distribution, or you have installed them on your host, but not the other one, e.g. libDBI.so
you have over-specified the version at link time, e.g. libz.so.1.2.3, and the other machine has an API-compatible library (same major version 1) but a different minor version, which would probably work with your program if only it would link
the major version of the library has changed, which means it is incompatible, e.g. libc.so.2 vs. libc.so.1.
The fixes are:
don't link libraries which you don't need and which may not be on different distros, or install the additional libraries on the other machines, either manually or by making them dependencies of your installer package (e.g. use RPM)
don't specify the versions so tightly on the command line - link libz.so.1 instead of libz.so.1.2.3.
compile multiple versions against different libc versions.
What you are describing is the use of static libraries instead of shared libraries.
There have been several technical solutions to the original problem noted here, e.g. "compile multiple versions against different libc versions" or "install the additional libraries on the other machines", but if you're in the position of an ISV, there is really just one sane solution:
Get a clean install of an older system (e.g. Ubuntu 6.x if you're targeting desktops, perhaps as far back as Red Hat 9 if you're targeting servers) and build your software on that. Generally libraries (and definitely libc) are backwards compatible, so you won't have problems running on newer systems.
Of course, if you have non-standard or recent-version lib dependencies, this doesn't completely solve the problem. In that case, as others have suggested, if you want to be robust it's better to dlopen() and report the problems (or run with reduced functionality).
I am not too sure, but you may want to create your executable by statically linking all the libraries.
One alternative is to dynamically load shared libraries using dlopen() and if it fails to load, exit gracefully with the message that the dependent library is required for the executable to work.
The user then may install the appropriate library.
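A minimal sketch of that dlopen() approach, using libz.so.1 purely as an example dependency (build with g++ ... -ldl):

    // optional_dep.cpp - load a library at runtime, fail gracefully if absent.
    #include <dlfcn.h>
    #include <cstdio>

    int main() {
        void* h = dlopen("libz.so.1", RTLD_LAZY);
        if (!h) {
            std::fprintf(stderr,
                         "this program needs zlib, please install it (%s)\n",
                         dlerror());
            return 1;
        }
        // zlibVersion() takes no arguments and returns a version string.
        typedef const char* (*zlib_version_fn)();
        zlib_version_fn zv =
            reinterpret_cast<zlib_version_fn>(dlsym(h, "zlibVersion"));
        if (zv)
            std::printf("found zlib %s\n", zv());
        dlclose(h);
        return 0;
    }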
Another possible solution is using statifier (http://statifier.sf.net) or Ermine (http://magicErmine.com).
Both of them are able to pack a dynamic executable and all of its needed libraries into one self-contained executable.

Portable shared objects?

Is it possible to use shared object files in a portable way, like DLLs on Windows?
I'm wondering if there is a way I could provide a compiled library, ready to use, for Linux - the same way you can compile a DLL on Windows and use it on any other Windows (OK, not ANY other, but on most of them).
Is that possible in Linux?
EDIT:
I've just woken up and read the answers. There are some very good ones.
I'm not trying to hide the source code. I just want to provide an already-compiled-and-ready-to-use library, so users with no compilation experience don't need to do it themselves.
Hence the idea is to provide a .so file that works on as many different Linuxes as possible.
The library is written in C++, using STL and Boost libraries.
I highly, highly recommend using the LSB app / library checker. It's going to tell you quickly if you:
Are using extensions that aren't available on some distros
Introduce bash-isms in your install scripts
Use syscalls that aren't available in all recent kernels
Depend on non-standard libraries (it will tell you what distros lack them)
And lots upon lots of other very good checks
You can get more information here, as well as download the tool. It's easy to run: just untar it, run a Perl script and point your browser at localhost; the rest is browser-driven.
Using the tool, you can easily get your library / app LSB certified (for both versions) and make the distro packager's job much easier.
Beyond that, just use something like libtool (or similar) to make sure your library is installed correctly, provide a static object for people who don't want to link against the DSO (it will take time for your library to appear in most distributions, so if I'm writing a portable program, I can't count on it being present), and comment your public interface well.
For libraries, I find that Doxygen works the best. Documentation is very important, it surely influences my choice of library to use for any given task.
Really, again, check out the app checker; it's going to give you portability-problem reports that would otherwise take a year of having the library out in the wild to obtain.
Finally, try to make your library easy to drop 'in tree', so I don't have to statically link against it. As I said, it could take a couple of years before it becomes common in most distributions. It's much easier for me to just grab your code, drop it in src/lib and use it, until (and if) your library becomes common. And please, please: give me unit tests. TAP (Test Anything Protocol) is a good and portable way to do that. If I hack your library, I need to know (quickly) if I broke it, especially when modifying it in tree or in situ (if the DSO exists).
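TAP itself is just text on stdout, so a library can ship tests with no framework at all. A hedged sketch (my_add() stands in for whatever the library under test actually exports):

    // test_mylib.cpp - minimal TAP (Test Anything Protocol) output.
    #include <cstdio>

    static int my_add(int a, int b) { return a + b; }  // hypothetical API

    static int test_no = 0;
    static void ok(bool passed, const char* description) {
        std::printf("%sok %d - %s\n", passed ? "" : "not ",
                    ++test_no, description);
    }

    int main() {
        std::printf("1..2\n");  // the TAP plan: two tests follow
        ok(my_add(2, 2) == 4, "my_add handles small ints");
        ok(my_add(-1, 1) == 0, "my_add handles negatives");
        return 0;
    }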
If you'd like to help your users by giving them compiled code, the best way I know is to give them a statically linked binary + documentation how they can run the binary. (This is possibly in addition to giving the source code to them.) Most statically linked binaries work on most Linux distributions of the same architecture (+ 32-bit (x86) statically linked binaries work on 64-bit (amd64)). It is no wonder Skype provides a statically linked Linux download.
Back to your library question. Even if you are an expert in writing shared libraries on Linux, and you take your time to minimize the dependencies so your shared library works on different Linux distributions, including old and new versions, there is no way to ensure that it will work in the future (say, 2 years). You'll most probably end up maintaining the .so file, i.e. making small modifications over and over again so the .so file stays compatible with newer versions of Linux distributions. This is no fun to do for a long time, and it decreases your productivity substantially: the time you spend on maintaining library compatibility would be much better spent on e.g. improving the functionality, efficiency, security etc. of the software.
Please also note that it is very easy to upset your users by providing a library in .so form which doesn't work on their system. (And you don't have the superpower to make it work on all Linux systems, so this situation is inevitable.) Do you provide 32-bit and 64-bit as well, including x86, PowerPC, ARM etc.? If the .so file works only on Debian, Ubuntu and Red Hat (because you don't have the time to port the file to more distributions), you'll most probably upset your SUSE and Gentoo users (and more).
Ideally, you'll want to use GNU autoconf, automake, and libtool to create configure and make scripts, then distribute the library as source with the generated configure and Makefile.in files.
Here's an online book about them.
./configure; make; make install is fairly standard in Linux.
The root of the problem is that Linux runs on many different processors. You can't just rely on the processor supporting x86 instructions like Windows does (for most versions: Itanium (XP and newer) and Alpha (NT 4.0) are the exceptions).
So, the question is, how do you develop shared libraries for Linux? You could take a look at this tutorial or the Program Library HOWTO.
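The mechanics themselves are short; a hedged sketch with hypothetical names, showing the usual soname convention:

    // mylib.cpp - a tiny shared object. extern "C" keeps the symbol
    // unmangled, which also makes it easy to find via dlsym().
    extern "C" int mylib_answer() { return 42; }

    // Build and version it (commands, hypothetical names):
    //   g++ -fPIC -shared -Wl,-soname,libmylib.so.1 \
    //       -o libmylib.so.1.0.0 mylib.cpp
    //   ln -s libmylib.so.1.0.0 libmylib.so.1  # name recorded by consumers
    //   ln -s libmylib.so.1 libmylib.so        # development link name
    // Consumers then link with: g++ main.cpp -L. -lmylib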
I know what you are asking. For Windows, MSFT has carefully kept the DLLs all compatible, so your DLLs are usually compatible with almost every version of Windows; that's why you call it "portable".
Unfortunately, on Linux there are too many variations (and everyone is trying to be "different" to make money), so you cannot get the same benefits as on Windows, and that's why we have lots of the same packages compiled for different distributions, distro versions, CPU types, ...
Some say the problem is caused by the (CPU) architecture, but it is not. Even on the same arch, there are still differences between distributions. Once you've actually tried to release a binary package, you will know how hard it is - even the C runtime library dependency is hard to maintain. The Linux OS lacks too much standardization, so almost every service involves dependency issues.
Usually you can only build a binary that is compatible with some particular distribution (or several distributions, if you are lucky). That's why releasing Linux programs in binary form always gets screwed up, unless they are bound to some distro like Ubuntu, Debian, or RH.
Just putting a .so file into /usr/lib may work, but you are likely to mess up the scheme that your distro has for managing libraries.
Take a look at the Linux Standard Base - that is the closest thing you will find to a common platform amongst Linux distros.
http://www.linuxfoundation.org/collaborate/workgroups/lsb
What are you trying to accomplish?
Tinkertim's answer is spot on. I'll add that it's important to understand and plan for changes to gcc's ABI. Things have been fairly stable recently, and I guess all the major distros are on gcc 4.3.2 or so. However, every few years some change to the ABI (especially the C++-related bits) seems to cause mayhem, at least for those wanting to release cross-distro binaries and for users who have gotten used to picking up packages from distros other than the one they actually run and finding that they work. While one of these transitions is going on (all the distros upgrade at their own pace), you ideally want to release libs with ABIs supporting the full range of gcc versions in use by your users.
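One way to see which C++ ABI a particular build bakes in is to print the compiler's own macros. A sketch, gcc/libstdc++-specific; note that _GLIBCXX_USE_CXX11_ABI belongs to the later gcc 5 dual-ABI transition, but it illustrates the same recurring issue:

    // abi_report.cpp - report ABI-related macros for this build.
    #include <cstdio>
    #include <string>   // any libstdc++ header defines the dual-ABI macro

    int main() {
    #ifdef __GXX_ABI_VERSION
        std::printf("C++ ABI version macro: %d\n", __GXX_ABI_VERSION);
    #endif
    #ifdef _GLIBCXX_USE_CXX11_ABI
        std::printf("libstdc++ dual-ABI setting: %d\n",
                    _GLIBCXX_USE_CXX11_ABI);
    #endif
        std::printf("compiled with g++ %d.%d\n", __GNUC__, __GNUC_MINOR__);
        return 0;
    }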