Problem with different Linux distributions with a C++ executable - c++

I have a C++ program that runs perfectly on my Linux machine (Ubuntu Karmic).
When I try to run it on another distribution, all sorts of shared libraries are missing.
Is there any way to merge all shared libraries into single executable?
Edit:
I think I've asked the wrong question. I should have asked for a way to static-link my executable when it is already built.
I found the answer in Ermine & Statifier.

There are 3 possible reasons you have shared libraries missing:
you are using shared libraries which do not exist by default on the other distribution, or you have installed them on your host, but not the other one, e.g. libDBI.so
you have over-specified the version at link time, e.g. libz.so.1.2.3, and the other machine has an API-compatible library (same major version 1) but a different minor version than 2.3, which would probably work with your program if only it would link
the major version of the library has changed, which means it is incompatible, e.g. libc.so.2 vs libc.so.1.
The fixes are:
don't link against libraries you don't need and that may not be on other distros; OR install the additional libraries on the other machines, either manually or by making them dependencies of your installer package (e.g. use RPM)
don't specify the versions so tightly on the command line - link libz.so.1 instead of libz.so.1.2.3 (see the example after this list)
compile multiple versions against different libc versions.
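As a minimal sketch of the second fix (myprog, main.o and zlib are just illustrative names), linking through the linker's normal search records the library's SONAME rather than a fully versioned file name, and you can check what the binary actually asks for with readelf:

    # link against the SONAME-level name rather than a fully versioned file
    g++ -o myprog main.o -lz

    # inspect which shared libraries the binary requires
    readelf -d myprog | grep NEEDED
    # should list an entry like: (NEEDED) Shared library: [libz.so.1]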

What you are describing is the use of static libraries instead of shared libraries.

There have been several technical solutions to the original problem noted here, e.g. "compile multiple versions against different libc versions" or "install the additional libraries on the other machines", but if you're in the position of an ISV, there is really just one sane solution:
Get a clean install of an older system (e.g. Ubuntu 6.x if you're targeting desktops, perhaps as far back as Red Hat 9 if you're targeting servers) and build your software on that. Generally, libraries (and definitely libc) are backwards compatible, so you won't have problems running on newer systems.
Of course, if you have non-standard or recent-version library dependencies this doesn't completely solve the problem. In that case, as others have suggested, if you want to be robust it's better to dlopen() them and report the problems (or run with reduced functionality).

I am not too sure, but you may want to create your executable by statically linking all the libraries.

One alternative is to dynamically load the shared libraries using dlopen() and, if a load fails, exit gracefully with a message that the dependent library is required for the executable to work.
The user then may install the appropriate library.
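A minimal sketch of that approach, assuming a hypothetical library name (libplugin.so) and function name (do_work); real code would use the names of the actual dependency. Build it with something like g++ main.cpp -ldl.

    #include <dlfcn.h>
    #include <cstdio>
    #include <cstdlib>

    int main() {
        // Try to load the optional dependency at runtime instead of link time.
        void *handle = dlopen("libplugin.so", RTLD_NOW);
        if (!handle) {
            std::fprintf(stderr,
                         "This program requires libplugin.so: %s\n"
                         "Please install the library and try again.\n",
                         dlerror());
            return EXIT_FAILURE;
        }

        // Look up the symbol we need; dlsym returns a void* we must cast.
        typedef int (*do_work_fn)(void);
        do_work_fn do_work =
            reinterpret_cast<do_work_fn>(dlsym(handle, "do_work"));
        if (!do_work) {
            std::fprintf(stderr, "libplugin.so is present but lacks do_work: %s\n",
                         dlerror());
            dlclose(handle);
            return EXIT_FAILURE;
        }

        std::printf("do_work() returned %d\n", do_work());
        dlclose(handle);
        return EXIT_SUCCESS;
    }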

Another possible solution is using Statifier (http://statifier.sf.net) or Ermine (http://magicErmine.com).
Both of them are able to pack a dynamic executable and all of its needed libraries into one self-contained executable.

Related

How to create a portable C/C++ program on linux using additional libraries?

I need to create a portable Linux program that uses a lot of additional libraries installed via yum (CentOS).
It is forbidden to install new packages on the target machines, and the necessary libraries are not present there.
How do I assemble my program and all the required libraries into a single folder using the gcc toolchain? When I move this folder to another machine, my program should start and run successfully.
My program is ONLY allowed to use dynamic libraries. Static libraries are STRICTLY prohibited.
When I try to replace the rpath of /usr/lib64/ with the directory holding my bundled libraries, the additional libraries fail after transferring to another machine (glibc version conflict).
This sounds like a doomed project, for anything non-trivial.
Static libraries are not the issue though. Since they're just collections of .o files, you can unpack them. You can then state that you have just linked object files. Stupid rules give stupid results.
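As a sketch of that workaround (the archive and object file names here are made up), extracting an archive with ar leaves you with plain object files that you can link directly:

    # unpack the static archive into its member object files
    ar x libfoo.a            # produces foo1.o, foo2.o, ...
    # link the objects as if they were your own compilation units
    g++ -o myprog main.o foo1.o foo2.o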
I am not ignoring software licensing here, though, even though the question seems to gloss over it. You don't need a license for libraries installed via yum, since YOU aren't shipping them. But you absolutely need licenses when you are shipping these libraries in one form or another as part of your product. And given the stupid rules, (L)GPL is likely out of the question, so you will need to obtain commercial licenses for all 7 libraries.

Boost, inserting static library in repository

I'm working on a large project, cross-platform between Linux and OS X.
I would like to include Boost functionality, but I don't want to force all the collaborators to install all the Boost libraries on their machines (which have totally different environments).
If I compile Boost on my machine and put the static libraries inside the repository, what problems could I face? Can my colleagues use the same static libraries in their environments?
AFAIK there will be differences. The static libraries are not the same on OS X and Linux. Compilation also depends on the toolset (see the Boost guide), and there can be issues if IDEs are different.
However, you can compile both versions from one platform (see cross-compilation) and put them in the repository, but putting binary objects in a repository is not the best idea (even if they don't depend on the platform).
I think you could try to compile and link Boost on different platforms, and maybe you will succeed, but you cannot cover all the scenarios for sure. It is better to create a Boost compilation script and tell everyone to use it.
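A sketch of such a script, assuming a reasonably recent Boost source tree (the --with-libraries, link= and variant= options are standard bootstrap.sh/Boost.Build options; the library selection is just an example):

    #!/bin/sh
    # Run from the root of the unpacked Boost source tree.
    ./bootstrap.sh --with-libraries=system,filesystem   # pick the libraries you need
    ./b2 link=static variant=release                    # builds static libs into ./stage/lib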

How do I find what libraries need to be installed on a client linux machine if I compile a binary with a newer version of gcc?

Say I have a C++ binary that was compiled with a certain version of gcc, say 4.4.x, and which is used on a client Linux box.
If I want to upgrade my compiler to use a newer one, say 4.9.3 (because I want to use C++11):
What kind of things would need to be upgraded on the client box to run this new binary? (e.g. .so libraries)
And how would one find this out?
What kind of things would need to be upgraded on the client box to run this new binary?
You will need to ship two shared libraries with your application: libgcc_s and libstdc++. See https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html for more details.
You can ship these libraries in the same directory with your executables if you link using $ORIGIN. See https://stackoverflow.com/a/4742034/412080 for more details.
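For example (a sketch; myapp and the lib subdirectory are placeholders), you can embed a run path relative to the executable's own location so the bundled libstdc++/libgcc_s are found first; note the single quotes, which keep $ORIGIN from being expanded by the shell:

    # keep the bundled runtime libraries next to the executable, e.g. in ./lib
    g++ -o myapp main.o -Wl,-rpath,'$ORIGIN/lib'
    # when packaging, copy the matching libstdc++.so.6 and libgcc_s.so.1 into ./lib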
And how would one find this out?
Run ldd and readelf -d on your executables to see what libraries they need.
I'm guessing from the fact that you mention GCC 3.4 that your client system is running RedHat/CentOS/Scientific Linux 4 which is so old that even Red Hat ended support for it three years ago. If you were running any newer version then you would have been able to take advantage of the Developer Toolsets which include a modified version of GCC that statically links newer parts of the standard library into your binary so that it can run on legacy systems without newer glibc/libstdc++ runtimes.
There are two mechanisms to test compatibility of shared libraries:
The SONAME: a canonical name for the library that is used by the linker to reference the library. You can query the list of required libraries for every ELF object (executable or library) with the ldd command, and you need to do this recursively for each referenced library to get a full list of libraries needed.
The symbol version information. This is an additional constraint that allows adding functionality to existing libraries by introducing version requirements per symbol used -- a program only using symbols that have existed for ages will require a lower minimum version of the library than one that uses new functionality.
Both of these need to be fulfilled in order for the program to run.
The typical approach of Linux distributions is to keep a mapping of SONAME to package name (as multiple versions differing in SONAME can be installed concurrently), and a table of versioned symbols to the package revision these were introduced. The appropriate package development tools for your distribution should be able to create a list of dependency specifications that matches your program's requirements; if they fail to do so because symbols are unknown, it is likely that this version cannot be supported on that release of the distribution.
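Both pieces of information can be read straight out of the binary; for instance (myprog is a placeholder):

    # 1. SONAMEs the program requires
    readelf -d myprog | grep NEEDED

    # 2. Versioned symbol requirements (e.g. GLIBC_2.14, GLIBCXX_3.4.20)
    readelf -V myprog                            # dumps the "version needs" sections
    objdump -T myprog | grep -E 'GLIBC|GLIBCXX'  # or per dynamic symbol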
An alternative could be to statically link all libraries (notably libstdc++) other than libc (and libm and libdl if you use them); then the produced ELF executable depends only on libc. However, with old enough kernels or libc on the target machine, even that might fail to work...
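A sketch of that link line (the flags are standard GCC options; -lm and -ldl only belong there if you actually use those libraries, and other third-party libraries would be listed as .a archives):

    # statically link the C++ and GCC runtime, keep libc/libm/libdl dynamic
    g++ -o myprog main.o -static-libstdc++ -static-libgcc -lm -ldl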
For details, read Drepper's paper: How To Write Shared Libraries

Linking Statically with glibc and libstdc++

I'm writing a cross-platform application which is not GNU GPL compatible. The major problem I'm currently facing is that the application is linked dynamically with glibc and libstdc++, and almost every new major update to these libraries is not backwards compatible. Hence, random crashes are seen in my application.
As a workaround, I distribute binaries of my application compiled on several different systems (with different C/C++ runtime versions). But I want to do without this. So my question is, keeping licensing and everything in mind, can I link against glibc and libstdc++ statically? Also, will this cause issues with rtld?
You don't need to.
Copy the original libraries you linked against to a directory (../lib in this example) in your application folder.
Like:
my_app_install_path/
    bin/
    lib/
    documentation/
Rename your app to something like app.bin. Substitute for your app a little shell script that sets the environment variable LD_LIBRARY_PATH to the library path (concatenating the previous LD_LIBRARY_PATH contents, if any). Now the dynamic loader should be able to find the dynamic libraries you linked against, and you don't need to compile them statically into your executable.
Remember to comply with the LGPL by adding the required attribution to the libraries and pointing out in the documentation where the source can be downloaded.
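A minimal sketch of such a wrapper, assuming the layout above (the script lives in bin/ next to app.bin, with the bundled libraries in ../lib):

    #!/bin/sh
    # Resolve the directory this script lives in, then point the dynamic
    # loader at the bundled libraries before starting the real binary.
    APPDIR="$(cd "$(dirname "$0")" && pwd)"
    export LD_LIBRARY_PATH="$APPDIR/../lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    exec "$APPDIR/app.bin" "$@"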
glibc is under the LGPL. Under section 6. of LGPL 2.1, you can distribute your program linked to the library provided you comply with one of five options. The first is to provide the source code of the library, along with the object code (source is optional, not required) of your own program, so it can be relinked with the library. You can alternatively provide a written offer of the same. Your own code does not have to be under the LGPL, and you don't have to release source.
libstdc++ is under the GPL, but with a major exception. You can basically just distribute under the license of your choice without providing source for either your own code or libstdc++. The only condition is that you compile normally, without e.g. proprietary modifications or plugins to GCC.
IANAL, and you should consider consulting one if you need real legal advice.
Specifying the option -static-libgcc when linking causes GCC to link against a static version of libgcc (the GCC runtime support library), if available on the system; otherwise the option has no effect. The analogous option for the C++ standard library is -static-libstdc++; neither of them makes glibc itself static.
I must question what the heck you are doing with the poor library functions?
I have some cross platform software as well. It runs fine on Linux systems of all sorts. Build with the oldest version of software that you want to support. The glibc and libstdc++ libraries are really very backward compatible.
I have built on CentOS 4 and run it on RHEL 6 beta. No problems.
I can build on stable Debian and run it on testing.
Now, I do sometimes have trouble with some libraries if I try to build on, say old Debian and try to run it on CentOS 5.4. That is usually due to distribution configuration choices that are different, like choosing threading or non-threading.

Boost - "static" vs "shared" libraries

I am building the Boost libraries from the Boost source code and I have two options: build them "static" or build them "shared" (i.e. dynamic). Which is the better idea?
I prefer dynamic (shared) linking, but when I tried to build the Boost shared libraries (on Ubuntu Linux), I got lots of errors and warnings (why are there always errors, warnings, notes and other stuff when compiling, grrrrrrrr), so I don't know whether they compiled correctly.
Thanks.
Better is subjective. Shared cuts down on size, at the risk of dependencies. Static solves dependency issues but increases the size.
For your purposes, I'd say building it in which ever way gets you to code faster is the better solution.
You will almost always want to use shared libraries over static libraries. A key advantage to using shared libraries is that if the library is updated, you can replace the shared libraries with the newer version (assuming binary compatibility) and reap the benefits of the improved implementation without recompiling your application. Additionally, using shared libraries saves space, if multiple programs are using them.
As for the dependencies issue, it is possible to link against a specific version of a shared library, or to place your shared libraries in a special location that is specific to your program -- which doesn't save you space, but which does give you the flexibility associated with shared libraries -- so that should not be a reason to choose static libraries over shared libraries. I am actually hard pressed to come up with a single instance, on a typical desktop, laptop, or server machine where using static libraries is better than using shared libraries.
P.S. If you are trying to install Boost on Ubuntu Linux, just run "sudo apt-get install libboost1.37-dev". You were probably getting errors because you did not install all of Boost's dependencies; these are automatically downloaded and installed when you use Ubuntu's apt-get package manager. Also, it is generally better to use an OS's package manager for installing software packages than to build from source. For example, using the package manager's version of Boost will make it more likely that your software will run smoothly on other Ubuntu Linux deployments which use the package manager's version of Boost.
P.P.S. Boost uses some very advanced features of C++. It kind of pushes C++ to the limit. It is not uncommon to see warnings when compiling Boost. In fact, I have built Boost quite a number of times on various operating systems, and I don't recall a time when there were no warnings.
Static libraries are used when you don't need to dynamically load a component into the program; the code is compiled into the executable.
A shared library is loaded at runtime, and is usually used for plugins or extensions.
IMO a static library is better here, since you will probably load the Boost shared library anyway at the program's startup.
Why do you prefer a shared library?
The recommended way to use the Boost C++ libraries on Linux is via shared linking. On an Ubuntu Linux box already configured for development you should not get any errors at all. Compilation warnings are expected -- for various mindset, technical, and time-constraint reasons a few are produced. Since regular release testing covers Ubuntu, I would not worry about the functionality of the created libraries -- if there's a .so, it should work.