PyInstaller GLIBC_2.15 not found - python-2.7

I generated an executable on 32-bit Ubuntu 11, tested it on 32-bit Ubuntu 10, and it failed with "GLIBC_2.15 not found".

The PyInstaller FAQ says:
Under Linux, I get runtime dynamic linker errors, related to libc. What should I do?
The executable that PyInstaller builds is not fully static, in that it still depends on the system libc. Under Linux, the ABI of GLIBC is backward compatible, but not forward compatible. So if you link against a newer GLIBC, you can't run the resulting executable on an older system. The supplied binary bootloader should work with older GLIBC. However, libpython.so and the other dynamic libraries still depend on the newer GLIBC. The solution is to compile the Python interpreter and its modules (and probably the bootloader as well) on the oldest system you have around, so that they get linked with the oldest version of GLIBC.
and
How to get a recent Python environment working on an old Linux distribution?
The issue is that Python and its modules have to be compiled against the older GLIBC. Another issue is that you probably want to use the latest Python features, but old Linux distributions only ship really old Python versions (e.g. CentOS 5 only has Python 2.4).
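To see which bundled file actually needs the newer GLIBC, you can inspect the PyInstaller output with objdump; a minimal sketch, assuming a one-folder build under dist/myapp (the path is an example):

    # Find which bundled ELF files reference GLIBC_2.15 symbols.
    for f in dist/myapp/*; do
        if objdump -T "$f" 2>/dev/null | grep -q 'GLIBC_2\.15'; then
            echo "$f needs GLIBC_2.15"
        fi
    done

    # On the target (older) machine, check which glibc it provides:
    ldd --version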

Related

Compiling C/C++ for an old Ubuntu version in a newer Ubuntu version

I have build servers that run Ubuntu 18.04 (in a Docker container), but I need to build binaries (various static and shared libraries and executables) for older versions of Ubuntu (e.g. 16.04), without having to install an older version of the OS.
Currently we use sysroot toolchains (that include compiler and libraries etc) and CMake toolchain files for building for other targets (e.g. ARM Poky/Yocto), and it would be ideal if we could use the same approach for building for older (or potentially newer) versions of Ubuntu.
Is it possible?
Anything is possible, but the easiest thing you can do is create a new Docker image (or some other type of machine) with an older OS on it. Then everything will "just work."
If you really don't want to do that, you need to identify all the dependencies, starting with libc, which have symbols missing on the older platform, then figure out how to avoid using those symbols. This will probably waste a ton of time, especially considering you already have one build container (making a second one shouldn't be hard).
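For example, a minimal sketch of that approach, assuming the project builds with CMake (the image tag and paths are illustrative):

    # Build inside an Ubuntu 16.04 container so the output links against 16.04's glibc.
    docker run --rm -v "$PWD":/src -w /src ubuntu:16.04 bash -c '
        apt-get update &&
        apt-get install -y build-essential cmake &&
        mkdir -p build && cd build && cmake .. && make'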

How to compile a program on bleeding edge linux to run on old linux

I have Arch Linux installed, dual-booted with Linux Mint 18.1. At my college we have Lubuntu 16.04 and Ubuntu 14.04 installed. I have also enabled the testing repos on Arch so I get newer packages; as a result, when I compile a C++ program on Arch it won't run on Linux Mint, because the shared library versions don't match, e.g. libMango.so.64 on Arch versus libMango.so.60 on Mint. How can I overcome this?
So I am asking: how can I compile a C/C++ program with a newer compiler and shared libraries so that it runs fine against old shared libraries? Just as I can compile 32-bit programs on a 64-bit machine with the -m32 flag, is there a flag for old shared libraries too?
I am using gcc 8.1.
how can I compile a C/C++ program with a newer compiler and shared libraries so that it runs fine against old shared libraries?
You cannot do that reliably if the API (or even the ABI, including the size and alignment of internal structures, offsets of fields, and vtable organization) of those libraries has changed incompatibly.
In general, you are better off recompiling your source code on the other computer (though your college might forbid that if the source is unrelated to your education). BTW, if your source code sits in some git repository (e.g. on GitHub if it is open source), transferring it between computers is very easy.
Some very few libraries make a genuine (and documented) effort to stay compatible with other versions of themselves in binary form (i.e. at the ABI level), but this is not usual. The Unix and free-software tradition is to care about source-level compatibility, and the POSIX standard cares only about source compatibility.
You might consider using some chroot-ed environment (see chroot(2) and path_resolution(7) & credentials(7)) to have the essential parts of your older distribution on your newer one. Details are distribution specific (on Debian & Ubuntu, see also schroot and debootstrap). You could also consider running a full distribution in some VM, or using containers à la Docker.
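A minimal sketch of the chroot route with debootstrap, assuming an Ubuntu 14.04 (trusty) target (paths are examples):

    # Create a minimal Ubuntu 14.04 root filesystem and compile inside it,
    # so the binary links against trusty's glibc and libstdc++.
    sudo debootstrap trusty /srv/trusty http://archive.ubuntu.com/ubuntu/
    sudo chroot /srv/trusty apt-get update
    sudo chroot /srv/trusty apt-get install -y g++
    # Copy your sources into the chroot (e.g. under /srv/trusty/root/src), then:
    sudo chroot /srv/trusty g++ -o /root/src/myprog /root/src/main.cpp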
And you might try to link your executable statically (at least locally), i.e. compile and link with g++ -static.
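A sketch of that, assuming a single-file program main.cpp:

    # Fully static link: the result does not depend on the target's shared libraries.
    g++ -static -o myprog main.cpp
    # Verify there are no dynamic dependencies left:
    ldd ./myprog    # should print "not a dynamic executable"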

Run my code on older version of linux

I am new to Linux programming and I wonder, is there a way to run (not recompile) my C++ executable on an older version of Linux of the same distribution?
Example: Say I compiled my code on RHEL 6 and want to run my executable on RHEL 4 or 5.
On Windows, when we do this we just install the C++ runtime matching the compiler version we built with.
Example: If I use VS2012 to build a C++ project using C++11, I just need to install the matching C++ runtime on the client machine to run my application, no matter what version of Windows I am using (starting from Windows XP, of course).
By far the easiest way is to take advantage of the strong backward compatibility of glibc and the GCC runtime libraries: compile your executable on the oldest OS you want it to run on, and it should work on anything later without recompiling (though some symlinks may be needed to satisfy the dependency names the loader expects).
In general it is best to compile it for each distribution you want to support, so no unexpected conflicts appear.
Actually, yes. Find your app's dependencies (using e.g. ldd) and copy them (e.g. libstdc++.so.6) from your build system to somewhere on your target system (e.g. /mylibs). Point your app there (e.g. using patchelf's --set-rpath and --set-interpreter). Your app should run (test it!). If not, it's likely that your glibc is incompatible with the older kernel; you can solve this by recompiling the required version of glibc to support the older kernel, using the --enable-kernel=<version> ./configure switch. If your required version of glibc doesn't support that kernel version, you can supply the missing functions in .so files and load them with LD_PRELOAD.
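A minimal sketch of that copy-and-patch approach (the binary name, library paths, and /mylibs are examples):

    # On the build system: see what the binary needs.
    ldd ./myapp

    # Copy the needed libraries (and the dynamic loader) to the target, e.g. into /mylibs.
    mkdir -p /mylibs
    cp /usr/lib/x86_64-linux-gnu/libstdc++.so.6 /mylibs/
    cp /lib/x86_64-linux-gnu/libc.so.6 /mylibs/
    cp /lib64/ld-linux-x86-64.so.2 /mylibs/

    # Point the app at the copied libraries and loader.
    patchelf --set-rpath /mylibs ./myapp
    patchelf --set-interpreter /mylibs/ld-linux-x86-64.so.2 ./myapp
    ./myapp   # test it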

How can I work out why a specific version of a library is in the dependencies?

I'm building a large C++ project with CMake on Ubuntu 12.04 and then taking the resulting binary package and trying to run it on Ubuntu 11.04. However, the program fails, saying it needs glibc version 2.14 but can only find up to version 2.13.
How can I find out exactly why glibc >= 2.14 is required?
Unlike most libraries, glibc versions its symbols. Every symbol is tagged with a value (e.g. "GLIBC_2.3.4") representing the version of the library where its interface was last changed. This allows the library to contain more than one version of a given symbol and to support binaries compiled against older versions while preserving the ability to evolve. You can see this detail with objdump -T /lib/libc.so.6.
Basically, something in your app was linked against a symbol that was changed since 11.04. Try objdump -T on your binary and see what tags it's looking for.
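For instance, a quick sketch of that check (the binary name is an example):

    # Show every glibc symbol version the binary demands.
    objdump -T ./mybinary | grep -o 'GLIBC_[0-9.]*' | sort -u

    # Then look for the symbols behind the specific offending version:
    objdump -T ./mybinary | grep 'GLIBC_2\.14'

For GLIBC_2.14 specifically, the usual culprit is memcpy, whose symbol version was bumped in glibc 2.14.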
But broadly, backwards compatibility doesn't work like that in Linux. If you want something to run on older software, you should build it on older software. It's possible to set up a backwards-compatible toolchain on more recent distros, but it's not the default.
When you build your C++ project, it will link to the version of the glibc library on your 12.04 installation. What are the linker options in your build command?
Without knowing exactly what you are building, I'd say you might be better off building on 11.04 and then running on 12.04.

Can I target older linux with newer gcc/clang? C++

Right now I compile my C++ software on a certain old version of linux (SLED 10) using the provided gcc and it can run on most newer versions as they have a newer glibc. Problem is, that old gcc doesn't support C++11 and I'd really like to use the new features.
Now I have some ideas, but I'm sure others have the same need. What's actually worked for you?
Ideas:
Build on a newer system, static link to newer glibc. (Not possible, right?)
Build on a newer system, compile and link against an older glibc.
Build on an older system using an updated gcc, link against older glibc.
Build on a newer system, dynamic link to newer glibc, set RPath and provide our glibc with installer.
As a bonus, my software also supports plugins and has an SDK. I'd really prefer that my customers could compile against my libraries without a huge hassle.
Thanks in advance. Ideas welcome, proven solutions preferred.
Build with the newer gcc. Either install the new compiler on the old machine, or compile on your new machine and install the necessary dynamic libraries on the old machine.
Note that multiple versions of libc (and also libstdc++) are supported on a single machine, since they are typically versioned (e.g. libc.so.5, libc.so.6, etc.).
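A commonly used variant of this is to link the newer GCC runtime statically so that only the old system's glibc is needed at run time; a minimal sketch, assuming a single main.cpp:

    # Build with the newer g++, but bundle its runtime into the executable:
    # -static-libstdc++ and -static-libgcc avoid depending on a newer
    # libstdc++.so / libgcc_s.so on the target. Note that the glibc version
    # requirement is still set by the system you link on.
    g++ -std=c++11 -static-libstdc++ -static-libgcc -o myapp main.cpp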