I've been trying to find a way to make my applications compatible across different Ubuntu LTS versions.
However, most of the time it ends in a "symbol lookup error:" or "cannot find libxxxx.so.xx".
The requirement is very clear: a developer should be able to compile the code on any of the last three Ubuntu LTS releases (currently 12.04, 14.04 and 16.04), and the output should run on all three. But the problem turns out to be complex.
Is there any way to do this?
Thanks in advance.
Linux binaries compiled on older distributions are generally compatible with newer ones. The kernel invests a lot of effort in being backwards compatible, as does glibc. This may not be true for all libraries, but in my experience most try.
So what you probably want to do is compile your app on the oldest supported distro; it will then most likely work on the newer one(s).
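For illustration, here is a minimal sketch (assuming glibc; gnu_get_libc_version() is a GNU extension) showing which glibc a binary was compiled against versus which one it actually runs on, which is handy when chasing those "symbol lookup error" messages:

```cpp
// Build on any of the three LTS releases:  g++ -o glibc_check glibc_check.cpp
// To see which glibc symbol versions a binary *requires*, you can also run:
//   objdump -T ./glibc_check | grep GLIBC_
#include <cstdio>
#include <gnu/libc-version.h>

int main() {
    // Prints e.g. "2.15" on Ubuntu 12.04, "2.19" on 14.04, "2.23" on 16.04.
    std::printf("runtime glibc:      %s\n", gnu_get_libc_version());
    // __GLIBC__ / __GLIBC_MINOR__ reflect the glibc you compiled against.
    std::printf("compile-time glibc: %d.%d\n", __GLIBC__, __GLIBC_MINOR__);
    return 0;
}
```

If the compile-time version is newer than the runtime version on the target box, that is exactly the mismatch the "compile on the oldest distro" advice avoids.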
A really simple trick is to ... compile from source on the appropriate distro.
You can almost automate this, as Ubuntu / Canonical give you free accounts on Launchpad. For example, I use my PPA for backports or unpackaged sources that I want at work, at home, or on Travis CI, ... in a particular release flavour.
Otherwise, the very obvious alternative is of course to create a static build, which is independent of the runtimes of the particular release. That will work 'forever', or until the kernel changes. In the 20+ years I have used Linux, such a change has occurred once (with the introduction of the ELF format).
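As a rough sketch of what a fully static build looks like (the flags are standard GCC; the NSS caveat below is the usual reason people avoid fully static glibc binaries):

```cpp
// Fully static build: the binary carries its own copies of libstdc++,
// libgcc and libc, so it has no runtime library dependencies at all.
//
//   g++ -static -o hello hello.cpp
//   ldd ./hello      ->  "not a dynamic executable"
//
// Caveat: a statically linked glibc still dlopen()s NSS modules for
// getaddrinfo()/getpwnam() and friends, so name/user lookups can
// misbehave when the target's glibc differs from the build machine's.
#include <iostream>

int main() {
    std::cout << "runs on any distro with a compatible kernel\n";
    return 0;
}
```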
Related
Assume that in my company we have the same application, written in C++, running on RHEL 5, 6 and 7 machines.
I want to build on one single build server (which is running RHEL 7) and get executables that run on the older versions of RHEL. May I know if that is achievable?
I expect that if I build on RHEL 7 against the versions of gcc and glibc (and other libs) that ship with RHEL 5, the resulting executable should run on RHEL 5. Is my understanding correct? Or are there more things to pay attention to?
"I expect that if I build on RHEL 7 against the versions of gcc and glibc (and other libs) that ship with RHEL 5, the resulting executable should run on RHEL 5."
In theory, yes. In practice, installing multiple versions of glibc and other libraries on your RHEL 7 system is probably a lost cause, particularly the very old ones required by RHEL 5, and above all glibc, which expects to know quite a lot about the system.
The reverse may be easier: build everything on RHEL 5, statically link everything you can except glibc (it's essentially impossible to link statically against glibc) and hope that forward binary compatibility holds well enough. This is the route usually taken ("build on the oldest Linux distribution you want to support"), but I doubt that glibc's forward binary compatibility will stretch that far, as RHEL 5 is extremely ancient compared to RHEL 7.
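A minimal sketch of that compromise, assuming a GCC new enough for -static-libstdc++ (4.5+; RHEL 5's stock GCC 4.1 is too old, so you would use a newer GCC installed alongside it):

```cpp
// Build on the oldest release (here RHEL 5), statically link the C++
// runtime, and leave only glibc dynamic:
//
//   g++ -static-libstdc++ -static-libgcc -o app app.cpp
//   ldd ./app    -> lists libc and friends, but no libstdc++
#include <string>
#include <iostream>

int main() {
    std::string msg = "libstdc++ is baked in; only glibc resolves at runtime";
    std::cout << msg << '\n';
    return 0;
}
```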
Getting back to the original plan, it's probably easier to install RHEL 5 and 6 in containers inside the RHEL 7 machine and build those versions there. After all, it's a bit like installing their gcc and library versions on the RHEL 7 machine, but in extremely well-separated sysroots, and without the overhead of maintaining different build machines (they are all clients of the same kernel).
Finally, the extreme route is to use an alternative libc (one that depends only on the kernel, targeting the oldest kernel version you want to support) and compile everything statically. This can be done e.g. with musl, but you'll have to compile your compiler, your libc and all your dependencies. Still, the nice result is that you'll be able to build completely standalone executables, able to run on virtually any kernel newer than the one you decided is your minimum requirement.
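A sketch of the musl route, assuming you have built a musl cross toolchain (the x86_64-linux-musl-g++ name below comes from musl-cross-make; the stock musl-gcc wrapper covers C only):

```cpp
// Fully static musl build:
//
//   x86_64-linux-musl-g++ -static -O2 -o app app.cpp
//   file ./app   -> "statically linked" ELF; no glibc involved at all
//
// The result depends only on the kernel ABI, so it runs on any Linux at
// or above your chosen minimum kernel version.
#include <iostream>

int main() {
    std::cout << "standalone binary: no libc needed on the target\n";
    return 0;
}
```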
Red Hat Developer Toolset (DTS) is a GCC offering that lets you compile on one major OS release and deploy on that version plus the next major release. This will cover your RHEL 6 and 7 work. For RHEL 5, you would continue to build separately.
DTS installs new versions of GCC alongside the original/base version, so it won't wreck your OS.
I also like the containers idea from Matteo.
See https://developers.redhat.com/products/developertoolset/overview/
I am using GCC to compile a C++ application on Ubuntu 13. I want to be able to use C++11 features in my code, but at the same time still be able to produce a binary that my users can run on older versions of Ubuntu.
If I compile on Ubuntu 13 with the latest version of GCC, my binary will not run on Ubuntu 12, since glibc is not forward compatible:
(How compatible are different versions of glibc?)
What are my options?
Is this even possible without requiring my users to jump through massive hoops?
If not, what do my users have to do to be able to run the binary (i.e. can they install the newer glibc on the older version of Ubuntu)?
Note: I do not want to consider statically linking glibc, since:
I've read that this is a very bad idea
Licensing issues
Cross-distribution compatibility issues
Currently my application does not use any C++11 features and I compile on an older version of Ubuntu with an older version of GCC to avoid this problem. But it makes me sad not being able to use the latest and greatest language features :(
You can try the Boost libraries, which offer many of the same features as C++11 and are "more retro-compatible" than C++11: they compile easily on older versions of Ubuntu. See the sketch below.
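For example, a small C++03 sketch (using only header-only parts of Boost) that mirrors common C++11 idioms and compiles fine with GCC 4.6 on Ubuntu 12.04:

```cpp
// Boost stand-ins for C++11 features, valid C++03:
//   boost::shared_ptr / boost::make_shared  ~  std::shared_ptr / std::make_shared
//   BOOST_FOREACH                           ~  range-based for
#include <iostream>
#include <vector>
#include <boost/shared_ptr.hpp>
#include <boost/make_shared.hpp>
#include <boost/foreach.hpp>

int main() {
    std::vector<int> v;
    v.push_back(1);
    v.push_back(2);

    boost::shared_ptr<int> p = boost::make_shared<int>(40);

    BOOST_FOREACH(int x, v) {
        std::cout << x + *p << '\n';   // prints 41, 42
    }
    return 0;
}
```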
Otherwise, the best option might be to ask the users of Ubuntu 12.04 to upgrade their GCC from 4.6 to 4.7 or newer:
http://www.swiftsoftwaregroup.com/upgrade-gcc-4-7-ubuntu-12-04/
You are asking "how do I use code that isn't on older systems".
The answer is, of course, "Include the code with your project".
If you think through what you're asking, you'll realize that in any case you'll need the code for the C++11 functions in libstdc++. So if they aren't on Ubuntu 12, YOU have to add them. Therefore, you'd have to link it statically; it's the only way to ensure it will run on an arbitrary Ubuntu 12 system.
Well, you could make a fancy installation, but in the end it'd just be your app "dynamically linking" to your bundled libstdc++, so it may as well be statically linked, since no other program is going to be looking for it on Ubuntu 12.
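A sketch of what that looks like in practice (standard GCC flags; note glibc itself stays dynamic, so you still want to build on the oldest glibc you intend to support):

```cpp
// Use C++11 via the build machine's newer GCC, then bundle the matching
// libstdc++ into the binary so Ubuntu 12's older libstdc++ is never used:
//
//   g++ -std=c++11 -static-libstdc++ -static-libgcc -o app app.cpp
#include <iostream>
#include <memory>
#include <vector>

int main() {
    auto p = std::make_shared<std::vector<int> >(3, 7);  // C++11: auto, make_shared
    for (int x : *p) {                                   // C++11: range-for
        std::cout << x << '\n';
    }
    return 0;
}
```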
In general, a C++ library is binary compatible only if the same compiler is used and (!) the compiler versions match (you might get lucky, though). There is no way to be portable in this sense, other than writing C code.
If I want my software to run on Red Hat Linux 6.0, do I have to build it on 6.0? Or can I build it on 6.3? (Similar question for 5.X.) I'm asking a general question about the runtime implications of shared libraries and similar "automatic" dependencies that get sucked in during the build process, and I'm interested only in the divergence between minor releases. I know that more things change between major releases. I'm interested specifically in RH and RH-derived distributions. My program is written in C and C++. I think the biggest dependency I need to worry about is the GCC runtime libraries for C and C++. Is there a web page I can use to verify which GCC updates were used in which RH minor releases?
To be clear: I understand the goal of, and commitment to, compatibility between update releases going forward. Upgrading from 6.1 to 6.2 should not break my existing applications. In order to build on a newer update and run on an older one, I would need the reverse kind of compatibility. I need 6.1 to be compatible with things built on 6.2. In general this kind of compatibility is impossible to achieve on a widespread basis, across all config files, libraries, etc. But I only need a narrow slice of reverse compatibility.
I have an app that was designed, written and built successfully on 6.1. Now I want to build it on 6.2, but I want it to still run correctly on 6.1. Is it general software release practice on Linux to always build on the oldest update release that you want to support? Or do most people use trial and error to determine whether their app runs on older update releases? If you use trial and error, how much "error" shows up in the equation?
I was recently tasked with performing a feasibility study around switching from DOS to Linux as the OS for running our industrial control software (developed internally). In a nutshell, I have been restricted to using Ubuntu 8.04 (with a vendor-supplied kernel upgrade providing drivers for the hardware on the board). As this release is no longer supported, I am unable to update or install software, meaning that I am stuck using GCC version 4.2. I want to be able to use C++ and preferably the Boost libraries, but currently it seems I will not be able to do so.
Basically I am asking how companies/professionals go about using Linux as a development environment. Is what I described above a common occurrence? Do you simply pick a version and a compiler and stick with them throughout the product's lifetime to ensure that the development environment doesn't change too much, or can you freely upgrade the kernel, compiler etc. as you go along? Is it common to be constrained by what a particular vendor can provide? Would anyone be prepared to give their opinion as to whether Ubuntu 8.04 is a suitable choice of OS for developing industrial control software?
I am not a Linux expert at all, but my research and experimentation so far are leading me to the conclusion that I should abandon the Linux approach and use DOS. Our company has no Linux knowledge and is very small, and for personal career reasons I have no interest in learning a redundant technology like DOS.
I realise this is not exactly a yes/no type question but any responses will be gratefully received.
GCC 4.2 has no C++11 support, but its C++03 support should be good, and you should be able to find a version of Boost that can deal with that quite easily.
Ultimately, Linux has many upsides you won't find in DOS: no segmented memory model, virtual memory, and other things that will make it easier and faster to develop software, not to mention the additional libraries you might need, as absolutely nobody whatsoever supports DOS today.
With Linux-based systems there's not much reason to stick with a fixed OS+toolchain version, because backwards compatibility is taken very seriously in the Unix world. Sometimes it is good to target a certain fixed system, but frankly those cases are rather rare, and even then development can be done on up-to-date systems as long as testing is done on the target machine/platform.
Basically, you could just upgrade to, for example, Ubuntu 12.04 LTS (long-term support) for development and stick with it; it is very unlikely that there would be any sort of compatibility problem on the target platform/machine.
Libraries and such tend to change between Linux distros, between new versions of the same distro, and between other *nix OSes.
I once worked on a C++ application that had to run on both Windows and RHEL. I was the 'Linux guy' on the team, so I got to deal with coaxing all the open-source Linux libraries we were using to build and work on Windows (using cygwin), and with getting the latest changes made by the devs working on Windows to work on Linux.
Midway through development, we upgraded to a newer version of RHEL. It was not a fun experience. Library versions had changed, some had been removed in favor of other 'equivalent' libs, etc. Shaking out all of the problems caused by changing gcc versions took a little while too (granted, the newer gcc version was a bit less forgiving and exposed some stuff in our code that probably wasn't quite right anyway).
A couple of days before a big demo, management informed us that the app needed to run on Solaris as well. That was not a trivial task -- Solaris is NOT Linux. They hinted about wanting it to run on IRIX at one point. Glad that didn't happen.
I would recommend that you pick a specific version of a Linux distro, GCC, etc. and stick with it throughout development. Upgrading that stuff can happen later, when the software is in maintenance. RHEL offers long-term support, at a cost. You might also consider the newly released Ubuntu 12.04 LTS.
This question may seem blindingly obvious and I realise I am putting myself up for a large number of downvotes, but I am very new to Linux dev and have only been working with it for a short while.
I have been writing an application on Ubuntu 12.04 (kernel 3.2.0) in C++, then copying it via scp to an Ubuntu 8.04 (kernel 2.6.30) installation on another device. I have been noticing some very strange behaviour that I simply cannot explain. I naively assumed that I could run this executable on a previous version, but it is beginning to dawn on me that this may not be the case. In future, must I ensure that the Linux version I build my application on is identical to the one it will be running on in the field? Or must I actually build the application from source directly on the device it will be running on? I am very new to Linux dev but not new to C++, so I realise this question may seem facile, but this is the kind of issue I have simply not seen covered in books/tutorials etc.
Most of the time, it's not the kernel that stops you, it's glibc.
glibc is backwards compatible, meaning programs compiled and linked against an older version will work exactly the same with a newer version at runtime. The other way around does not hold.
Best, of course, is to build on the distro you want to run on. If you can't do that, build on the one with the oldest glibc installed.
It's also very hard to build and link against an older glibc than the system one; installing or building glibc tends to mess up your system more than it's worth. Set up a VM with an old Linux and build there instead.
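To see why the errors appear, here is an illustrative sketch (the symbol versions below assume x86_64 and typical Ubuntu glibc versions; check your own binary with objdump -T or readelf -V):

```cpp
// Why a binary built against a newer glibc fails on an older one: glibc
// symbols are versioned, and the glibc present at link time decides which
// versions the binary will require at runtime.
#include <cstring>

int main() {
    char dst[16];
    // Linked on glibc >= 2.14 (e.g. Ubuntu 12.04's 2.15), this typically
    // resolves to memcpy@GLIBC_2.14. On Ubuntu 8.04 (glibc 2.7) only an
    // older memcpy version exists, so the loader refuses to start the
    // binary with something like:
    //   version `GLIBC_2.14' not found (required by ./app)
    std::memcpy(dst, "hello", 6);
    return dst[0] == 'h' ? 0 : 1;
}
```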