Building an application for multiple versions of RHEL - C++

Assume that in my company we have the same application, written in C++, running on machines with RHEL 5, 6 and 7.
I want to build on one single build server (which is running RHEL 7) and get executables that run on the older versions of RHEL. May I know if that is achievable?
I expect that if I build on RHEL 7 with the versions of gcc and glibc (and other libs) available on RHEL 5, the resulting executable should run on RHEL 5. Is my understanding correct? Or are there more things to pay attention to?

I expect that if I build on RHEL 7 with the versions of gcc and glibc (and other libs) available on RHEL 5, the resulting executable should run on RHEL 5.
In theory, yes. In practice, installing multiple versions of glibc and other libraries on your RHEL 7 system is probably a lost cause - especially the very old ones required by RHEL 5, and especially glibc, which expects to know quite a lot about the system.
The reverse may be easier - build everything on RHEL 5, link statically against everything you can except glibc (it's essentially impossible to link statically against glibc) and hope that forward binary compatibility holds well enough. This is the route that is usually taken ("build on the oldest Linux distribution you want to support"), but I doubt that forward binary compatibility of glibc will hold here, as RHEL 5 is extremely ancient compared to RHEL 7.
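A minimal sketch of that kind of link line, assuming a GCC new enough to understand -static-libstdc++ (GCC 4.5 or later; the extra library names here are purely illustrative):

    # Build on the oldest target, statically linking everything but glibc;
    # -Wl,-Bstatic/-Bdynamic toggles static resolution for the libs in between
    g++ -o myapp main.cpp \
        -static-libgcc -static-libstdc++ \
        -Wl,-Bstatic -lfoo -lbar -Wl,-Bdynamic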
Getting back to the original plan, it's probably easier to install RHEL 5 and 6 in containers on the RHEL 7 machine and build inside those. After all, it's a bit like installing their gcc and library versions on the RHEL 7 machine, but in extremely well separated sysroots - and without the overhead of maintaining different build machines (they are all clients of the same kernel). A sketch follows.
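For example (an illustrative sketch only; it assumes a CentOS 5 image as a stand-in for RHEL 5 and that your Makefile works with that release's toolchain):

    # Run the build inside an older userland, on the RHEL 7 kernel
    docker run --rm -v "$PWD":/src -w /src centos:5 \
        sh -c 'yum install -y gcc-c++ make && make'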
Finally, the extreme route is to use an alternative libc (one that depends only on the kernel, choosing the oldest kernel you want to support) and to compile everything statically. This can be done e.g. with musl, but you'll have to compile your compiler, your libc and all your dependencies yourself. Still, the nice result is that you'll be able to build completely standalone executables, able to run on virtually any kernel newer than the one you decided is your minimum requirement.
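What that build could look like, assuming you have already produced a musl-targeting cross g++ (e.g. via musl-cross-make; the toolchain name is illustrative):

    # Fully static binary: nothing needed from the target at run time
    x86_64-linux-musl-g++ -static -O2 -o myapp main.cpp
    file myapp    # expect: statically linked
    ldd myapp     # expect: not a dynamic executable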

Red Hat Developer Toolset (DTS) is a GCC offering that lets you compile on one major OS release and deploy on that release plus the next major release. This will cover your RHEL 6 and 7 work. For RHEL 5, you would continue to build separately.
DTS installs its new versions of GCC alongside the original/base version, so it won't wreck your OS.
I also like the containers idea from Matteo.
See https://developers.redhat.com/products/developertoolset/overview/
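Typical usage looks like this (a sketch; the exact collection name depends on which DTS release you install):

    # Install a DTS compiler and run builds with it enabled
    yum install devtoolset-7-gcc-c++      # assumes the DTS/SCL repo is enabled
    scl enable devtoolset-7 'g++ --version'
    scl enable devtoolset-7 'make'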

Related

Linux console app built with binary library works on my machine, but not on others with the same Linux version

I have a Linux application written in C++ with proprietary source code, so this question is about any mechanism that could cause what I describe below to happen.
Anyway, the application uses some library functions that perform some data processing and spit out a result. On my machine, I get the expected result. On others, the app either segfaults or outputs nonsense.
It is distributed as a bit of source code plus a static library built on my machine; the recipient can then build it with make and run it (and possibly distribute it). The OS environment is Ubuntu 18.04.4 LTS (WSL1) and is essentially the same on all of the machines.
The ldd utility indicates that the same libraries are being used and the Ubuntu and g++ versions are identical as well.
The application was run in gdb on one of the machines where it segfaults and it turned out that a specific buffer hadn't been allocated and was nullptr. The size parameter being passed to the constructor in this case is a #define macro.
I am aware that the preferred approach on Linux is to distribute source code, not precompiled binaries, but for IP protection purposes, that might not always be possible. I have also read that there could be subtle library version or kernel-level incompatibilities that can cause this kind of weirdness, but like I said, that doesn't seem to be the case here.
Q: Does anybody know why I might be observing this strange behavior?

Using GCC with new glibc and binutils to build software for system with older sysroot

I have had this question for some months and haven't been able to find an answer with Google.
Background:
I am cross compiling software for ARM-based controllers which run the Linux distribution ptxdist. The complete Linux image is built with a cross gcc (4.5.2) that was built against glibc-2.13 and binutils-2.21.
The supported C++ standard is quite old, so I built a new toolchain which supports C++11 (gcc 4.8.5). It is built against glibc-2.20 and binutils-2.24. I want to use that new compiler for my application software on the controller (not the complete image, just this one "main" binary), which is updated through a package management system.
The software seems to run. I just need to set LD_LIBRARY_PATH so the binary picks up libstdc++.so.6.0.19 instead of libstdc++.so.6.0.14. It does not accept the new libc (libc-2.20 instead of libc-2.13), though.
So the binary uses libstdc++.so.6.0.19 and the rest of the system is unchanged.
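For reference, the per-binary override looks something like this (the directory is illustrative):

    # Let only this invocation see the newer libstdc++
    LD_LIBRARY_PATH=/opt/new-libstdc++ ldd ./main    # check what resolves
    LD_LIBRARY_PATH=/opt/new-libstdc++ ./main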
Question:
Why is this working?
What risks should I expect when running this software, and should I do it at all?
For example, will the binary miss some functions of glibc-2.20 in the future because it only gets glibc-2.13 on the target machine? Building gcc-4.8.5 against glibc-2.13 is not possible.
I have read so far that it depends on changes to the ABI:
Impact on upgrade gcc or binutils
There it is said that C code is compatible if built by GCC 4.1 through GCC 4.8.
Thank you!
glibc 2.14 introduced the memcpy@GLIBC_2.14 symbol, so pretty much all software compiled against glibc 2.20 will not work on glibc 2.13, because that symbol is missing there. Ideally, you would build your new GCC against glibc 2.13, not glibc 2.20. You claim that building GCC 4.8.5 against glibc 2.13 is not possible, but this is clearly not true in general.
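You can check which versioned glibc symbols a binary actually requires, for example (the binary name is a placeholder):

    # List the glibc symbol versions the binary depends on; a line with
    # GLIBC_2.14 (e.g. for memcpy) means glibc >= 2.14 is needed at run time
    objdump -T myapp | grep GLIBC_ | sort -u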
Some newer C++ features will work with the old system libstdc++ because they depend exclusively on templates (from header files) and not on any of the new code in libstdc++.
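A quick way to convince yourself for a given feature is to compile a small test and look at the versioned libstdc++ symbols it pulls in (a sketch; the lambda stands in for any purely header/template-based feature):

    printf 'int main() { auto f = [](int x){ return x + 1; }; return f(41) != 42; }\n' > t.cpp
    g++ -std=c++11 -o t t.cpp
    objdump -T t | grep GLIBCXX_ || echo "no versioned libstdc++ symbols required"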
You could also investigate how the hybrid linkage model works in Red Hat Developer Toolset. It links the newer parts of libstdc++ statically, while relying on the system libstdc++ for the common, older parts. This way, you get proper interoperability for things like exceptions and you do not have to install a newer libstdc++ on the target.
Good material for this could be here:
Multiple glibc libraries on a single host
Glibc vs GCC vs binutils compatibility
My final solution is this:
I built GCC 4.8.5 as a cross compiler. I could not manage to build it against the older glibc-2.13, only against version 2.20. It may be possible, but in my case it did not work. Anyway, that is not a problem, because I also built it with the sysroot flag: compiling new software then depends completely on my old system, including the C runtime. I don't get a new C++ standard with this, but with compiler optimizations switched on I saw smaller binaries and better performance.
Regarding a new C++ standard, I could link a newer libstdc++ which came with my cross compiler by using -l:libstdc++.so.6.0.19 in LDFLAGS. Therefore I only need to copy an additional libstdc++ onto my target beside the old libstdc++.
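In concrete terms, the link step looks something like this (the cross compiler name and library path are illustrative):

    # GNU ld's -l:NAME picks up that exact file name instead of libNAME.so
    arm-linux-gnueabi-g++ -o main main.cpp \
        -L/opt/cross/arm/lib -l:libstdc++.so.6.0.19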
After having a look at the symbols used by the new lib with
strings libstdc++.so.6.0.19 | grep GLIBC_
you can observe that it doesn't depend on any symbols newer than GLIBC_2.4. It looks like I will never run into the problem of missing symbols.
So in my case I am lucky: I can use the new C++11 standard without changing anything else in the system. If newer symbols had been introduced, you would need to follow the links above in my post, which are pretty informative, but I would never try that myself. For example, with GCC 4.9.4, libstdc++.so.6.0.20 has symbols referencing GLIBC_2.17. That could give me trouble, as I am cross compiling against GLIBC_2.13.

Compiling and running C++ apps on different Ubuntu versions

I've been trying to find a way to make my applications compatible between different Ubuntu LTS versions.
However, most of the time it ends up with a "symbol lookup error" or "cannot find libxxxx.so.xx".
The requirement is very clear: a developer should be able to compile the code on any of the last 3 Ubuntu LTS versions (currently 12.04, 14.04 and 16.04) and the output should be able to run on all 3 of them. But the problem is getting complex.
Is there any way to do this?
Thanks in advance.
Linux binaries compiled on older distributions are generally compatible with newer ones. The kernel invests a lot of effort in being backwards compatible, as does glibc. This may not be true for all libraries, but in my experience most try.
So what you probably want to do is compile your app on the oldest supported distro, and it will most likely work on the newer one(s).
A really simple trick is to ... compile from source on the appropriate distro.
You can even almost automate this, as Ubuntu / Canonical give you free accounts on Launchpad. For example, I use my PPA for backports or unpackaged sources I want at work, at home, or on Travis CI, in a particular release flavour.
Otherwise, the very obvious alternative is of course to create a static build that is independent of the runtimes of the particular release. That will work 'forever', or until the kernel changes. In the 20+ years I have used Linux, such a change has occurred once (with the introduction of the ELF format).

Developing C++ applications to run on an embedded Linux setup

I am required to write a C++ application to run on an embedded Linux setup (DMP Vortex86DX processor). The vendor provides a minimal Linux installation image that can be installed to the board and contains the appropriate hardware drivers. My question is motivated by the answer to my previous question about writing Linux software on a particular kernel to run on a different kernel. I don't really know where to start when it comes to writing the software with regards to ensuring compatibility.
My instinctive approach would be to install the same version of g++ on the embedded device and on my desktop development machine, write the application on the dev machine, copy it to the board and compile it there. This seems like madness, though, and I find it hard to believe that this is how embedded software is developed. With regard to the answer to my previous question, is there a way I can simply build on my desktop but use the version of glibc that exists on the embedded device - and if so, how can I enforce linkage to a specific version? Or is it possible to build everything statically so that the application doesn't link to anything dynamically (I doubt this is possible)?
I am a total novice at embedded development and foresee months of frustration unless I can get hold of some good advice or resources. Any pointers or suggestions of where to start will be very gratefully received, no matter how simple or trivial they seem - I really am starting at the very bottom with regards to embedded stuff.
OK, given that the Vortex86SX/DX/MX claims to be x86-compatible, a small set of compiler switches should enable you to compile code for your target machine: -m32 to ensure 32-bit code, and no -march switch targeting a specific CPU.
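Concretely, a sketch following those flags (the output of file shown in the comment is what you'd expect to see):

    # 32-bit code, no CPU-specific -march
    g++ -m32 -o app app.cpp
    file app    # expect: ELF 32-bit LSB executable, Intel 80386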
Then you'll need to link your code. As long as you don't use anything fancy, just simple, established glibc functions, I'd expect the ABI to be the same on your development machine and the embedded system. In other words, you compile against your host libraries, copy the binary to the embedded system, and it should simply run using the libraries available there.
If X-Linux were to use some other libc, like uClibc or similar, then you'd need a cross compiler on your host. I have little experience with Ubuntu in that regard, but I know that the sys-devel/crossdev package for Gentoo Linux makes generation of cross compilers very easy. This can be both for different architectures (not needed in your case) and for different libraries (like e.g. uClibc).
I'd say simply give copying the binaries a try, and report back if you encounter any problems there.

Will a C++ Linux app built on version X.XX of the kernel run on an earlier version?

This question may seem blindingly obvious, and I realise I am putting myself up for a large number of downvotes, but I am very new to Linux development and have only been working on it for a while.
I have been writing an application on Ubuntu 12.04 (kernel 3.2.0) in C++, then copying it via scp to an Ubuntu 8.04 (kernel 2.6.30) installation on another device. I have been noticing some very strange behaviour that I simply cannot explain. I naively assumed that I could run this executable on a previous version, but it is beginning to dawn on me that this may in fact not be the case. In future, must I ensure that the Linux version I build my application on is identical to the one it will be running on in the field? Or must I actually build the application from source code directly on the device it will be running on? I am very new to Linux development but not new to C++, so I realise this question may seem facile, but this is the kind of issue that I have simply not seen covered in books/tutorials etc.
Most of the time, it's not the kernel that stops you, it's glibc.
glibc is backwards compatible, meaning programs compiled and linked to an older version will work exactly the same with a newer version at runtime. The other way around is not that compatible.
Best is of course to build on the distro you want to run it on. If you can't do that, build on the one with the oldest glibc installed.
It's also very hard to build and link against an older glibc than the system glibc; installing/building glibc tends to mess up your system more than it's worth. Set up a VM with an old Linux distribution and use that instead.