My C++ app depends on GCC, the MongoDB C++ driver and Boost. My current approach is to keep the OS consistent: I develop on Ubuntu 12.04 64-bit Desktop and deploy to an Ubuntu 12.04 64-bit server, and I install the same versions of the dependencies on the target server.
But if I want to develop my C++ app on Ubuntu 13.04 with the newest Boost, MongoDB driver and GCC 4.8.1, what is the easiest way to deploy it to an Ubuntu 12.04 server?
Static linking?
Dynamic linking, deploying all dependencies to the target server as well?
Which way is simpler? Sometimes I cannot compile libraries on the target server.
If the dependencies are small, the easiest way is to compile everything statically. It is fairly easy during the build step, and nothing fancy is needed. However, with bigger libraries and a bigger project, this can get inconvenient.
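For instance, a minimal sketch of a fully static build; the Boost library names are examples, and -lmongoclient assumes the legacy MongoDB C++ driver:

    # Link everything statically, including libstdc++ and libgcc.
    # Caveat: a fully static glibc still loads NSS plugins dynamically
    # for things like getaddrinfo(), so test name resolution carefully.
    g++ -std=c++11 -o myapp main.cpp \
        -static -static-libgcc -static-libstdc++ \
        -lmongoclient -lboost_system -lboost_thread -lpthread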
I think the best practice is to compile the dependencies into shared objects, ship them alongside the binaries, and launch the program so that the dynamic loader looks in your directory first. One way is to use LD_LIBRARY_PATH, e.g. LD_LIBRARY_PATH=/where/did/i/ship/lib:$LD_LIBRARY_PATH my_binary.
It can be somewhat cumbersome, as you need to set up your build system to compile everything as shared objects and to pack it all properly.
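An alternative to the LD_LIBRARY_PATH wrapper is to bake a relative rpath into the binary so that the loader checks your shipped lib directory first; a sketch, with made-up file names:

    # Build a dependency as a shared object.
    g++ -shared -fPIC -o lib/libmydep.so mydep.cpp

    # $ORIGIN expands at run time to the directory holding the executable,
    # so bin/my_binary finds lib/libmydep.so wherever the tree is copied.
    g++ -o bin/my_binary main.cpp -Llib -lmydep -Wl,-rpath,'$ORIGIN/../lib'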
I'm pretty sure some of the pre-compiled programs shipped for Linux work this way. Oddly, I can't find an example of such a pre-compiled app at hand right now.
It depends on your application. If your application consists of only one specific binary, then statically linking all C++ libraries is in order. You can safely link all C libraries dynamically, as the C ABI is unchanging; this just leaves you with version dependencies. However, in most cases the major SO-name versions are mostly compatible, and libraries with differing major SO-names can be installed in parallel, so I'd rely on the package manager to install those. C++ libraries are tricky due to the lack of a common ABI. Even a mere compiler version bump can make them incompatible *sigh*.
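To see which SO-names (and thus which major versions) a given binary actually depends on, you can inspect its dynamic section:

    # List the SONAMEs recorded in the binary...
    objdump -p ./myapp | grep NEEDED
    # ...and check how the loader resolves them on this system.
    ldd ./myapp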
I have a target system, Astra Linux Smolensk, which has access only to very outdated packages, with GCC 6 among them. It does not fully support C++17, which is required by some very useful third-party libraries and language features. At some point I am also considering the possibility of using concepts from the C++20 standard. But there is no easy way to achieve any of that.
Basically, I see 2 ways:
Compile the needed version of GCC myself and keep a custom version of Astra Linux for building, with the additional packages (which is not a good option, since we have restrictions on system modification). I am about to try this option, but it is not the subject of this question.
Cross-compile with the latest GCC on Ubuntu using a toolchain.
So, what do I need in order to create a toolchain for a custom version of Linux? The only thing I am sure of is the Linux kernel version. Can I use an existing Linux toolchain, or do I have to export the system libraries and create a custom toolchain?
Here are some tools I found that seem helpful. For instance:
Buildroot
Buildroot is a complete build system based on the Linux kernel configuration system and supports a wide range of target architectures. It generates root file system images ready to be written to flash. In addition to having a huge number of packages which can be compiled into the image, it also generates a cross toolchain to build those packages from source. Even if you don't want to use buildroot for your root filesystem, it is a useful tool for generating a toolchain. Buildroot supports uClibc-ng, glibc and musl.
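A rough sketch of the toolchain-only workflow, assuming a recent Buildroot (the sdk target may not exist in older releases):

    # In the Buildroot source tree: under "Toolchain", select the C library,
    # GCC version, and kernel-headers version matching (or older than) the target.
    make menuconfig

    # Build a relocatable toolchain/SDK tarball (lands in output/images/).
    make sdk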
I wonder whether it does what I need, and whether I can use the latest GCC compiler with the toolchains it generates.
I found similar questions:
How to build C++17 application in old linux distro with old stdlib and libc?
https://askubuntu.com/questions/162465/are-gcc-versions-tied-to-kernel-versions
How can I link to a specific glibc version?
Some clarification is needed:
The project relies heavily on a lot of third-party dependencies from the target Linux's package repository. Moreover, I use dynamic .so modules that may be loaded both implicitly and explicitly.
Today, with Docker and container-based CI/CD pipelines, we rarely have to build on the deployment host the way we did in the old days.
With the help of musl we can even create universal Linux binaries with static linkage: we link everything statically rather than dynamically, and then ship a single executable file.
According to musl's documentation, it needs:
Linux kernel >=2.6.39
This is a very old version, released around 2011, so even old Linux distros can run our binaries.
musl is widely used in many projects, especially in Rust projects, where we provide musl builds to users as a convenience.
Note that we may need to adjust our codebase when using musl: there are slight behavioral differences from GNU libc that we should be aware of.
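As a minimal sketch for C, assuming the musl-gcc wrapper from a package such as Debian/Ubuntu's musl-tools (for C++ you would need a musl-targeted cross toolchain instead):

    # Build a fully static binary against musl.
    musl-gcc -static -Os -o hello hello.c

    # Confirm it has no dynamic dependencies.
    ldd ./hello   # prints: not a dynamic executable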
When I develop across different OSes, I find that a program built on one Linux system cannot be run on another, due to differing libc versions.
How can I build all the shared libraries into the binary in C/C++, just like Go does?
Including libc and libc++.
If you want to run on multiple Linux systems, all you really need is to build against the oldest glibc among them. The easiest way is to simply download a virtual machine image of an old system such as CentOS 5 and build there. You don't need to worry about static linking; building against an old version means you are mostly compatible with newer versions.
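You can check which glibc version a binary actually requires by listing its versioned symbols; the highest version printed is the minimum glibc the target must provide (myapp is a placeholder):

    objdump -T ./myapp | grep -o 'GLIBC_[0-9.]*' | sort -uV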
I'm working on a large project, cross-platform between Linux and OS X.
I would like to use Boost functionality, but I don't want to force all the collaborators (who have totally different environments) to install all the Boost libraries on their machines.
If I compile Boost on my machine and put the static libraries inside the repository, which problems could I face? Can my colleagues use the same static libraries in their environments?
AFAIK there will be differences: the static libraries are not the same on OS X and Linux. Compilation also depends on the toolset (see the Boost guide), and there can be issues if the IDEs are different.
However, you can compile both versions from one platform (see cross-compilation) and put them in the repository, although checking binary objects into a repository is not the best idea (even when they don't depend on the platform).
I think you could try to compile and link Boost on the different platforms, and maybe you will succeed, but you certainly cannot cover all the scenarios. It is better to create a Boost compilation script and tell everyone to use it.
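A sketch of such a script, using Boost's own bootstrap and b2 tools; the library list and install prefix are placeholders:

    #!/bin/sh
    # Run from the root of the Boost source tree on each platform.
    ./bootstrap.sh --with-libraries=system,filesystem,thread
    ./b2 link=static variant=release --prefix="$PWD/boost-install" install

Each collaborator runs it once, so the binaries are always produced by the local toolchain and ABI mismatches are avoided.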
I have been using a socket library for C++. Some other info: 32-bit Linux, CodeLite and the GCC toolset. I want to be able to compile my program for Windows using the Windows edition of CodeLite. The socket library I have been using doesn't have a mingw32 build, but it's open source. So how can I make a mingw32 build of the socket library so I can produce a Windows build from the source provided?
Most open source Linux libraries are built with the make build system (although there are others, like Jam, as well as custom build scripts). MinGW comes with the make utility; it's mingw32-make.exe. If you're lucky, it may be possible to rebuild your library simply by making it on Windows.
The more usual scenario is that you will need to configure the project before you can build it. The Windows shell doesn't support the scripting that configure requires, but another part of the MinGW project, called MSYS, does. If you install MSYS and all the tools it requires, you'll be able to ./configure your project before running make.
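The typical sequence from an MSYS shell looks something like this (a sketch; the actual configure options depend on the library):

    # In the library's source directory, from the MSYS shell:
    ./configure --prefix=/mingw
    mingw32-make
    mingw32-make install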
Of course, the above will only work if the library is written to be portable. There are some breaking differences between the Linux socket implementation (sys/socket.h) and the Windows implementation (winsock2.h). You may be forced to edit chunks of the code to ensure that it is versioned correctly for the platform (and that any dependencies it requires are also built for Windows).
Also, there is a chance that the library is already built for Windows, but with a different compiler such as MSVC, which produces .lib and .dll files. MinGW requires .a files for libraries, but a clever feature is the ability to link directly against a .dll without an import library, so you can often use an existing Windows library that was not built with MinGW (although this won't help for static linking). There is also a tool, dlltool, which can convert .lib to .a.
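For example, to produce a MinGW import library from an existing DLL (gendef ships with mingw-w64-tools; foo here is a placeholder):

    # Extract the exported symbols into a .def file...
    gendef foo.dll
    # ...then build a MinGW-compatible import library from it.
    dlltool -d foo.def -D foo.dll -l libfoo.a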
If you give details on the specific library you're working with, I may be able to pick out what needs to be done to get it running on Windows.
You port it to the new platform. :)
You're fortunate that it is open source, because otherwise it would be practically impossible to port (you'd have to pay $$$'s for a copy of the code under a suitable license, or rewrite the entire product).
Enjoy.
Alternatively, they may well already have a port... Check the documentation for the library you are using.
First off, you're going to need to make sure that you aren't including any Linux-specific libraries.
I'm writing a cross-platform application which is not GNU GPL compatible. The major problem I'm currently facing is that the application is linked dynamically against glibc and libstdc++, and almost every new major update to those libraries is not backwards compatible. Hence, random crashes appear in my application.
As a workaround, I distribute binaries of my application compiled on several different systems (with different C/C++ runtime versions). But I would like to avoid this. So my question is: keeping licensing and everything else in mind, can I link against glibc and libstdc++ statically? Also, will this cause issues with rtld?
You don't need to.
Copy the original libraries you linked against to a directory (lib in this example) inside your application folder, like:
my_app_install_path
    bin
    lib
    documentation
Rename your app to something like app.bin. Substitute a little shell script for your app that sets the environment variable LD_LIBRARY_PATH to the library path (concatenating the previous LD_LIBRARY_PATH contents, if any). Now the dynamic loader should be able to find the libraries you linked against, and you don't need to link them statically into your executable.
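A sketch of such a wrapper, assuming the layout above with the real binary renamed to bin/app.bin:

    #!/bin/sh
    # Locate the directory this script lives in (bin/), point the loader
    # at the bundled libraries, then run the real binary.
    APPDIR="$(dirname "$(readlink -f "$0")")"
    export LD_LIBRARY_PATH="$APPDIR/../lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    exec "$APPDIR/app.bin" "$@"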
Remember to comply with the LGPL by adding the required attribution for the libraries and pointing out in the documentation where their source can be downloaded.
glibc is under the LGPL. Under section 6. of LGPL 2.1, you can distribute your program linked to the library provided you comply with one of five options. The first is to provide the source code of the library, along with the object code (source is optional, not required) of your own program, so it can be relinked with the library. You can alternatively provide a written offer of the same. Your own code does not have to be under the LGPL, and you don't have to release source.
libstdc++ is under the GPL, but with a major exception. You can basically just distribute under the license of your choice without providing source for either your own code or libstdc++. The only condition is that you compile normally, without e.g. proprietary modifications or plugins to GCC.
IANAL, and you should consider consulting one if you need real legal advice.
Specifying the option -static-libgcc tells the GCC driver to link the GCC runtime support library (libgcc) statically; -static-libstdc++ does the same for the C++ standard library. Note that neither option links the C library (glibc) statically.
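A sketch of the link step; glibc itself remains dynamically linked here:

    # Statically link the GCC runtime and the C++ standard library.
    g++ -o myapp main.o -static-libgcc -static-libstdc++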
I must question what the heck you are doing with the poor library functions?
I have some cross-platform software as well. It runs fine on Linux systems of all sorts. Build with the oldest version of the system software that you want to support; the glibc and libstdc++ libraries are really very backward compatible.
I have built on CentOS 4 and run it on RHEL 6 beta. No problems.
I can build on stable Debian and run it on testing.
Now, I do sometimes have trouble with some libraries if I try to build on, say old Debian and try to run it on CentOS 5.4. That is usually due to distribution configuration choices that are different, like choosing threading or non-threading.