I wrote a program that uses a shared library installed on my system. This library is seldom installed on other systems. How do I compile my program so that the library doesn't need to be installed on other systems? I have the source code for the library available. What's the best way?
The other systems, of course, have the same architecture and OS.
Compile it as a static library and link that into the executable.
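For example, with gcc and ar (a minimal sketch; mylib.c and main.c are hypothetical stand-ins for the library's source and your program):

    # Compile the library source to an object file, then archive it
    gcc -c mylib.c -o mylib.o
    ar rcs libmylib.a mylib.o

    # Link the archive directly into the executable
    gcc main.c libmylib.a -o myprog

    # The library should no longer show up as a runtime dependency
    ldd myprog

Linking the .a file directly (rather than with -L. -lmylib) sidesteps the linker's preference for a shared libmylib.so if one also happens to be present.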
Though the OP had solved his problem by answering a different question, there are (at least) two ways to wedge a shared library into your binary, in case:
there is no source code available
there is no compiler (or build chain) available
static linking does not work, or it's not obvious how to do it
you need to preserve the memory layout - static linking will change it and may "wake up" hidden bugs
you want to permanently link an LD_PRELOAD library into the executable
The first is statifier (open source, but limited to x86 and x86_64, and it handles only object code).
The second that I know of is Magic Ermine (by the same developer). It is closed source, but the developer is friendly to open source projects, and Ermine has the advantage of supporting more platforms as well as the ability to include all necessary data files within its virtual file system.
http://statifier.sourceforge.net/ and http://www.magicermine.com/
Related
I am currently trying to compile all my applications' dependencies as a static library. My motivation:
Not to rely on any OS provided libraries in order to have a perfectly reproducible code base
Avoid issues when deploying on other systems caused by dynamic linking
Avoid run-time clashes when linking against different versions of a library
Being able to cross-compile for other OS
However, as I initially dreaded, I went down the rabbit hole quite fast. I am currently stuck with OpenCV, and I'm sure there is more to come. My main questions are:
Is it possible to build an entirely statically built app (including libc, libgcc, etc.)?
Is it possible to link all libraries statically but link major components (libgcc, etc.) dynamically?
If not, is it possible to link against statically built libraries (e.g. OpenCV) but to satisfy their dependencies by linking dynamically (zlib, libc, etc.)?
I did research on the internet but couldn't find a comprehensive guide that delves into the internals of linking (static vs. dynamic). Do you know of a good book or tutorial? Would a book about gcc get me further?
Is this a very stupid idea?
My motivation:
Not to rely on any OS provided libraries in order to have a perfectly reproducible code base
Avoid issues when deploying on other systems caused by dynamic linking
Avoid run-time clashes when linking against different versions of a library
Being able to cross-compile for other OS
Your motivations are all wrong.
For #1, you do not need a fully-static binary. You just need to link against a set of version-controlled libraries using the --sysroot facility provided by GNU linkers (see the sketch after this list).
For #2, your motivation is misguided.
On Linux, a fully-static binary may crash in mysterious ways if the libc installed on the target system is different from the (static) libc the program was built against. That is, a fully-static binary on Linux is (contrary to popular belief) significantly less portable than a dynamically linked one. One should simply never statically link libc.a on Linux.
This alone should make you abandon this approach (at least for any GLIBC based systems).
For #3, don't link against different versions of a library (at program build time), and no clashes will result.
For #4, the same solution as for #1 just works.
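A minimal sketch of the --sysroot approach from #1, assuming you keep a checked-in tree (hypothetically named toolchain-root) containing the headers and libraries you build against:

    # Headers and libraries are resolved under ./toolchain-root
    # instead of the host's /usr/include and /usr/lib
    gcc --sysroot=./toolchain-root main.c -o myprog

Since every lookup is redirected into the version-controlled tree, the build no longer depends on whatever happens to be installed on the build machine.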
Is static linking on Linux portable? I mean, can I use the -static option in gcc, link every dependency statically so that ldd shows a clean output, and expect the resulting executable to run portably on another computer with Linux installed? Of course, given that the CPU architecture and the kernel version are compatible.
The short answer: Pretty much.
This will make a binary which will run on a kernel which is the same or compatible with the one for which the software was designed.
It may not take into account directory structure and if the binary expects to be able to load any external dependencies dynamically, that might not work.
Assuming there's nothing too fancy going on though, it will work fine.
This is approximately what Go's compiler does to enable shipment of binaries roughly anywhere. This also is a method for making a build forward compatible if you expect to be making OS upgrades that will be disruptive.
Additionally, these static binaries could be run in a FreeBSD kernel with Linux compatibility. As long as the kernel and user space is compatible, the binary should work.
As always, test.
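As a quick check (a sketch, assuming a trivial main.c), build fully static and confirm the loader has nothing left to resolve:

    gcc -static main.c -o myprog

    # Should print "not a dynamic executable"
    ldd myprog

    # Should report "statically linked"
    file myprog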
Yes. Static linking means the executable won't depend on any other libraries.
Maybe. You won't need to worry about dynamic library dependencies. Your statically linked libraries might use system calls or other kernel interfaces that older kernels don't have, so you'll only be forward compatible (the Linux kernel has a pretty strong backwards-compatibility policy). The only thing you might need to worry about is external files that your statically linked libraries depend on (like localization databases and such).
Some Linux programs, for example the mongodb binary, can run on different Linux versions regardless of the host machine's gcc and glibc versions.
How do they do that? By statically linking all libs? But I've heard that glibc is not supposed to be statically linked.
To make an executable that is independent of the installed libraries, you must statically link it.
However, if the application isn't very large/complex to build, it's often better to either distribute the source and build on/for the target system, or pre-build for the most popular variants.
The reason you don't want to statically link glibc (and all the other libs the application may use) is that even the simplest application then becomes about 700K-1MB. Given that my distribution has 1900 entries in /usr/bin, that would make it around 2GB minimum, where now it is 400MB (and that includes beasts like clang, emacs and skype, all weighing in at over 7MB in non-statically linked form - they probably have more than a dozen library dependencies each - clang, for example, grows from under 10MB to around 100-120MB if you compile it with static linking).
And of course, with static linkage, all the code for each application needs to be loaded into memory as a separate copy. So the overall memory usage goes up quite dramatically.
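You can see the effect on your own system (a sketch, assuming a trivial hello.c):

    gcc hello.c -o hello_dynamic
    gcc -static hello.c -o hello_static

    # The static binary is typically hundreds of kilobytes,
    # versus a few dozen kilobytes for the dynamic one
    ls -lh hello_dynamic hello_static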
Related: Static linking vs dynamic linking
Probably on any OS it is possible to compile the C++/C standard library statically or dynamically. On Windows I always prefer static builds, because they help avoid the "DLL hell" problem of different library versions being installed or not installed on a specific Windows version, edition, service pack, etc. Static linking makes software more portable and less dependent on what the end user did to his operating system. (I have even seen cases where an end user SHIFT+DELeted some DLLs in system32 and couldn't explain why, or where users claimed my app contained a virus because it tried to download dynamically linked prerequisites from the official Microsoft website...) So, in my experience, static linking on Windows is usually better than dynamic.
However, I am new to Linux, so can anyone share their experience? My question is: what kind of linking (dynamic or static) is preferred on Linux, if we ignore the fact that dynamic linking saves memory and hard drive space, and if we plan to distribute the software with an automated installer? (Hard drive space and memory are cheap enough now that there is no reason to sacrifice the hours of work required to create a really good and portable installer just to save some megabytes of RAM or disk.) Are there any Linux-specific issues with dynamic/static linking?
On Linux you normally have a package manager that ensures you only have one version of libraries installed. So there normally is no dll hell and no problem with linking dynamically. Linking dynamically is the standard way on Linux.
I'd say the answer depends on how you distribute the software.
If you package the software for a specific Linux distribution and version, dynamic linking is usually preferred. You know which libraries are present on the system, and you can specify dependencies.
However, if you want to distribute the software as a Linux binary that runs on "any" system (such as various games, or software like Matlab, for example), you will end up with the same DLL (or .so) hell problem as on Windows. You don't know which versions of which libraries are on the system. Thus, you will have to provide your own .so files or link statically.
See, the whole point of using dynamic linking is to reduce the size of executables and memory usage. If you neglect that, there is little left to talk about.
On the other hand, you mentioned saving memory and disk space. Saving disk space matters because when you want to ship your app/program, you can't put a 2 GB app on the internet for download (for example, the OpenCV library is about 2.1 GB). The solution is to link dynamically and load only those modules that are actually needed. This also enables efficient multitasking (only one copy of the module is created, and every program uses that same copy).
In particular:
For example, a media player application might originally be shipped with a codec that supports the mp3 file format. If the media player were statically linked it would not be possible to dynamically update it to support a different file format, without replacing the entire application. Dynamic linking means that a new version of the shared library containing a more up-to-date codec, which includes some enhancements and bug fixes, could be dynamically loaded by a dynamic linker into memory at run-time to replace the original shared library.

A shared library can also be shared by more than one application. For example, two different media players could both use the same shared library containing the same codec. This potentially means that the device running the applications requires less physical memory, depending on the size of the dynamic linker.
Third, on Linux almost everything is dynamically linked (an exception being /bin/ash.static, which also has a dynamic version, /bin/ash), but this shouldn't stop you from static linking on Linux.
When using gcc, linking is dynamic by default. I guess you should use the -static flag to link the libraries statically.
@Vitaliy, good that you brought this up. The important thing to note here is that smart linking and the creation of shared (or dynamic) libraries are mutually exclusive; that is, if you turn on smart linking, the creation of shared libraries is turned off.
Smart linking breaks the code into small code blocks, and their dependencies are loaded. So if you call a dependency multiple times, it gets loaded multiple times.
This gives very good execution time but very high compilation time, especially for large units, so there is a certain trade-off.
I'm writing a cross-platform application which is not GNU GPL compatible. The major problem I'm currently facing is that the application is linked dynamically with glibc and libstdc++, and almost every major update to these libraries is not backwards compatible. Hence, I see random crashes in my application.
As a workaround, I distribute binaries of my application compiled on several different systems (with different C/C++ runtime versions). But I want to do without this. So my question is: keeping licensing and everything else in mind, can I link statically against glibc and libstdc++? Also, will this cause issues with rtld?
You don't need to.
Copy the original libraries you linked against to a directory (../lib in this example) in your application folder.
Like:
    my_app_install_path/
        bin/
        lib/
        documentation/
Rename your app to something like app.bin. Substitute your app with a little shell script that sets the environment variable LD_LIBRARY_PATH to the library path (concatenating the previous LD_LIBRARY_PATH contents, if any). Now the dynamic linker should be able to find the dynamic libraries you linked against, and you don't need to compile them statically into your executable.
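A minimal sketch of such a wrapper (assuming the layout above, with the real binary renamed to app.bin under bin/):

    #!/bin/sh
    # Resolve the directory this script lives in
    here=$(dirname "$(readlink -f "$0")")

    # Prepend the bundled libraries, keeping any existing path
    LD_LIBRARY_PATH="$here/../lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    export LD_LIBRARY_PATH

    # Hand control to the real binary, forwarding all arguments
    exec "$here/app.bin" "$@"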
Remember to comply with the LGPL by giving the required attribution to the libraries and pointing out in the documentation where their source can be downloaded.
glibc is under the LGPL. Under section 6. of LGPL 2.1, you can distribute your program linked to the library provided you comply with one of five options. The first is to provide the source code of the library, along with the object code (source is optional, not required) of your own program, so it can be relinked with the library. You can alternatively provide a written offer of the same. Your own code does not have to be under the LGPL, and you don't have to release source.
libstdc++ is under the GPL, but with a major exception. You can basically just distribute under the license of your choice without providing source for either your own code or libstdc++. The only condition is that you compile normally, without e.g. proprietary modifications or plugins to GCC.
IANAL, and you should consider consulting one if you need real legal advice.
Specifying the option -static-libgcc to the linker causes it to link against a static version of libgcc (the GCC runtime support library), if available on the system. Otherwise the option is ignored.
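The companion option -static-libstdc++ (available in reasonably recent GCC releases) does the same for the C++ standard library, so a sketch for the question's scenario would be:

    # Link libgcc and libstdc++ statically; glibc stays dynamic
    g++ main.cpp -o myapp -static-libgcc -static-libstdc++

    # libstdc++.so should no longer appear in the output
    ldd myapp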
I must question what the heck you are doing with the poor library functions?
I have some cross platform software as well. It runs fine on Linux systems of all sorts. Build with the oldest version of software that you want to support. The glibc and libstdc++ libraries are really very backward compatible.
I have built on CentOS 4 and run it on RHEL 6 beta. No problems.
I can build on stable Debian and run it on testing.
Now, I do sometimes have trouble with some libraries if I try to build on, say old Debian and try to run it on CentOS 5.4. That is usually due to distribution configuration choices that are different, like choosing threading or non-threading.