I know this might be a very stupid question, but I am new to compiled languages (my background is mostly scripting languages like PHP, Python, or JavaScript).
I am learning C++ for one project where it is the only language I can use.
I wrote a program on Ubuntu 10.10 and then compiled it. I can run the generated binary from the command line like this, and it works:
sudo ./compiled-program
But, I have used some external libraries in the program (OpenCV). Does that mean that all computers where I will run the program will have to have OpenCV installed? Or is OpenCV bundled inside the compiled binary file? Will it work on PCs without OpenCV installed?
You should read a few things about libraries, particularly the difference between static and dynamic libraries. To quote the basic definitions so you get the idea:
A static library, also known as an archive, consists of a set of routines which are copied into a target application by the compiler, linker, or binder, producing object files and a stand-alone executable file.

[...]

Dynamic linking involves loading the subroutines of a library (which may be referred to as a DLL, especially under Windows, or as a DSO (dynamic shared object) under Unix-like systems) into an application program at load time or runtime, rather than linking them in at compile time.
Not a stupid question at all!
The "normal" way this works is that your program has been linked against a "shared library", in which case, yes, the user needs OpenCV (or whatever package provides that shared library) installed for your program to work.

If you compiled it as a static executable (using the -static flag), then it and all the libraries it uses would be copied directly into your executable. That makes for a rather larger executable which uses more memory, because the library code is no longer shared between programs.

There are also ways to link only your OpenCV libraries statically, but that can only be done if the package ships a static library (".a") as well as a shared one (".so").
If you built your code against dependencies like OpenCV, whether you need them at run time depends on whether you did static or dynamic linking.
See here which has sections covering these ideas: http://en.wikipedia.org/wiki/Library_(computing)
For starters, try doing this on the command line:
ldd compiled-program
You will get output like this (as an example, I did ldd on my python binary in /usr/bin):
birryree@lilun:/usr/bin$ ldd python
linux-gate.so.1 => (0xb7ff7000)
libpthread.so.0 => /lib/i686/cmov/libpthread.so.0 (0xb7fd5000)
libdl.so.2 => /lib/i686/cmov/libdl.so.2 (0xb7fd1000)
libutil.so.1 => /lib/i686/cmov/libutil.so.1 (0xb7fcd000)
libssl.so.0.9.8 => /usr/lib/i686/cmov/libssl.so.0.9.8 (0xb7f82000)
libcrypto.so.0.9.8 => /usr/lib/i686/cmov/libcrypto.so.0.9.8 (0xb7e2a000)
libz.so.1 => /usr/lib/libz.so.1 (0xb7e16000)
libm.so.6 => /lib/i686/cmov/libm.so.6 (0xb7df0000)
libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb7caa000)
/lib/ld-linux.so.2 (0x80000000)
Python wants a lot of additional stuff, like libssl (part of OpenSSL), the GNU C library (libc), and some others.
Now if you're going to be moving this thing around to other systems, you can either hope they have an environment similar to yours, distribute it as source and use something like the autotools/GNU Build System to build it, or forgo all that and statically link everything into your binary, which pulls in everything your executable needs so that no dynamic linking is required.
If you've "compiled against" OpenCV, then machines running your app need it too. You need to copy the libs when you install your app, or ensure that they're already installed.
It depends on whether you are compiling against a shared (dynamic) library or compiling it into your executable (compiling against a static library). If you are compiling against a shared library, you need to distribute the shared library with your program; otherwise, you don't.
There are two kinds of libraries: static and dynamic.
Static libraries are joined with your binary file at link time, while dynamic libraries are loaded at runtime.
It depends on whether the executable is statically built or dynamically linked. In a statically built executable, the library code the executable needs is compiled into the executable itself, so there is no need to carry around additional library files. In a dynamically linked executable, the libraries the executable needs are linked at runtime, so a copy of those library files must be present at runtime.
Related
I have a single 32-bit executable binary file that I need to run on my x86_64 machine. If the file is executable (even dynamically linked), why do I need to install dependencies for the runtime libraries of the language the binary was built with?
[root@server]# file TcpServer
TcpServer: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=0x20fc1da672a6ba3632123abc654f9ea88b34259, not stripped
[root@server]# ./TcpServer
-bash: ./TcpServer: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
[root@server]# yum install glibc.i686
[root@server]# ./TcpServer
./TcpServer: error while loading shared libraries: libstdc++.so.6: cannot open shared object file: No such file or directory
There may be several reasons why you need to install dependencies.
One reason is that a dynamically linked ELF (a misnomer, since it hasn't been fully linked yet; "executable that needs dynamic linking" would be more accurate) is not really executed directly. It names what's called an interpreter, and it is that interpreter which actually gets executed. The interpreter is the dynamic linker, and it performs the actual linking. If the interpreter is missing or is not a valid program, the executable can't be run (compare a script whose shebang on the first line doesn't name a valid interpreter).

Another is that a dynamically linked executable, when loaded, needs to be linked with certain dynamic libraries. This of course means that the dynamic libraries the executable is to be linked with must be present.

A third reason may be that the executable uses files or other dependencies while it runs. For example, it might need to invoke some other program, dynamically load libraries, or open files that it expects to be present.

From your output it looks like you've run into the first two problems.
The executable uses some dynamically linked libraries, meaning those libraries are loaded at runtime. You can try to run your file (why not?), but you get a startup failure if anything it needs is missing.
For more details see What do 'statically linked' and 'dynamically linked' mean?
You are attempting to run a 32-bit executable on a 64-bit system; that's why your initial run ended with "bad ELF interpreter".
A typical x86-64 Linux system doesn't have 32-bit libraries installed, so you need to provide them before running a 32-bit dynamically linked executable.
Try using ldd <your binary> to see which libraries cannot be found, and install those libraries one by one.
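For example (using /bin/sh as a stand-in, since the actual binary name will differ):

```shell
# Any line ending in "not found" names a library you still have to install.
ldd /bin/sh | grep "not found" || echo "all libraries resolved"
```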
I am using OpenMP in my C++ code.
The file libgomp.so.1 exists in my lib folder, and I added its path to LD_LIBRARY_PATH.
Still, at run time I get the error message: libgomp.so.1: cannot open shared object file.
At compile time I compile my code with the -fopenmp option.
Any idea what can cause the problem?
Thanks
Use static linking for your program. In your case, that means using -fopenmp -static, and if necessary specifying the full paths to the relevant librt.a and libgomp.a libraries.
This solves your problem because static linking packages all the code necessary to run your program together with your binary. Your target system therefore does not need to look up any dynamic libraries; it doesn't even matter whether they are present on the target system.
Note that static linking is not a miracle cure. For your particular problem with the weird hardware emulator, it should be a good approach. In general, however, there are (at least) two downsides to static linking:
binary size. Imagine if you linked all your KDE programs statically: you would essentially have hundreds of copies of all the KDE/Qt libraries on your system, when you could have just one copy if you used shared libraries.
update paths. Suppose a security problem is found in a library x. With shared libraries, it is enough to update that one library once a patch is available. If all your applications were statically linked, you would have to wait for every one of their developers to re-link and re-release their applications.
I have a library which at compile time builds a shared object, called libEXAMPLE.so (in the so.le folder), and a dll by the name of EXAMPLE.so (in the dll folder). The two shared objects are quite similar in size and appear to be exactly the same thing. Scouring the internet revealed that there might be a difference in the way programs use the dll for symbol resolution vs. the way it is done with the shared object.
Can you guys please help me out in understanding this?
"DLL" is how Windows likes to name its dynamic libraries.
"SO" is how Linux likes to name its dynamic libraries.
Both have the same purpose: to be loaded dynamically.
Windows uses the PE binary format and Linux uses ELF.
PE:
http://en.wikipedia.org/wiki/Portable_Executable
ELF:
http://en.wikipedia.org/wiki/Executable_and_Linkable_Format
I suppose a Linux OS.
In Linux, static libraries (.a, also called archives) are used for linking at compile time while shared objects (.so) are used both for linking at load time and at run time.
In your case, it seems that for some reason the library differentiates between the file for linking at load time (libEXAMPLE.so) and the file for linking at run time (EXAMPLE.so), even though those two files are exactly the same.
I have a requirement that I link all my libraries statically including libstdc++, libc, pthread etc. There is one omniorb library which I want to link dynamically.
Currently I have dynamically linked all the libraries.
ldd shows the following
linux-vdso.so.1 => (0x00007fff251ff000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f291cc47000)
libomniDynamic4.so.1 (0x00007f291c842000)
libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00007f291c536000)
libm.so.6 => /lib64/libm.so.6 (0x00007f291c2e0000)
libgomp.so.1 => /usr/lib64/libgomp.so.1 (0x00007f291c0d7000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f291bebf000)
libc.so.6 => /lib64/libc.so.6 (0x00007f291bb66000)
/lib64/ld-linux-x86-64.so.2 (0x00007f291ce63000)
librt.so.1 => /lib64/librt.so.1 (0x00007f291b95d000)
libomniORB4.so.1 (0x00007f291b6aa000)
libomnithread.so.3 (0x00007f291cf35000)
I need ldd to show libomniDynamic4.so.1 as the only dynamically linked library.
How do I achieve this?
Trying to make a linux executable that runs on all distros eh? Good luck...But I digress...
You want to look at the -v flag output for g++. It shows the internal link commands executed by g++/ld. Specifically, you'll want to inspect the final link command (collect2) and all of its arguments. You can then specify the exact paths to the .a libs you want to link against. You'll also have to track down static versions of everything. My libstdc++.a is in /usr/lib/gcc/x86_64-linux-gnu/4.4/libstdc++.a
rant on: My biggest complaint about Linux is the fractured state of executables. Why can't I compile a binary on one machine, copy it to another, and run it? Even Ubuntu releases one version apart will produce binaries that cannot be run on the other due to libc/libstdc++ ABI incompatibilities.
edit #1: I just wanted to add that the script on this page produces a .png of an executable's .so dependencies. This is very useful when attempting to do what you describe.
Be aware that ldd <exename> lists all dependencies down the chain, not just the immediate dependencies of the executable. So even if your executable depended only upon omniorb.so, and omniorb.so depended upon libpthread.so, ldd's output would list that too. Look up the manpage of readelf to find only the immediate dependencies of a binary.
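For example, comparing the two on /bin/sh (standing in for your executable):

```shell
# readelf prints only the DT_NEEDED entries recorded in the binary itself
# (its immediate dependencies); ldd resolves the whole transitive chain.
readelf -d /bin/sh | grep NEEDED
ldd /bin/sh
```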
One other item to be aware of: if omniorb.so depends upon libstdc++.so, you'll have no choice but to be dependent on that same lib. Otherwise, ABI incompatibilities will break RTTI between your code and omniorb's code.
I need ldd to show libomniDynamic4.so.1 as the only dynamically linked library.
That is impossible.
First, ldd will always show ld-linux-x86-64.so.2 for any (x86_64) binary that requires dynamic linking. If you use dynamic linking (which you would with libomniDynamic4.so.1), then you will get ld-linux-x86-64.so.2.
Second, linux-vdso.so.1 is "injected" into your process by the kernel. You can't get rid of that either.
Next, the question is why you want to minimize the use of dynamic libraries. The most common reason is the mistaken belief that "mostly static" binaries are more portable and will run on more systems. On Linux this is the opposite of true.
If in fact you are trying to achieve a portable binary, several methods exist. The best one so far (in my experience) has been to use apgcc.
It is very difficult to build a single binary that runs on a lot of Linux distros and linking statically is not the key point.
Please note that a binary built with an older glibc version (i.e., on an old Linux distro) may run on newer Linux distros as well. This works because glibc is backward-compatible.
A possible way to attain the desired result is:
compile the binary on an old Linux OS
find out all the required libraries for your compiled binary using the command ldd or lsof (when it is running) on the binary, details here
copy the required libraries of the old Linux OS in a 'custom-lib' folder
always bundle/release this custom-lib folder with your binary
create a bash script that puts the custom-lib folder at the front of the LD_LIBRARY_PATH environment variable and then invokes your binary.
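The launcher script from that last step might look like this (my_app and custom-lib are placeholder names):

```shell
cat > run-my_app.sh <<'EOF'
#!/bin/sh
# Resolve the directory this script lives in, so the bundle is relocatable.
HERE="$(dirname "$(readlink -f "$0")")"
# Put the bundled libraries first so they win over the system's copies.
export LD_LIBRARY_PATH="$HERE/custom-lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
exec "$HERE/my_app" "$@"
EOF
chmod +x run-my_app.sh
```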
In this way, by executing the binary through the bash script, I was able to run binaries on a wide range of embedded devices with very different Linux versions.
But there are always problematic cases where this fails.
Please note, I always tested this with cli applications/binaries.
Other possible ways..
There also seem to be elegant ways to compile glibc-backward-compatible binaries, for example this, which seems to compile binaries compatible with an older ABI. But I have not checked this route.
When linking, use -Wl,-Bstatic before the libraries you want to link statically, and -Wl,-Bdynamic before the libraries you want to link dynamically (these linker options can be toggled per library, unlike the driver's global -static flag). You should end up with a command line looking like this:
g++ <other options here> -Wl,-Bdynamic -lomniDynamic4 -Wl,-Bstatic -lpthread -lm -lgomp <etc>
Of course, you'll need .a versions of the libraries you want to link statically (duh).
I'm writing a program that uses two libraries: v8 and v8-juice. Unfortunately, v8-juice can't be compiled as a static library due to some stuff it does with templates. There are some other quirks with it that require v8 to be compiled as a shared object as well.
So, when I compile my program, I end up with two shared objects that are needed for the executable to run. My question is: is there a way I can include these shared objects without installing them under Linux? Sorry if it's a newbish question; I'm fairly new to C++.
Shared libraries can be in the same folder as your executable. From man ld.so:
$ORIGIN and rpath

ld.so understands the string $ORIGIN (or equivalently ${ORIGIN}) in an rpath specification (DT_RPATH or DT_RUNPATH) to mean the directory containing the application executable. Thus, an application located in somedir/app could be compiled with gcc -Wl,-rpath,'$ORIGIN/../lib' so that it finds an associated shared library in somedir/lib no matter where somedir is located in the directory hierarchy. This facilitates the creation of "turn-key" applications that do not need to be installed into special directories, but can instead be unpacked into any directory and still find their own shared libraries.