Loading desired libraries at run-time - c++

I am trying to analyze some shared libraries on Ubuntu under Intel's VTune tool. However, since these shared libraries cannot be run as independent executables, I have to write a C/C++ program to load them at run time. My question is: how can I find out which headers invoke which shared libraries? Is there any command or lookup table that tells which header belongs to which .so file on the system?

Related

Lifecycle of a shared library? [closed]

I've found multiple posts describing in detail the differences between static and shared libraries; however, I have yet to see an overarching view of when a shared library is loaded, what happens at that point, and when the library is unloaded, especially how this is affected by the presence of static variables. I understand this differs from system to system, but let's say on Linux.
Static linking - the code is linked directly into the executable program's file, so it is effectively loaded when the executable is started. The loader doesn't know about libraries at this point; all addresses of invoked functions were resolved at link time.
Shared library loading (implicit linking) - the code is linked against the shared library itself (historically, some systems used a separate stub library), and the linker records the library as a runtime dependency. When the executable program is started, the loader interrogates the program to find all its runtime library dependencies, resolves each one's location (the .so file), performs the position-independent-code fixups, updates the symbol table, and so on. That is, when the shared library is pulled into memory, there are fixups on the addresses at which each function lives. On Linux, the common tool ldd dumps the list of .so dependencies a program needs at runtime; the loader effectively does the equivalent of what ldd does before mapping the libraries into the process's address space. Typically, the C/C++ runtimes and most system libraries (POSIX functions) are loaded this way.
Static/global variables live in the library's data segment, which is mapped copy-on-write, so each process that loads the shared library gets its own copy.
Shared library loading (explicit linking) - the code invokes the dlopen library function to explicitly load a shared library (.so file) at runtime, and invokes dlsym to get the address of a function inside that library so it can call it. The loader does effectively the same thing as with implicit loading, but doesn't pull the code into memory until the dlopen call.
Advantages of shared libraries include smaller executable program files and the shared libraries can be mapped into memory just once and shared among multiple programs running at the same time.
Windows has effectively the same pattern with DLLs. dumpbin.exe /dependents is the rough equivalent of ldd on Unix, and LoadLibrary and GetProcAddress are the Windows equivalents of dlopen and dlsym.

Do shared library (.so) files need to be present (or specified) at link time?

I've read here (Difference between shared objects (.so), static libraries (.a), and DLL's (.so)?) that .so files must be present at compile time, but in my experience this is not true.
Don't shared libraries just get linked at runtime using dlopen and dlsym, so that the library need not be present on the system when the application is linked?
Most shared libraries need to be present both at build time and at run time. Notice that shared libraries are not DLLs (which is a Windows thing).
I assume you are coding for Linux. Details differ on other OSes (and they matter).
For example, if you are compiling a Qt application, you need the Qt shared libraries (such as /usr/lib/x86_64-linux-gnu/libQt5Gui.so and many others) both when building your application and when running it. Read about the dynamic linker ld-linux.so(8) & about ELF.
But you are asking about dynamic loading (using dlopen(3) with dlsym(3) ...) of plugins.
Then read Levine's Linkers & Loaders, the Program Library HOWTO, the C++ dlopen mini-HOWTO, and Drepper's How To Write Shared Libraries.
See also this answer.
Some libraries and frameworks try to abstract the loading of plugins in an OS-neutral manner. Read e.g. about Qt plugins support, or about POCO shared libraries (poorly named, it is about plugins).
You can have it both ways; both approaches work.
With the library present at link time, you can call all its functions directly, instead of fetching them explicitly via dlopen/LoadLibrary.

If the file is executable, why do I need to install some dependencies?

I have a single 32-bit executable binary file that I need to run on my x86_64 machine. If the file is executable (even dynamically linked), why do I need to install some dependencies related to the libraries of the programming language the binary was built with?
[root@server]# file TcpServer
TcpServer: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=0x20fc1da672a6ba3632123abc654f9ea88b34259, not stripped
[root@server]# ./TcpServer
-bash: ./TcpServer: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
[root@server]# yum install glibc.i686
[root@server]# ./TcpServer
./TcpServer: error while loading shared libraries: libstdc++.so.6: cannot open shared object file: No such file or directory
There may be several reasons why you need to install dependencies.
One reason is that when a dynamically linked ELF (a misnomer, since it hasn't been fully linked yet; "executable that needs dynamic linking" would be more accurate) is to be executed, it isn't really executed directly. It names what's called an interpreter, and that interpreter is executed instead. The interpreter is the dynamic linker, which does the actual linking. If the interpreter is missing or is not a valid program, the executable can't be executed (compare a script whose shebang on the first line doesn't name a valid interpreter).
Another is that a dynamically linked executable needs to be linked with certain dynamic libraries at load time. This of course means you need the dynamic libraries the executable is to be linked with.
A third reason may be that the executable uses files or other dependencies while it runs. For example, it might need to invoke some other program, dynamically load libraries, or open files that it expects to be present.
From your output it looks like you've run into the first two problems.
The executable uses some dynamically linked libraries, which means those libraries are loaded at runtime. You can still try to run your file (why not?), but you get a startup failure.
For more details see What do 'statically linked' and 'dynamically linked' mean?
You attempted to run a 32-bit executable on a 64-bit system. That's why your initial run ended with "bad ELF interpreter".
A typical Linux x86-64 system doesn't have 32-bit libraries installed, so you need to provide them before running a 32-bit dynamically linked executable.
Try ldd <your binary> to see which libraries cannot be found, and install those libraries one by one.

libgomp.so.1: cannot open shared object file

I am using OpenMP in my C++ code.
The file libgomp.so.1 exists in my lib folder, and I added its path to LD_LIBRARY_PATH.
Still, at run time I get the error message: libgomp.so.1: cannot open shared object file
At compile time I compile my code with the -fopenmp option.
Any idea what can cause the problem?
Thanks
Use static linking for your program. In your case, that means using -fopenmp -static, and if necessary specifying the full paths to the relevant librt.a and libgomp.a libraries.
This solves your problem because static linking simply packages all the code necessary to run your program together with your binary. Your target system therefore does not need to look up any dynamic libraries; it doesn't even matter whether they are present on the target system.
Note that static linking is not a miracle cure. For your particular problem with the weird hardware emulator, it should be a good approach. In general, however, there are (at least) two downsides to static linking:
binary size. Imagine if you linked all your KDE programs statically: you would essentially have hundreds of copies of all the KDE/Qt libs on your system, when you could have just one if you used shared libraries.
update paths. Suppose a security problem is found in a library x. With shared libraries, it's enough to update the library once a patch is available. If all your applications were statically linked, you would have to wait for each of their developers to re-link and re-release their applications.

Difference between a shared object and a dll

I have a library which at compile time is building a shared object, called libEXAMPLE.so (in the so.le folder), and a dll by the name of EXAMPLE.so (in the dll folder). The two shared objects are quite similar in size and appear to be exactly the same thing. Scouring the internet revealed that there might be a difference in the way programs use the dll to do symbol resolution vs the way it is done with the shared object.
Can you guys please help me out in understanding this?
"DLL" is how windows like to name their dynamic library
"SO" is how linux like to name their dynamic library
Both have same purpose: to be loaded dynamically.
Windows uses PE binary format and linux uses ELF.
PE:
http://en.wikipedia.org/wiki/Portable_Executable
ELF:
http://en.wikipedia.org/wiki/Executable_and_Linkable_Format
I assume a Linux OS.
On Linux, static libraries (.a, also called archives) are used for linking at build time, while shared objects (.so) are used for linking both at load time and at run time.
In your case, it seems that for some reason the build differentiates the file used for load-time linking (libEXAMPLE.so) from the one used for run-time linking (EXAMPLE.so), even though those two files are exactly the same.