Shared objects and inclusion without installation in C++ under Linux

I'm writing a program that has two libraries that I need to use: v8 and v8-juice. Unfortunately, v8-juice can't be compiled as a static library due to some stuff it does with templates. There are some other quirks with it that require v8 to be compiled as a shared object as well.
So, when I compile my program, I end up having two shared objects that are needed for the executable to run. My question is: is there a way I can include these shared objects without installing them under Linux? Sorry if it's a newbish question, I'm fairly new to C++.

Shared libraries can be in the same folder as your executable. man ld.so:
$ORIGIN and rpath
ld.so understands the string $ORIGIN (or equivalently ${ORIGIN}) in an rpath specification (DT_RPATH or DT_RUNPATH) to mean the directory containing the application executable. Thus, an application located in somedir/app could be compiled with gcc -Wl,-rpath,'$ORIGIN/../lib' so that it finds an associated shared library in somedir/lib no matter where somedir is located in the directory hierarchy. This facilitates the creation of "turn-key" applications that do not need to be installed into special directories, but can instead be unpacked into any directory and still find their own shared libraries.
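For your case that could mean shipping a layout like somedir/bin/app with somedir/lib containing the two shared objects, and linking roughly like this (a sketch; the library names libv8.so and libv8-juice.so are assumed here):

g++ main.cpp -o bin/app -Llib -lv8 -lv8-juice -Wl,-rpath,'$ORIGIN/../lib'

The single quotes keep the shell from expanding $ORIGIN; the executable then looks for its shared objects in ../lib relative to wherever it ends up, with no installation step required.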

Related

Compile C++ project with all dependencies into a single binary file

I have a C++ project and I want to compile it into a single binary file which contains all the shared libraries, third-parties, and so on.
I want to move the compiled file into different servers without installing any dependencies. That's why I need an all-in-one/bundled binary file.
For example, I've tried to compile this:
g++ sample.cpp -o sample -Wl,-Bstatic -l:libyaml-cpp.so
But I got this error, even though I really do have the libyaml-cpp.so file in the /usr/lib directory:
/usr/bin/ld: attempted static link of dynamic object `/usr/lib/gcc/x86_64-pc-linux-gnu/11.1.0/../../../../lib/libyaml-cpp.so'
collect2: error: ld returned 1 exit status
What's the best way to fix the issue and generally have a compiled file with all dependencies in GNU/Linux platform?
Normally you would use g++ sample.cpp -o sample -static -lyaml-cpp.
But it seems Arch tends to not include static libraries in their packages. Your options are:
Build yaml-cpp yourself to get a static library.
Distribute the dynamic library along with your executable.
To avoid having to install it in system directories, you can do one of the following:
Set LD_LIBRARY_PATH env variable when running your program, to the location of the .so.
Add -Wl,-rpath='$ORIGIN' to the linker flags, then the resulting binary will automatically try to load libraries from its directory, before checking system directories.
You can also modify rpath in existing executables and shared libraries using patchelf. This is useful when you want to include shared libraries that other shared libraries depend on, and the latter weren't compiled with -Wl,-rpath='$ORIGIN'.
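For illustration, the three approaches might look like this (paths and the dependent library name are hypothetical):

LD_LIBRARY_PATH=/opt/myapp/lib ./sample
g++ sample.cpp -o sample -L/opt/myapp/lib -lyaml-cpp -Wl,-rpath='$ORIGIN'
patchelf --set-rpath '$ORIGIN' libdependency.so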
You're explicitly demanding linkage against a shared object (.so)!
-l:libyaml-cpp.so
If your linker can't find -lyaml-cpp, you seem to be missing the necessary files; that should usually not be the case, since installing the development files for libyaml-cpp should have provided them.
Generally speaking, there are two routes to go to ensure your program only consists of a single (executable) file:
Static Linking
You can compile a single binary, provided you have access to static libraries for all your dependencies (typically named something like lib*.a). Static libraries behave very similarly to ordinary object files when linking the program. Note that your question explicitly refers to a dynamic library (lib*.so), which is the other kind of library, compiled to be loaded from a file at run time.
Be aware that glibc does not like to be statically linked, and may not support it at all.
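If you can get hold of the static archive, the link could look like either of these (archive path hypothetical):

g++ sample.cpp -o sample -Wl,-Bstatic -lyaml-cpp -Wl,-Bdynamic
g++ sample.cpp -o sample /path/to/libyaml-cpp.a

The -Wl,-Bdynamic at the end switches back to dynamic linking so that the system libraries appended by g++ (libc, libstdc++, ...) are still linked dynamically.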
Embed Shared Object as Resource
Alternatively, you can embed shared objects (and other resources) into your executable and unpack it for execution. This can be done in a variety of ways (think self-extracting zip archives, files stored in global variables as byte arrays, this question may get you started).
You will then have to dump the shared libraries into files (e.g., on a tmpfs) and dlopen them.
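A minimal sketch of that idea, assuming the library bytes have been turned into an object file at build time (e.g. with xxd -i liblibrary.so, compiled as C; all names here are hypothetical), and linking with -ldl:

#include <dlfcn.h>      // dlopen, dlerror
#include <unistd.h>     // write, close, unlink
#include <cstdlib>      // mkstemp

extern "C" {
    extern unsigned char liblibrary_so[];      // embedded .so bytes
    extern unsigned int  liblibrary_so_len;
}

// Write the embedded bytes to a temporary file, dlopen it, then remove the file.
void* load_embedded_library() {
    char path[] = "/tmp/liblibrary-XXXXXX";
    int fd = mkstemp(path);                    // unique temporary file
    if (fd == -1) return nullptr;
    ssize_t n = write(fd, liblibrary_so, liblibrary_so_len);
    close(fd);
    void* handle = nullptr;
    if (n == (ssize_t)liblibrary_so_len)
        handle = dlopen(path, RTLD_NOW);       // load the freshly written .so
    unlink(path);                              // the mapping survives unlinking
    return handle;                             // nullptr on failure (see dlerror())
}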

Do shared libraries (.so) files need to present (or specified) at link time?

Do shared libraries (.so) files need to present (or specified) at link time?
I've read here (Difference between shared objects (.so), static libraries (.a), and DLL's (.so)?) that .so files must be present at compile time, but in my experience this is not true?
Don't shared libraries just get linked at runtime using dlopen and dlsym, so that the library may not even be present on the system when the application is linked?
Most shared libraries need to be present both at build time and at run time. Notice that shared libraries are not DLLs (which is a Windows thing).
I assume you code for Linux. Details are different on other OSes (and they matter).
For example, if you are compiling a Qt application, you need the Qt shared libraries (such as /usr/lib/x86_64-linux-gnu/libQt5Gui.so and many others) both when building your application and when running it. Read about the dynamic linker ld-linux.so(8) & about ELF.
But you are asking about dynamic loading (using dlopen(3) with dlsym(3) ...) of plugins.
Then read Levine's Linkers & Loaders, Program Library HowTo, C++ dlopen mini HowTo, and Drepper's How To Write Shared Libraries
See also this answer.
Some libraries and frameworks try to abstract the loading of plugins in an OS-neutral manner. Read e.g. about Qt plugins support, or about POCO shared libraries (poorly named, it is about plugins).
You can have it both ways; both work.
When the library is present at compile time, instead of obtaining it via dlopen/LoadLibrary explicitly, you can call all of its functions directly.
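A minimal sketch of the runtime route, with a hypothetical libfoo.so exporting foo_compute; the compile-time route would instead declare the function and link with -lfoo:

#include <dlfcn.h>
#include <iostream>

int main() {
    // With the library present at link time you could simply write:
    //   extern "C" int foo_compute(int);   int r = foo_compute(42);
    // With runtime loading, the library only needs to exist when this runs:
    void* h = dlopen("libfoo.so", RTLD_LAZY);
    if (!h) { std::cerr << dlerror() << '\n'; return 1; }
    auto compute = reinterpret_cast<int (*)(int)>(dlsym(h, "foo_compute"));
    if (compute) std::cout << compute(42) << '\n';
    dlclose(h);
    return 0;
}

Build with g++ app.cpp -o app -ldl; note that no -lfoo is needed at link time.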

Linking libraries in c++

I have a C++ file a.cpp with a library dependency located in /home/name/lib; the library is named abc.so.
I do the compilation as follows:
g++ a.cpp -L/home/name/lib -labc
This compiles the program with no errors.
However while running the program, I get the ERROR:
./a.out: error while loading shared libraries: libabc.so.0: cannot open shared object file: No such file or directory
However if before running the program, I add the library path as
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/name/lib;
and compile and run now, it works fine.
Why am I not able to link the library by giving it from the g++ command?
Because shared object libraries are loaded at runtime - you either need to add their location to the library search path (as you've done), place them somewhere within an already existing path (like /usr/lib), or add a symbolic link from an existing library path to the location where the library exists.
If you want to compile the library in so there is no runtime dependency, you'll need a static library (which would be abc.a) - note that this has numerous downsides, however (if the library is updated, you'll need to recompile your executable to incorporate that update).
Why am I not able to link the library by giving it from the g++ command?
You are able to link, and you did link the library successfully; otherwise you would not have been able to build the executable (in your case, a.out). The problem is that you mixed up two different things: linking against shared libraries and loading them at runtime. Loading shared libraries is a fairly complex topic and is described quite well in the Program-Library-HOWTO; read from section 3.2.
You are linking dynamically, which is the default behavior with GCC. LD_LIBRARY_PATH is used to specify directories in which to look for libraries (it is a way of enforcing the use of a specific library); read the Program-Library-HOWTO for more info. There is also an ld option, -rpath, to specify a library search path for the binary being compiled (this information is written into the binary and used only for that binary, whereas LD_LIBRARY_PATH affects other apps using the same library, which may expect a newer or older version).
Linking statically is also possible (but a little tricky), in which case no runtime dependency is required. However, it is sometimes not recommended, because it prevents updating the dependent libraries (for example, for security reasons): with static linking you are always using the versions of the libraries you had when you compiled the binary.
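For example, to bake the search path into the binary at link time instead of exporting LD_LIBRARY_PATH (using the paths from the question):

g++ a.cpp -L/home/name/lib -labc -Wl,-rpath,/home/name/lib
./a.out      # now finds libabc.so.0 without LD_LIBRARY_PATH being set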

libgomp.so.1: cannot open shared object file

I am using OpenMP in my C++ code.
libgomp.so.1 exists in my lib folder. I have also added its path to LD_LIBRARY_PATH.
Still at run-time I get the error message: libgomp.so.1: cannot open shared object file
At Compile time I compile my code with -fopenmp option.
Any idea what can cause the problem?
Thanks
Use static linking for your program. In your case, that means using -fopenmp -static, and if necessary specifying the full paths to the relevant librt.a and libgomp.a libraries.
This solves your problem because static linking simply packages all the code necessary to run your program together with your binary. Therefore, your target system does not need to look up any dynamic libraries; it doesn't even matter whether they are present on the target system.
Note that static linking is not a miracle cure. For your particular problem with the weird hardware emulator, it should be a good approach. In general, however, there are (at least) two downsides to static linking:
Binary size: imagine linking all your KDE programs statically; you would essentially have hundreds of copies of all the KDE/Qt libs on your system, when you could have just one if you used shared libraries.
Update paths: suppose a security problem is found in a library x. With shared libraries, it's enough to simply update the library once a patch is available. If all your applications were statically linked, you would have to wait for all of those developers to re-link and re-release their applications.
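Concretely, the static build and a quick check could look like this (program name hypothetical; if the linker complains about missing libgomp.a or librt.a, install the static variants or pass their full paths on the command line):

g++ -fopenmp -static myprogram.cpp -o myprogram
ldd myprogram      # should report "not a dynamic executable"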

Should I create .a or .so when packaging my code as a library?

I have a software library and I used to create .a files, so that people can install them and link against them: g++ foo.o -L/path/to -llibrary
But now I often encounter third-party libraries where only .so files are available (instead of .a), and you just link against them without the -l switch, e.g. g++ foo.o /path/to/liblibrary.so.
What are the differences between these solutions? Should I prefer creating .so files for the users of my library?
Typically, libfoo.a is a static library and libfoo.so is a shared library. You can use the same -L/-l linker options against either a static or a shared library, or you can name the full path to the lib, whether static or shared. Often libraries are built both static and shared to give application developers the choice of which they want.
All the code needed from a static lib is part of the final executable. This obviously makes it bigger, but it also means it's self-contained. Once it is compiled, you can run your app without the lib.
Code from a shared lib is not part of the executable. There are just some hooks in place to make the executable aware of the name of the lib it needs. In order to run your app, the shared lib has to be present in the lib search path (e.g. $LD_LIBRARY_PATH).
If you have two apps that share the same code, they can each link against a shared lib to keep the binary size down. If you want to upgrade parts of the app without rebuilding the whole thing, shared libs are good for that too.
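If you want to offer your users that choice, building both variants usually looks something like this (file names hypothetical):

g++ -c -fPIC foo.cpp -o foo.o            # position-independent code, needed for the .so
g++ -shared -o libfoo.so foo.o           # shared library
ar rcs libfoo.a foo.o                    # static archive
g++ main.o -L. -lfoo -o app              # picks libfoo.so if present, otherwise libfoo.a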
Good overview of static, shared dynamic and loadable libraries at
http://www.yolinux.com/TUTORIALS/LibraryArchives-StaticAndDynamic.html
Some features that aren't really called out from comments I've seen so far.
Static linkage (.a/.lib)
Sharing memory between these compilation units is generally OK because they should (or will) all be using the same runtime.
Static linkage means you avoid 'DLL hell', but the cost is recompilation to make use of any change at all. Static linkage into shared libraries (.so) can lead to strange results if more than one such shared library is used by the final executable: global variables may exist multiple times, and which one is used and when they are initialised can cause an entirely different hell.
The library will be part of the shipped product but obfuscated and not directly usable.
Shared/Dynamic libraries (.so/.dll)
Sharing memory between these compilation units can be hazardous, as they may choose to use different runtimes. This can mean you have to provide different shared/dynamic libraries based on debug/release, single/multi-threaded, and so on.
Shared libraries (.so) are less prone to 'DLL hell' than dynamic libraries (.dll), as they include options for quite specific versioning.
Compiling against a .so will capture version information internal to the file (hard to fake), so that you get quite specific .so usage. Compiling against a .lib/.dll only gives a basic file name; any versioning is managed by the developer (using naming conventions, or by manually loading the library and checking version details by hand).
The library will have to ship with the final product (somebody else can pick it up and use it)
But now I often encounter third-party libraries where only .so files are available [...] and you just link against them without the -l switch, e.g. g++ foo.o /path/to/liblibrary.so.
JFYI, if you link to a shared library which does not have a SONAME set (check with readelf -a liblibrary.so), you will end up putting the specified path of liblibrary.so into your target object (executable or another shared library), which is usually undesired, since users have their own ideas of where to put a program and its associated files. The preferred way is to use -L/path/to -llibrary, perhaps together with -Wl,-rpath,/whatever/path/to if this is the final path (such pathing decisions are made by Linux distributions, for example).
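For example, setting the SONAME when building the shared library and verifying it afterwards might look like this (file and version names hypothetical):

g++ -shared -Wl,-soname,liblibrary.so.1 -o liblibrary.so.1.0 library.o
ln -s liblibrary.so.1.0 liblibrary.so.1     # the name recorded in dependents (the SONAME)
ln -s liblibrary.so.1 liblibrary.so         # the name used by -llibrary at link time
readelf -d liblibrary.so.1.0 | grep SONAME  # confirm the SONAME was embedded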
Should I prefer creating .so files for the users of my library?
If you distribute source code, the user will make the particular choice.