Compile a library in a way that it's shippable - c++

I have a question regarding the compilation of a dependency (namely Poppler). Poppler is a dependency of my app, which is an Electron app.
It works fine on Windows (because the npm package node-poppler ships the Poppler Windows build). But on macOS / Linux I have to install the Poppler dependencies myself (e.g. via Homebrew), and then everything works well too.
Now I want to ship the app including Poppler, so I've started to compile Poppler myself. This partially works: I've added all the dependencies it needs via Homebrew, and after a lot of trial and error the build runs and compiles successfully.
But my problem now is that if I remove all the Homebrew dependencies after I've compiled Poppler, it stops working, because there are "links" (I don't know the exact term) to these dependencies.
e.g.
cmd: './extlib/darwin/poppler/poppler-gitlab/cmake-build-release/utils/pdfinfo -v',
stdout: '',
stderr: 'dyld[93398]: Library not loaded: /usr/local/opt/fontconfig/lib/libfontconfig.1.dylib\n' +
' Referenced from: <0062B574-27F1-35D4-BEF8-81E2F2B5EEDB> /Users/bernhard´/Coding/Sides/backend/extlib/darwin/poppler/poppler-gitlab/cmake-build-release/libpoppler.126.0.0.dylib\n' +
" Reason: tried: '/usr/local/opt/fontconfig/lib/libfontconfig.1.dylib' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/usr/local/opt/fontconfig/lib/libfontconfig.1.dylib' (no such file), '/usr/local/opt/fontconfig/lib/libfontconfig.1.dylib' (no such file), '/usr/local/lib/libfontconfig.1.dylib' (no such file), '/usr/lib/libfontconfig.1.dylib' (no such file, not in dyld cache)\n"
After a lot of reading I've found out that it might be possible to compile it statically (I'm not even sure I'm using the right term here).
I'd like to know what I need to do to be able to ship the app, and what's required to build Poppler in a way that it either has no dependencies anymore or has its dependencies in a place that I have control over and can ship with my app.
Currently I'm building with the following CMake options:
-G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_MAKE_PROGRAM=/Applications/CLion.app/Contents/bin/ninja/mac/ninja -DCMAKE_FIND_FRAMEWORK=LAST -DCMAKE_VERBOSE_MAKEFILE=ON -DPOPPLER_QT5=OFF -DPOPPLER_ENABLE_CMS=none -DPOPPLER_ENABLE_SPLASH=OFF -DPOPPLER_ENABLE_UTILS=OFF -DPOPPLER_ENABLE_GLIB=OFF -DPOPPLER_ENABLE_CPP=OFF -DPOPPLER_ENABLE_XPDF_HEADERS=OFF -DPOPPLER_ENABLE_ZLIB=OFF -DPOPPLER_ENABLE_LIBCURL=OFF -DPOPPLER_ENABLE_LIBSYSTEMD=OFF -DPOPPLER_ENABLE_OPENJPEG=OFF -DPOPPLER_ENABLE_TIFF=OFF
Thanks for any help, and sorry if I'm misusing terms, as I'm mostly a frontend dev.

You have to link your program with an RPATH option pointing at your shipped libraries (regarding RPATH, check out the ld man pages). The reason why you can't run it without those deleted dependencies is simple:
Your application is linked against the same version of the dependency as the one in your Homebrew directory, so that shared object is compatible with your executable.
The dynamic linker is only configured to look for shared objects in certain directories. So when you delete the directory, the linker no longer knows where to find these shared objects (because they no longer exist).
On Linux distributions you could, for example, run your application with LD_LIBRARY_PATH=./path_to_your/library/ ./your_executable and it would suddenly work.
RPATH solves this by telling the dynamic linker where to look for the linked dependencies.
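On macOS (which is where your dyld error comes from) the idea is the same. A minimal sketch, assuming you bundle the dylibs in a lib/ folder next to your binary (that relative layout is my assumption, not something from your setup):
# at link time: embed a run path relative to the executable
# (macOS syntax; on Linux the equivalent token is $ORIGIN)
clang++ ... -Wl,-rpath,@executable_path/../lib
# or, for an already-built binary, rewrite the recorded reference:
install_name_tool -change /usr/local/opt/fontconfig/lib/libfontconfig.1.dylib \
  @executable_path/../lib/libfontconfig.1.dylib pdfinfo
You would then ship libfontconfig.1.dylib (and its own dependencies) inside that lib/ folder.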
I would like to add a general warning here: this is not the way to go when it comes to shipping your executable (and especially libraries) on different distros. It's better to make a package, using the available -dev packages for the given distribution; otherwise you are slowly introducing dependency hell for the developers who wish to use it!
To answer your question regarding static linking: it will increase the size of your executable, so keep that in mind if you wish to take this path instead.
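If you do want to try the static route with a CMake-based project such as Poppler, the standard switch is BUILD_SHARED_LIBS. This is only a sketch, and it assumes static archives of Poppler's own dependencies (fontconfig, freetype, ...) are actually available on your machine, which Homebrew's mostly-shared packages don't guarantee:
cmake -G Ninja -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=OFF ...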

Related

Fixing error: *** No rule to make target '/usr/lib/x86_64-linux-gnu/libdl.so'

I have recently upgraded my OS (to PopOS! 22.04) and now a bunch of builds in my cmake workflow aren't compiling, halting at this particular error at the linking stage:
*** No rule to make target '/usr/lib/x86_64-linux-gnu/libdl.so'
This file no longer exists. There is, however, a libdl.so.2.
Running apt-file search /usr/lib/x86_64-linux-gnu/libdl.so gives no output.
How can I get my builds working again?
EDIT:
The solution turned out to be due to manually built dependencies/packages that obviously weren't updated during the upgrade. I had to go and rebuild them, and then the error disappeared. Note, when rebuilding them (also with CMake) it required a full deletion of the build directory, not just running CMake again.
Jammy's GNU C Library version is 2.35. The dl library is now part of the C standard library. The release notes tell us that, starting from version 2.34,
all functionality formerly
implemented in the libraries libpthread, libdl, libutil, libanl has
been integrated into libc. New applications do not need to link with
-lpthread, -ldl, -lutil, -lanl anymore. For backwards compatibility,
empty static archives libpthread.a, libdl.a, libutil.a, libanl.a are
provided, so that the linker options keep working. Applications which
have been linked against glibc 2.33 or earlier continue to load the
corresponding shared objects (which are now empty).
This means that you have to remove the explicit libdl.so from the linker dependencies.
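In CMake terms that usually means removing a hardcoded path from a target_link_libraries() call (or wiping a stale cache, as the asker did). A hedged sketch; the target name myapp is a placeholder:
# instead of the explicit /usr/lib/x86_64-linux-gnu/libdl.so entry, either drop it
# entirely or let CMake supply the portable flag (which still works via the
# compatibility stub archive on glibc >= 2.34):
target_link_libraries(myapp PRIVATE ${CMAKE_DL_LIBS})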

How to package c++ dependencies on linux

I'm developing a C++ program on Ubuntu 16.04 using CMake, compiling with g++-5 and clang++-3.8.
Now I'd like to make this program available for 14.04 too, but as I'm using a lot of C++14 features I can't just recompile it on that system. Instead, I wanted to ask if/how it is possible to package all dependencies (in particular the C++ standard library) in a way that lets me just unpack a folder on the target system and run the app.
Ideally I'm looking for some automated/scripted solution that I can add to my cmake build.
Bonus Question:
For now, this is just a simple command line program for which I can easily recompile all 3rd party dependencies (and in fact I do). In the long run, however, I'd also like to port a Qt application. Ideally the solution would also work for that scenario.
The worst part of your constraints is the incompatible standard library.
You have to link it statically anyway (see comments to your answer).
A number of options:
Completely static linking:
I think it's the easiest way for you, but it requires that you can build (or obtain somehow) all third-party libs as static libraries. If you can't for some reason, this option is out.
You just build your app as usual and then link all the libs you need statically (see the documentation for your compiler). That way you get a completely dependency-free executable; it will work on any ABI-compatible system (you may need to check whether an x86 executable works on x86_64).
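A sketch of what that can look like with g++ (the flags are real, but myapp/main.cpp and the library set are placeholders; -static needs .a archives for every library involved, while the -static-lib* flags only cover the compiler runtime and standard library):
# everything static (requires static archives of all third-party libs):
g++ -std=c++14 -o myapp main.cpp -static
# or only the runtime and the standard library:
g++ -std=c++14 -o myapp main.cpp -static-libstdc++ -static-libgcc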
Partially static linking
You link statically everything you can and dynamically the rest. You then distribute all the dynamic libs (*.so) along with your app (in a path/to/app/lib or path/to/app/ folder), so you don't depend on the system libraries. Create a deb package which puts all the files into an /opt or $HOME/appname folder. You have to make the dynamic libs findable either "by hand" (e.g. via LD_LIBRARY_PATH) or by asking the linker to embed the search path at link time (see the documentation).
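A hedged sketch of the "ship the .so next to the app" variant, using an rpath relative to the executable (libfoo and the lib/ layout are placeholders):
# link against the bundled copy and record a relative search path:
g++ -o myapp main.cpp -L./lib -lfoo -Wl,-rpath,'$ORIGIN/lib'
# then ship myapp together with lib/libfoo.so (and lib/libstdc++.so.6 if needed)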
Docker container
I don't know much about it, but I do know that it requires Docker to be installed on the target system (so it's not an option for you).
Useful links:
g++ link options
static linking manual
Finding Dynamic or Shared Libraries
There are similar docs for clang, google it.

How to use Qt app on tiny210 device?

I want to use a Qt app on a tiny210 device.
I installed Qt ( qt-everywhere-opensource-src.4.8.5 ) downloaded from here. I managed to compile a simple application for use on tiny210. The problem is that now when I try to run the app on the device, I get the following errors:
libc.so.6: version 'GLIBC_2.15' not found (required by libQtCore.so.4)
libc.so.6: version 'GLIBC_2.15' not found (required by libQtNetwork.so.4)
There is a libc.so.6 in /lib/ on the target device, but it is version 2.11.
I should mention that before getting those errors I also got errors for not having libQtCore.so.4, libQtNetwork.so.4 and libQtGui.so.4. I fixed those errors just by copying the compiled libraries from my host PC to the device.
First question is: Would there have been a better way to provide the needed libraries, or copying them is fine?
Second question is: How can I get over the errors mentioned above?
EDIT : I've read something about building it static, but I am not sure how, and what are the downsides of this.
EDIT2 : I managed to get over the above errors thanks to artless noise's answer, but now I get: error loading shared libraries: libQtGui.so.4: cannot open shared object file: No such file or directory.
The issue is that the cross-compiler (apt-get install gcc-arm-linux-gnueabi) targets ARM, and this cross-compiler ships a newer glibc than the one on the ARM device. You can copy the libc from the cross-compiler directory to your ARM device. I suggest testing with LD_LIBRARY_PATH before updating the main libraries. Use ls /var/lib/dpkg/info/*arm-linux*.list to see most packages related to the ARM compiler. You can use grep to figure out where the libraries are (or fancier tools like apt-file, etc.).
Crosstool-ng has a populate script which is perfect for your issue, but I don't see it in the Ubuntu packages. If it is present in your Debian version, I would use it.
The glibc 2.15 is backwards compatible with the glibc 2.11 which is currently on your system. Issues may arise if the compiler was configured with different options (different ABI); however if this is the case, you will have many issues with your built Qt besides the library. In this case, you need to find a better compiler which fits your root filesystem.
So to be clear, on the target
mkdir /lib/staging
cp libc.so-2.15 /lib/staging
cd /lib/staging
ln -s libc.so-2.15 libc.so
LD_LIBRARY_PATH=/lib/staging ls # test the library
You may have to copy additional libraries, such as pthread, resolv, rt, crypt, etc. The files are probably in a directory like sysroot/lib. You can copy the whole directory to /lib/staging to test it. If the ls above works, then the compilers should be ABI compatible. If you get a crash or a "not an executable" error, then the compiler and rootfs may not be compatible.
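For the "copy the whole directory" step, something along these lines should work (the sysroot location depends on your toolchain, and the device address is a placeholder):
# on the build host: push the toolchain's C library directory to the staging area
scp -r sysroot/lib/. root@tiny210:/lib/staging/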
Would there have been a better way to provide the needed libraries, or copying them is fine?
Copying may be fine as per above. If it is not fine, then either the compiler or the root filesystem must be updated.
How can I get over the errors mentioned above?
Try the above method. Also, you may be able to leave your root filesystem alone: as another solution, set up a shadow directory and use chroot to run the Qt application with the copied files. To test this, make a very simple program and put it, along with the compiler's libraries, in a test directory, say /lib/staging as above. Then the test code can be run like:
$ LD_LIBRARY_PATH=/lib/staging ./hello_world
If this doesn't work, your compiler and the ARM file system/OS are not compatible. No library magic will help.
I've read something about building it static, but I am not sure how, and what are the downsides of this.
See Linux static linking is dead. I understand this seems like a solution. However, if the compiler is wrong, this won't help. The calling convention between the OS and the libraries, and which registers are saved by the OS, will be implicit in the compiled code. You may have to rebuild Qt with -softfp, etc.

How do you link to a library from the software manager on Linux?

So I recently got fed up with Windows and installed Linux Mint. I am trying to get a project I have to build in Code::Blocks. I have installed Code::Blocks, but I need glew (as well as a few other libraries). I found it in the software manager and installed it. I've managed to locate and include the header files. But I feel like the next step should be relatively straightforward and documented all over the internet, yet (perhaps due to a lack of the proper terminology) I have so far been unable to locate an answer.
Do I need to locate the files on my system and link to each library manually? This is what I did on Windows, but there I had just downloaded the binaries and knew where they were. I found one library from the software manager and linked to it manually, but it feels like I'm doing it the wrong way. Since it's "installed" on the system, is there some quick way to link?
You should use two linker flags, '-l' and '-L'. You can set these flags somewhere in the project properties.
The first one, '-l', tells the linker to link against a particular library. For example, for glew there is probably a file named libglew.so in /usr/lib; when you link your program with the '-lglew' flag, it will link it against the glew library. The linker looks for libraries in a few standard places: /usr/lib, /usr/local/lib and a few extras. If you have your libs in a nonstandard place, use the '-L' flag to point at those dirs.
Many Linux distributions provide two kinds of packages for libraries: regular ones with just the runtime, and devel ones (usually prefixed or suffixed with dev or devel) with the header files and the development version of the libraries.
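As a concrete (hypothetical) example for glew, the link line could look like the one below; note that on many distributions the file is actually named libGLEW.so, so adjust the -l name to whatever your package installed:
# -L adds a search directory, -l picks a library by its lib<name>.so name
g++ main.cpp -o myapp -L/usr/local/lib -lGLEW -lGL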
use build systems, Luke!
the typical way to develop/build software in the *nix world is three steps:
configure stage -- before building anything you have to figure out what environment you are going to build your software in... is everything that's required installed? it wouldn't be good if at the compile stage (after a few hours of compilation) you (or the user who builds your soft) got an error: unable to #include 'xxx.h'. the most popular build systems are cmake (my favorite, after autotools); you may also try scons or maybe the crazy (b)jam...
compile stage -- usually just make all
install stage -- deploy the freshly built software into the system, or alternatively build packages for the target distro (.deb/.rpm/etc.)
at the configure stage, using test scripts (don't worry, there are plenty of them for various use cases), you can find all the required headers/libraries/programs/compiler options/whatever you need to compile your package... and yes: do not use hardcoded paths in your Makefiles (or whatever you use to make your binaries)
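As a tiny illustration of that configure-stage checking with cmake (a sketch only; the project/target names are made up and it relies on CMake's bundled FindGLEW module):
cmake_minimum_required(VERSION 3.10)
project(myapp CXX)
# fails at configure time, not after hours of compilation, if glew is missing:
find_package(GLEW REQUIRED)
add_executable(myapp main.cpp)
target_link_libraries(myapp PRIVATE GLEW::GLEW)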
The answer to this question really depends on what you want to achieve. If you just want to build your app yourself, then you can simply write the paths to the libraries in your makefile or your code editor settings. You may not even have to do that: if the libraries were installed by your Linux distribution's package manager, the headers usually go to /usr/include and the libraries to /usr/lib or /usr/lib64 etc. Those locations are standard, and you do not need to specify them explicitly. In any case you need to specify the libraries you want to link against.
If you want to create an application that can be built by others, or by you on many different configurations/environments, using something like cmake would be very helpful.

Using -rpath and $ORIGIN with libtool-based projects?

I am trying to incorporate a libtool-based package into a project of my own, perhaps in a non-standard way. Here is my goal:
Build external project:
./configure --prefix=$HOME/blah --etcetera && make && make install
Build my own project which depends upon the external project's shared libraries and executables at runtime:
gcc -I$HOME/blah/include -L$HOME/blah/lib -o $HOME/blah/bin/program
Package everything into a single "localized" tarball... that is, while I have everything in $HOME/blah on the build host I want the ability to extract the tarball to any arbitrary directory (on some other host) without having to futz with my environment. The intent is to allow multiple versions of my project to coexist side-by-side without any nasty "cross-pollination".
I know that I can use -rpath '$ORIGIN/../lib' for my project to ensure that the right shared libraries always get loaded at runtime. However, it seems that libtool insists on assigning its own -rpath setting based on the exact path of $HOME/blah/lib, which breaks if I happen to untar everything into a different directory (say, for example, $HOME/blah.2011-06-02).
Is there a way around this limitation? I see a rather lengthy rpath discussion between debian and libtool folks on the topic, but it's somewhat old and inconclusive beyond "we disagree".
Among the options presented here on the RpathIssue page of the Debian wiki, using chrpath in your 'install' step or in some post-processing script sounds like a viable option. (It's available on a bunch of distros via your favorite package manager.)
It doesn't require patching libtool, which is a plus IMO.
Note that it has a limitation: it can only save the new rpath if it's shorter than (or the same length as) the original one.
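For reference, a minimal sketch of that post-processing step, using the paths from your example (remember the length limitation above):
chrpath -l $HOME/blah/bin/program                   # show the rpath libtool recorded
chrpath -r '$ORIGIN/../lib' $HOME/blah/bin/program  # replace it with a relative one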
The other (pragmatic) option is to remove the rpath (chrpath can do that), and just have a wrapper script that sets LD_LIBRARY_PATH to whatever is necessary for your app. That has a chance of being slightly more portable too (if you handle the other shared library path environment vars some OSes have).
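The rpath itself can be stripped with chrpath -d. A sketch of such a wrapper (the program.real name and the bin/../lib layout are assumptions):
#!/bin/sh
# launcher: find the directory this script lives in and point the dynamic
# linker at the libraries shipped next to it before starting the real binary
HERE="$(cd "$(dirname "$0")" && pwd)"
LD_LIBRARY_PATH="$HERE/../lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" exec "$HERE/program.real" "$@"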