I'm building a shared object file from a group of C++ source files using GCC. All the example tutorials on building .so files show the file created with a version number after the .so suffix. For example:
gcc -shared -Wl,-soname,libmean.so.1 -o libmean.so.1.0.1 calc_mean.o
This would produce the .so file libmean.so.1.0.1
Additionally, if I browse the /usr/lib directory on my local machine, I see that many of the .so files have version numbers at the end.
However, when I compile a shared object file and place it in /usr/lib, the linker is unable to find it if I put a version number at the end. If I remove the version number, it works fine. I don't particularly care whether I use a version number or not; I just don't understand why this seems to be such a common convention when it apparently keeps the shared library from working with the linker. So, what's going on here? Why is there a convention to place the version number at the end of an .so file name?
The version number is appended so you can have multiple incompatible library versions coexisting in the system. You should increment the major version number (the number in soname) every time you change the API in an incompatible way (assuming the previous version is installed and used in the system, of course).
The 2nd and 3rd numbers in the file name allow for multiple minor revisions of the library on the system, switchable system-wide with a simple symlink update.
At link time you can give the .so file name as a linker argument instead of the -l option. The linker is smart enough to extract the soname from it, and a binary linked this way uses that soname to find the library.
For example, let's compile the library and test binary using it:
czajnik@czajnik:~/z$ gcc -shared -Wl,-soname,libtest.so.2 -o libtest.so.2.3.4 a.c
czajnik@czajnik:~/z$ gcc -o test b.c -L. ./libtest.so.2.3.4
You can use ldd to verify that the binary now looks for libtest.so.2:
czajnik@czajnik:~/z$ LD_LIBRARY_PATH=. ldd ./test
linux-gate.so.1 => (0x002c1000)
libtest.so.2 => not found
libc.so.6 => /lib/libc.so.6 (0x00446000)
/lib/ld-linux.so.2 (0x00a28000)
It obviously can't find it, but that's what the symlink is for:
czajnik@czajnik:~/z$ ln -s libtest.so.2.3.4 libtest.so.2
czajnik@czajnik:~/z$ LD_LIBRARY_PATH=. ldd ./test
linux-gate.so.1 => (0x00d75000)
libtest.so.2 => ./libtest.so.2 (0x00e31000)
libc.so.6 => /lib/libc.so.6 (0x00a5e000)
/lib/ld-linux.so.2 (0x00378000)
Update: All of the above is true, yet I wasn't aware of the meaning of the 3rd component of the version number. Until recently, I believed it was simply a patch number (or something similar). Wrong! For libtool it has a special meaning.
The 3rd component turned out to be the age field, which says with how many previous major versions the current one is backward compatible.
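For example (a sketch; the library name and version numbers are made up), libtool's -version-info current:revision:age maps onto the Linux file name as lib<name>.so.(current-age).(age).(revision):
# Hypothetical libtool link step using -version-info 5:3:2:
libtool --mode=link gcc -version-info 5:3:2 -rpath /usr/local/lib -o libfoo.la foo.lo
# On Linux this produces libfoo.so.3.2.3 with soname libfoo.so.3 (major = current - age).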
Recommended reading:
Idiot's Guide to ABI Versioning
Libtool's versioning system
Related
Recently I have been configuring a new C and C++ project using CMake and LLVM Clang with Visual Studio Code, for a program in which every object and library it generates or depends on must be specified explicitly for building and linking. By design the program should be RAM-efficient, avoid the vulnerabilities introduced by unused data segments, and support cross-compilation.
That means the libraries for the target architecture and application binary interface (ABI), the functions of the C and C++ standard libraries, and the executable binary all need to be set up manually so the toolchain can process them appropriately.
Substituting libc++ for libstdc++ is, I think, a good entry point to kick off with, but it is definitely not straightforward; you may have read enough painful experiences documented elsewhere to consider it a bad idea... but it has become my obsession, and I would really appreciate your help before it drives me nuts.
Anyway, here is some information about my local machine:
Ubuntu 20.04 # v5.13.0-40-generic kernel, SMP, x86_64 instruction-set
CMake 3.23.1, built from source
gcc-10:amd64
clang-12:amd64
libstdc++-10-dev:amd64
libc++-12-dev:amd64
Clang runs fine with the command:
jacob.nielson@ubuntu:~$ clang++-12 -v -c -xc /dev/null
To keep the new project free of unexpected and irrelevant issues as much as possible, I keep the project directory layout as simple as it can be before writing a CMakeLists.txt.
To begin with, I looked up the compile (preprocessing) and link options in CMake's documentation:
-"-nostdinc"
-"-nostdinc++"
-"-nobuiltininc"
-"-nostdlib"
-"-nolibc"
-"-nostdlib++"
-"-nodefaultlibs"
Then I came up with a few simple lines in the CMakeLists.txt:
cmake_minimum_required(VERSION 3.23.0)
project(program VERSION 0.0.1)
set(TARGET program)
add_compile_options(-std=c++17)
# I'm not sure about the semantics of "-nostdinc++", "-nostdinc" and "-nobuiltininc".
# Should I add them all below in order to substitute libc++ for libstdc++, or not?
add_compile_options($<$<COMPILE_LANG_AND_ID:CXX,Clang>:-nobuiltininc>)
add_compile_options($<$<COMPILE_LANG_AND_ID:CXX,Clang>:-nostdinc++>)
add_compile_options($<$<COMPILE_LANG_AND_ID:CXX,Clang>:-nostdinc>)
# Should I add the path to the system headers with the "-isystem" option?
# Attention: a cmake generator expression does not treat a whitespace-separated group of options as one string, so the directory path follows "-isystem" without whitespace in this example.
add_compile_options(
$<$<COMPILE_LANG_AND_ID:CXX,Clang>:-isystem/usr/include/x86_64-linux-gnu>
$<$<COMPILE_LANG_AND_ID:CXX,Clang>:-isystem-after/usr/include>
# "-cxx-isystem" option is legacy and is not appropriate for standard headers. Add path to the subdirectories where you put your own c++-only headers in your project.
$<$<COMPILE_LANG_AND_ID:CXX,Clang>:-cxx-isystem ${...}>
)
# If the C and/or C++ standard library is turned off as above, should I add the path to the libc++ headers with the "-stdlib++-isystem" option?
# Attention: a cmake generator expression does not treat a whitespace-separated group of options as one string, so the directory path follows "-stdlib++-isystem" without whitespace in this example.
add_compile_options($<$<COMPILE_LANG_AND_ID:CXX,Clang>:-stdlib++-isystem/usr/lib/llvm-12/include/c++/v1>)
# Replace default C++ standard with libc++.
add_link_options(
$<$<LINK_LANG_AND_ID:CXX,Clang>:-stdlib=libc++>
)
add_executable(${TARGET} "${CMAKE_CURRENT_SOURCE_DIR}/source/${TARGET}.cpp")
target_link_libraries(${TARGET} PRIVATE
$<$<LINK_LANG_AND_ID:CXX,Clang>:c++>
$<$<LINK_LANG_AND_ID:CXX,Clang>:c++abi>
$<$<LINK_LANG_AND_ID:CXX,Clang>:unwind>
$<$<LINK_LANG_AND_ID:CXX,Clang>:ssl>
$<$<LINK_LANG_AND_ID:CXX,Clang>:crypto>
$<$<LINK_LANG_AND_ID:CXX,Clang>:pthread>
)
Next, I tried to generate "Unix Makefiles" for the project.
CMake configured successfully.
Meanwhile it generates CMakeCCompiler.cmake and CMakeCXXCompiler.cmake in the temporary folder ./build/CMakeFiles/3.23.1/; the latter contains these lines:
set(CMAKE_CXX_IMPLICIT_INCLUDE_DIRECTORIES "/usr/include/c++/10;/usr/include/x86_64-linux-gnu/c++/10;/usr/include/c++/10/backward;/usr/local/include;/usr/lib/llvm-12/lib/clang/12.0.0/include;/usr/include/x86_64-linux-gnu;/usr/include")
set(CMAKE_CXX_IMPLICIT_LINK_LIBRARIES "stdc++;m;gcc_s;gcc;c;gcc_s;gcc")
set(CMAKE_CXX_IMPLICIT_LINK_DIRECTORIES "/usr/lib/gcc/x86_64-linux-gnu/10;/usr/lib/x86_64-linux-gnu;/usr/lib64;/lib/x86_64-linux-gnu;/lib64;/usr/lib;/usr/lib/llvm-12/lib;/lib")
set(CMAKE_CXX_IMPLICIT_LINK_FRAMEWORK_DIRECTORIES "")
Note: Clang discovers header search directories heuristically; these are the directories reported by the front end.
This is not what I want: neither the headers nor the libraries of libstdc++ should be used implicitly in the project.
Also, it exports compile commands into compile_commands.json as below:
/usr/bin/clang++-12 -DLIBCXXABI_USE_COMPILER_RT=YES -DLIBCXX_USE_COMPILER_RT=YES -I/data/solution/projects/program/include -std=c++17 -stdlib=libc++ -isystem /usr/include/x86_64-linux-gnu -stdlib++-isystem /usr/lib/llvm-12/include/c++/v1 -o CMakeFiles/program.dir/source/program.cpp.o -c /data/solution/projects/program/source/program.cpp
It all looks good. But when I use ldd to check:
jacob.nielson@ubuntu:/data/solution/projects/program/build/debug$ ldd program
linux-vdso.so.1 (0x00007ffee7190000)
libc++.so.1 => /lib/x86_64-linux-gnu/libc++.so.1 (0x00007f69164a5000)
libc++abi.so.1 => /lib/x86_64-linux-gnu/libc++abi.so.1 (0x00007f691646d000)
libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f691628b000) /* LIBSTDC++ in here! */
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f691613c000) /* LIBM in here */
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f6916121000) /* GCC runtime library in here! */
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f6915f2f000) /* LIBC in here! */
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f6915f0a000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f6915f00000)
libatomic.so.1 => /lib/x86_64-linux-gnu/libatomic.so.1 (0x00007f6915ef6000)
/lib64/ld-linux-x86-64.so.2 (0x00007f691658d000)
It is frustrating to see libstdc++, libc, libm, and libgcc_s all still there. Is there an option I missed to link them statically, something like -static-libstdc++/-static-libgcc in GCC, or do I have to give up my ambition at this point?
Here is the answer to my own question.
The mistake was the missing definitions instructing Clang to use compiler-rt. The problem was solved once I added the relevant options.
To replace libstdc++ with libc++, one should not forget to add the following options:
add_compile_definitions(
$<$<COMPILE_LANG_AND_ID:CXX,Clang>:LLVM_ENABLE_RUNTIMES=libunwind>
$<$<COMPILE_LANG_AND_ID:CXX,Clang>:LIBCXX_USE_COMPILER_RT=YES>
$<$<COMPILE_LANG_AND_ID:CXX,Clang>:LIBCXXABI_USE_COMPILER_RT=YES>
$<$<COMPILE_LANG_AND_ID:CXX,Clang>:LIBUNWIND_USE_COMPILER_RT=YES>
)
add_link_options(
$<$<LINK_LANG_AND_ID:CXX,Clang>:-stdlib=libc++>
$<$<LINK_LANG_AND_ID:CXX,Clang>:-rtlib=compiler-rt>
)
AFAIK, the definitions above help instruct Clang not to link against the default libraries, but they must be used in conjunction with the compile option "-stdlib=libc++", and in particular without "-nostdlib", for the replacement to take effect. Part of the documentation is quoted below:
Compiler runtime
The default runtime library is target-specific. For targets where GCC
is the dominant compiler, Clang currently defaults to using libgcc_s.
On most other targets, compiler-rt is used by default.
However, it is still unclear to me what "target" refers to here and how one tells which compiler is dominant for a given target.
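A quick way to verify whether libstdc++ is really gone after relinking is to inspect the binary's recorded dependencies directly (a sketch, run from the build directory):
# List the DT_NEEDED entries recorded in the binary:
readelf -d ./program | grep NEEDED
# Expect no libstdc++ line once the replacement works; libgcc_s may still appear unless the unwinder is also replaced:
ldd ./program | grep -E 'stdc\+\+|gcc_s'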
I have a few related questions about my issues with compilation for an embedded system. My questions are not only about HOW to do something, but more about WHY, because I have solutions for my problems (though maybe there are better ones?), but no idea why some things work under some conditions and not under others. I have already spent some time on this, but until yesterday I was doing things a little blindly, by trial and error, without knowing what I was doing. Time to stop that! Please help.
Scenario
I want to develop an application for Xilinx's Zynq ARM processor, on a Zedboard. The app will involve multithreading, some audio manipulation, and an HTTP server, so I will need the pthread, alsa, sndfile and microhttpd libraries. I created a rootfs with Yocto. In the original local.conf file I added/modified these lines:
BB_NUMBER_THREADS ?= "${@oe.utils.cpu_count()}"
PARALLEL_MAKE ?= "-j ${@oe.utils.cpu_count()}"
MACHINE ?= "zedboard-zynq7"
PACKAGE_CLASSES ?= "package_deb"
EXTRA_IMAGE_FEATURES = "debug-tweaks eclipse-debug"
IMAGE_INSTALL_append = "libgcc alsa-utils mpg123 libstdc++ sthttpd libmicrohttpd libsndfile1"
LICENSE_FLAGS_WHITELIST = "commercial_mpg123"
I also had to add some additional layers to bblayers.conf (and of course downloaded them):
meta-xilinx
meta-multimedia (from meta-openembedded)
meta-oe (from meta-openembedded)
meta-webserver (from meta-openembedded)
Lastly, I generated core-image-minimal with bitbake.
This, together with Linux kernel, and other stuff compiled separately, boots and works fine.
Problems
1. Simple app with this rootfs
It is an app for Zynq, so I use XSDK, Xilinx's Eclipse-based SDK. I created a new Application project. In the dialog window I chose Linux as the platform and C++ as the language, and I provided the path to my unpacked rootfs (exactly the one the system boots with, via NFS). My rootfs path is /home/stas/ZedboardPetalinuxFS (it is not PetaLinux, I just used to use it, and the folder name has stayed the same). This sets the proper search paths for libraries and headers in the rootfs.
I started with something very simple:
#include <pthread.h>
int main()
{
int i;
i = 1;
return 0;
}
I also added the pthread library for the linker (in the Eclipse settings). The linking command at this point:
arm-linux-gnueabihf-g++ -L"/home/stas/ZedboardPetalinuxFS/usr/lib" -L"/home/stas/ZedboardPetalinuxFS/lib" -o "test.elf" ./src/main.o -lpthread
At this point it compiles. But it breaks when I add the sndfile header:
#include <sndfile.h>
This is reasonable, because this rootfs does not have all the headers. I need to add another header search path, so I added the path inside the Yocto tmp folder that has all the headers used to build the rootfs. After adding it, the code compiles successfully again. But problems started when I added the sndfile library for linking. Here are the linking command and the error:
arm-linux-gnueabihf-g++ -L"/home/stas/ZedboardPetalinuxFS/usr/lib" -L"/home/stas/ZedboardPetalinuxFS/lib" -o "test.elf" ./src/main.o -lpthread -lsndfile
/opt/Xilinx/SDK/2016.4/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/5.2.1/../../../../arm-linux-gnueabihf/bin/ld: cannot find -lsndfile
I looked in usr/lib to check whether libsndfile.so is there, and I found only libsndfile.so.1 and libsndfile.so.1.27. But the same is true for pthread, and the linker does not complain about that. I decided to create libsndfile.so by hand (I symlinked it to libsndfile.so.1). The linker stopped complaining about it, but started complaining about its dependencies. So I also created .so files for all the dependencies, and their dependencies, and added them for linking. Then it succeeded. In the end, the linking command looked like this:
arm-linux-gnueabihf-g++ -L"/home/stas/ZedboardPetalinuxFS/usr/lib" -L"/home/stas/ZedboardPetalinuxFS/lib" -o "test.elf" ./src/main.o -lpthread -lvorbisenc -lvorbis -logg -lFLAC -lsndfile
So here is the first question: why did I not need a .so file for pthread, but needed one for all the other libraries? Or more generally: when do I need a .so file, and when is a .so.X file enough?
2. Simple app - another approach
After the first try, I thought I should make another image, this time more suitable for development. Luckily, in Yocto it is quite easy – I just had to modify one line:
EXTRA_IMAGE_FEATURES = "debug-tweaks eclipse-debug dev-pkgs"
The dev-pkgs option adds -dev packages for all installed packages.
So now I have a rootfs with all the needed headers, and .so files pointing where they should.
Before compiling, I removed the unnecessary include path, leaving only the one from the rootfs, and removed all the libraries except pthread and sndfile. But then I got new errors:
arm-linux-gnueabihf-g++ -L"/home/stas/ZedboardPetalinuxFS/usr/lib" -L"/home/stas/ZedboardPetalinuxFS/lib" -o "test.elf" ./src/main.o -lsndfile -lpthread
/opt/Xilinx/SDK/2016.4/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/5.2.1/../../../../arm-linux-gnueabihf/bin/ld: cannot find /lib/libpthread.so.0
makefile:48: polecenia dla obiektu 'test.elf' nie powiodły się (commands for ‘test.elf’ did not succeed)
/opt/Xilinx/SDK/2016.4/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/5.2.1/../../../../arm-linux-gnueabihf/bin/ld: cannot find /usr/lib/libpthread_nonshared.a
I spotted that it looks for the libraries under my host's root. A quick search on Google (and SO :)) told me that I should set the --sysroot option. So I added it to the Eclipse options (in the Miscellaneous tab of the linker options) like this:
--sysroot=/home/stas/ZedboardPetalinuxFS
So now linker command looked like this:
arm-linux-gnueabihf-g++ -L"/home/stas/ZedboardPetalinuxFS/usr/lib" -L"/home/stas/ZedboardPetalinuxFS/lib" --sysroot=/home/stas/ZedboardPetalinuxFS -o "test.elf" ./src/main.o -lsndfile -lpthread
And it all succeeded! I also wrote a simple example that uses pthreads and sndfile, and it also worked. But WHY? This leads me to the second question:
Why do I need the --sysroot option in this case? When do I need to use this option in general? And why didn't I have to add all the dependencies to the linking command this time?
3. Another idea
At this point I had an idea: check what happens if I add the --sysroot option while the rootfs is populated with the old, non-development image. But this gave me new errors:
arm-linux-gnueabihf-g++ -L"/home/stas/ZedboardPetalinuxFS/usr/lib" -L"/home/stas/ZedboardPetalinuxFS/lib" --sysroot=/home/stas/ZedboardPetalinuxFS -o "test.elf" ./src/main.o -lpthread -lvorbisenc -lvorbis -logg -lFLAC -lsndfile
/opt/Xilinx/SDK/2016.4/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/5.2.1/../../../../arm-linux-gnueabihf/bin/ld: cannot find crt1.o: No such file or directory
makefile:48: polecenia dla obiektu 'test.elf' nie powiodły się (commands for 'test.elf' did not succeed)
/opt/Xilinx/SDK/2016.4/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/5.2.1/../../../../arm-linux-gnueabihf/bin/ld: cannot find crti.o: No such file or directory
/opt/Xilinx/SDK/2016.4/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/5.2.1/../../../../arm-linux-gnueabihf/bin/ld: cannot find -lpthread
/opt/Xilinx/SDK/2016.4/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/5.2.1/../../../../arm-linux-gnueabihf/bin/ld: cannot find -lm
So, the third question: what do these errors mean?
Thanks very much in advance!
"why I did not needed .so file for pthread, but needed it for all
other libraries?"
Actually you do need pthread.so file. You included pthread.h but didn't link with -lpthread. So it's normal you don't see any linker errors.
"when do I need .so file, and when .so.X file is enough"
When you give "-lNAME" parameter to g++, the compiler tells the linker to find libNAME.so within library search paths. Since there may exist multiple versions of the same library(libNAME.so.1, libNAME.so.1.20), *.so files link to desired actual library file. (Versioning of shared objects, ld man pages)
"Why do I need --sysroot option in this case? When do I need to use this option in general? And why this time I didn't have to add all the dependencies to linking command?"
The "dev-pkgs" in EXTRA_IMAGE_FEATURES changes your sysroot implicitly to let you link against the dev packages(yoctoproject image-features). That's why you need -sysroot option. You generally need this option when cross compiling to provide a root for standard search paths for headers and libraries. You didn't need it because you didn't have dev-pkgs image feature that changes your sysroot
"So third question – what does this errors mean?"
Even your the most basic hello world code gets linked with standard c library(if you didn't specify otherwise). libm.so, libpthread.so and crt1.o files are parts of libc library and come with libc dev package. So the linker can't see the standard library directories when it looks from your old sysroot
why did I not need a .so file for pthread, but needed one for all the other libraries?
A cross compiler will normally come with a C Runtime (including pthread), typically in a directory that is part of the cross compiler installation.
The linker has built-in search paths for libraries. These are relative to the sysroot, which by default is set so that the cross compiler's own bundled target C runtime is searched. If you added any -L options it would search those first and then move on to these pre-defined directories.
When you linked against pthread it would have found at least libpthread.a in the cross compiler's library directory.
Or more generally: when do I need a .so file, and when is a .so.X file enough?
Shared libraries on Linux typically have a major and a minor version number. Libraries are ABI-compatible between different minor versions with the same major version, but not between major versions. Sometimes there are three levels of versions, but the principle is similar.
When installing libraries it is common to install the actual file with the full name, eg. libmy.so.1.2, then provide symlinks to libmy.so.1 and libmy.so.
If you are linking an application that can work with any version of the library, then you would just specify the name, e.g. -lmy. In that case you would need a symlink from libmy.so to libmy.so.1.
If you required a specific version you would put -l:libmy.so.1. The ':' indicates a literal file name.
Linker scripts may affect things and may result in specific versions being selected even when you do specify the short name.
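A short sketch of the two forms (the library name, paths, and the SYSROOT variable are only illustrative):
# Needs the libmy.so development symlink to exist:
arm-linux-gnueabihf-g++ -o test.elf main.o -L"$SYSROOT/usr/lib" -lmy
# Links against the literal file name, so no libmy.so symlink is required:
arm-linux-gnueabihf-g++ -o test.elf main.o -L"$SYSROOT/usr/lib" -l:libmy.so.1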
Why do I need the --sysroot option in this case? When do I need to use this option in general?
What --sysroot does is prepend the given path onto all the search directories which would normally be used to search for includes and libraries. It is most useful when cross compiling (as you are doing now) to get the compiler and the linker to search inside the target root instead of the build host's own root.
If you have specified a sysroot you probably do not need to specify include paths via -I or linker paths via -L, assuming that the files are within their normal spots inside your target root.
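For instance, with the rootfs path from the question, the link line would look roughly like this (a sketch; once --sysroot is set, the extra -I/-L options are usually unnecessary):
arm-linux-gnueabihf-g++ --sysroot=/home/stas/ZedboardPetalinuxFS -o test.elf ./src/main.o -lsndfile -lpthread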
And why didn't I have to add all the dependencies to the linking command this time?
One possible scenario is that the first time around, sndfile was statically rather than dynamically linked. This would happen if your first root image had only libsndfile.a in the lib dir, or elsewhere on the search path. To then satisfy the requirements of libsndfile.a, you would also need to link the other libs.
When linking against libsndfile.so, the dependencies are pulled in automatically by the dynamic linking process.
That's just a working theory at present.
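One way to test this theory (a sketch; the path below is the rootfs location from the question) is to look at the dependencies the shared object itself declares:
# If libsndfile was linked dynamically, its own dependencies appear as DT_NEEDED entries
# and the dynamic linker resolves them for you:
readelf -d /home/stas/ZedboardPetalinuxFS/usr/lib/libsndfile.so.1 | grep NEEDED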
"So, the third question: what do these errors mean?"
They mean it cannot find even the C runtime library to link.
As described for the first question, the linker was previously finding the C runtime in the pre-defined search paths (relative to the default sysroot), i.e. the C runtime supplied by the cross compiler.
You disturbed this by supplying your own sysroot. It was now only searching the target root. Since this target root filesystem did not have development libs installed, there was no C runtime there to find.
You are doing several things wrong:
It looks like you are not using the environment variables but calling the cross compiler directly. Instead of compiling with arm-linux-gnueabihf-g++ ..., you should use $CXX .... CXX is the environment variable set by the Yocto environment script for cross compilation. When you use CXX, you do not need to pass --sysroot manually (see the sketch after this list).
You should not link directly to the pthread library with -lpthread; you should use -pthread instead.
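A minimal sketch of that workflow (the environment-setup script name and path depend on your SDK and are only illustrative):
# Source the cross-compilation environment, then use $CXX instead of calling the compiler directly:
. /opt/poky/sdk/environment-setup-cortexa9hf-neon-poky-linux-gnueabi
$CXX -pthread -o test.elf src/main.cpp -lsndfile   # $CXX already carries --sysroot and the target flags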
When I compile a C++ program using g++ from the command line and then run ldd a.out, ldd is able to find libstdc++.a(libstdc++.so.6).
When I build a C++ Ruby extension, ldd myext.so cannot find libstdc++.a(libstdc++.so.6), and require 'myext' fails to load, complaining that it cannot find libstdc++.
If I run g++ -v I see the following output:
COLLECT_GCC=g++
COLLECT_LTO_WRAPPER=/big_long_path....
Target: powerpc-ibm-aix7.1.0.0
Configured with: ../gcc-4.8.2/configure .....
Thread model: aix
gcc version 4.8.2 (GCC)
Now if I set my LIBPATH to include that big_long_path
export LIBPATH=/big_long_path....:$LIBPATH
ldd myext.so
is able to find libstdc++ and my require 'myext' works (returns true)
This might be OK, but I would rather not have users muck with their LIBPATH. Is there something I can add to my Makefile that allows the generated myext.so to find libstdc++ (and libgcc) in the location pointed to by the big_long_path I see in the COLLECT_LTO_WRAPPER line when I run g++ -v?
Update
The first link in the accepted answer below really helped me understand what was going on, and I was able to get ldd to not complain about libstdc++ by adding -blibpath:big_long_path:/usr:/usr/lib to the LDFLAGS in the Makefile.
But for some reason, when Ruby tried to load the extension it still failed. This made me think that Ruby was somehow adjusting the LIBPATH. In the end my solution was to put symbolic links to libstdc++ and libgcc_s in the lib directory of the Ruby install. The thought was that Ruby must search that path for extension shared objects, so I figured I would take advantage of this and place these two libs in the path Ruby searches. The only thing I am wondering is whether I should just copy libstdc++ and libgcc_s rather than symbolically linking them.
It looks like you built your own gcc.
It is a known issue that gcc does not pass -rpath to the linker to specify the locations of libstdc++ and libgcc_s.
You either need to pass that path to the linker manually, or configure your own gcc to do it for you via a specs file.
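A minimal sketch of the first option, reusing the placeholder path from the question (big_long_path stands for the real GCC installation directory, and myext.o is illustrative):
# Embed the runtime search path while linking the extension (AIX linker syntax):
g++ -shared -o myext.so myext.o -Wl,-blibpath:/big_long_path/lib:/usr/lib:/lib
dump -X32_64 -H myext.so   # verify the embedded search path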
Don't let the loader search for the libraries; use the -Wl,-blibpath option. Check the result with dump -X32_64 -H; you should see something like this:
INDEX PATH BASE MEMBER
0 /usr/local/lib:/usr/lib:/lib
1 /usr/local/lib libapr-1.so.0
2 /usr/local/lib libaprutil-1.so.0
3 /usr/local/lib libcrypto.so.1.0.1f
4 /usr/local/lib libexpat.so.1
5 /usr/local/lib libgcc_s.a shr.o
6 /usr/local/lib libiconv.so.2
7 /usr/local/lib libssl.so.1.0.1f
8 /usr/local/lib libcpotlas.so.1
9 /usr/lib librtl.a shr.o
10 /usr/lib libc.a shr.o
11 .
Also, I have to say that using C++ for a plugin is a really bad idea, especially on exotic systems like AIX.
I need to build two third-party shared libraries so that their .so files can be reused by other projects. However, after the build, one of these libraries contains a hardcoded path to the other. This path is invalid on other machines and causes linker warnings. How can I prevent the full path from being embedded in the resulting .so files?
Details:
First library source: ~/dev/A
Second library source: ~/dev/B
Both of them have a configure script that generates makefiles. Library B depends on A, so first I build A:
$ ~/dev/A/configure --prefix=~/dev/A-install
$ make && make install
Then I build B:
$ ~/dev/B/configure --prefix=~/dev/B-install --with-A=~/dev/A-install
$ make && make install
Then I want to upload the contents of ~/dev/A-install and ~/dev/B-install to our file server, so other teams and build machines can use the binaries. But they get linker warnings when they try to use B:
/usr/bin/ld: warning: libA.so.2, needed by /.../deps/B/lib/libB.so, not found (try using -rpath or -rpath-link)
When I run ldd libB.so it gives:
...
libA.so.2 => /home/alex/dev/A-install/lib/libA.so.2
Obviously this path exists only on my machine and cannot be found on other machines.
How can I remove full hardcoded path from libB.so?
Thanks.
You have to use a --prefix value that will be valid in the runtime environment, for both packages!
Then you override prefix or DESTDIR (prefix replaces the configured prefix; DESTDIR is prepended to it and works more reliably) on the make command line when installing. Like:
~/dev/A$ ./configure
~/dev/A$ make
~/dev/A$ make install prefix=~/dev/A-install
~/dev/B$ ./configure --with-A=~/dev/A-install
~/dev/B$ make
~/dev/B$ make install prefix=~/dev/B-install
or (which is preferred and is how all package-building tools use it):
~/dev/A$ ./configure
~/dev/A$ make
~/dev/A$ make install DESTDIR=~/dev/A-install
~/dev/B$ ./configure --with-A=~/dev/A-install/usr/local
~/dev/B$ make
~/dev/B$ make install prefix=~/dev/B-install
because this way you are installing into ~/dev/A-install/$prefix, so with the default prefix into ~/dev/A-install/usr/local. The advantage of this latter option is that if you redefine some specific installation path without referring to prefix (say --sysconfdir=/etc), DESTDIR will still get prepended to it, whereas it would not be affected by prefix.
-Wl,-rpath,~/deps/A/lib:~/deps/B/lib:~/dev/MyApp/bin
This linker option is responsible for embedding the path inside the library. You need to remove it somehow.
Check ./configure --help to see whether some option influences this. Another possibility is to edit the makefile manually and remove this option.
== edit2 ==
One more thing
-L~/deps/A/lib -L~/deps/B/lib ~/deps/A/lib/libA.so ~/deps/B/lib/libB.so
If libA.so and libB.so don't have a SONAME, linking them as "~/deps/A/lib/libA.so" will also cause the path to be saved. The soname is set with the -Wl,-soname,<soname> linker option when building a shared library.
If the soname is set in the shared library, linking it using the "~/deps/A/lib/libA.so" form is OK.
As Jan mentioned in the comments, the better way is to use "-Llibrary/path -llibrary_name" without rpath.
-L~/deps/A/lib -L~/deps/B/lib -lA -lB
When I run ldd libB.so it gives:
libA.so.2 => /home/alex/dev/A-install/lib/libA.so.2
The low-level solution to this problem is to use the option "-soname=libA.so" when you link the libA.so library. By having SONAME defined for a shared object, the linker will not embed absolute paths when linking against that shared object.
The OP is using "configure", so this isn't an easy solution to implement unless he is willing to go into the bowels of the Makefile generated by the configure script.
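For reference, a hand-rolled sketch of what that looks like outside of configure (the source file name is illustrative):
# Give libA a soname so that consumers record "libA.so.2" rather than an absolute path:
gcc -shared -fPIC -Wl,-soname,libA.so.2 -o libA.so.2.0.0 a.c
ln -s libA.so.2.0.0 libA.so.2
ln -s libA.so.2 libA.so
readelf -d libA.so.2.0.0 | grep SONAME   # confirm the soname was embedded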
Shared libraries and executables can carry their own list of directories in which to look for shared libraries, in addition to the list in the operating system's configuration. RPATH is used to add shared library search paths that are used at runtime.
If you want a relative path to be used in RPATH, there is a special syntax that most Linux/UNIX (but not AIX) systems support - $ORIGIN or ${ORIGIN}.
$ORIGIN will expand at runtime to the directory where the binary resides - either the library or executable.
So if you put executable binaries in prefix/bin/ and shared libraries in prefix/lib/ you can add an entry to RPATH like ${ORIGIN}/../lib and this will expand at runtime to prefix/bin/../lib/
It's often a little tricky to get the syntax correct in a Makefile, because you have to escape the $ in ORIGIN so it will not be expanded. It's typical to do this in a Makefile:
g++ -Wl,-rpath=\$${ORIGIN}/../lib
Both Make and the shell will want to look in your environment for a variable called ORIGIN - so it needs to be double-escaped.
I just got caught out thinking I had the same problem.
None of the above answers helped me.
Whenever I tried
ldd libB.so
I would get in the output:
libA.so.1 => /home/me/localpath/lib/libA.so.1.0
and so I thought I had a hardcoded path. It turns out I had forgotten that I had LD_LIBRARY_PATH set for local testing. Clearing LD_LIBRARY_PATH meant that ldd found the correct installed library in /usr/lib/.
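A quick way to rule this out before digging further (a small sketch):
# Run ldd with LD_LIBRARY_PATH removed from the environment for just this command:
env -u LD_LIBRARY_PATH ldd libB.so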
Perhaps using the -rpath and -soname options to ld could help (assuming a binutils or binutils.gold package for ld on a recent Linux system)?
I am building a C++ application that uses Intel's IPP library. This library is installed by default in /opt and requires you to set LD_LIBRARY_PATH both for compiling and for running your software (if you choose shared library linking, which I did). I have already modified my configure.ac/Makefile.am so that I do not need to set that variable when compiling, but the shared library still cannot be found at run time; how do I fix that?
I'm compiling with the -Wl,-R/path/to/libdir flag using g++.
Update 1:
Actually my binary program has some IPP libraries correctly linked, but just one is not:
$ ldd myprogram
linux-vdso.so.1 => (0x00007fffa93ff000)
libippacem64t.so.6.0 => /opt/intel/ipp/6.0.2.076/em64t/sharedlib/libippacem64t.so.6.0 (0x00007f22c2fa3000)
libippsem64t.so.6.0 => /opt/intel/ipp/6.0.2.076/em64t/sharedlib/libippsem64t.so.6.0 (0x00007f22c2d20000)
libippcoreem64t.so.6.0 => /opt/intel/ipp/6.0.2.076/em64t/sharedlib/libippcoreem64t.so.6.0 (0x00007f22c2c14000)
[...]
libiomp5.so => not found
libiomp5.so => not found
libiomp5.so => not found
Of course the library is there:
$ locate libiomp5.so
/opt/intel/ipp/6.0.2.076/em64t/sharedlib/libiomp5.so
By /path/to/lib do you mean path to the directory containing the library, or the path to the actual file?
The -R option, given a directory argument, is treated like -rpath by ld, which is the option you actually want here. It adds the given directory to the runtime library search path. That should work, as long as you give it the directory and not the filename. I'm fairly confident about that, having done it myself, and because it's one of the hints given by libtool:
Libraries have been installed in:
/path/to/library-directory
If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the `-LLIBDIR'
flag during linking and do at least one of the following:
add LIBDIR to the `LD_LIBRARY_PATH' environment variable
during execution
add LIBDIR to the `LD_RUN_PATH' environment variable
during linking
use the `-Wl,-rpath -Wl,LIBDIR' linker flag
have your system administrator add LIBDIR to `/etc/ld.so.conf'
(I paste this here since one of the other options could conceivably be more desirable; for example, LD_RUN_PATH can save you a makefile modification.)
As suggested by Richard Pennington, the missing library is not used directly by my application, but it is used by the shared libraries I use. Since I cannot recompile IPP, the solution to my problem was to add -liomp5 when compiling, using the -R option for the linker. This actually adds the rpath for libiomp5.so, fixing the problem!
You can check whether the path to the library is being picked up from your -R flag by running the ldd command or the readelf command on your binary. The LD_LIBRARY_PATH environment variable is an override, so it shouldn't normally be necessary.
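For example (a sketch using the program name from the question), you can confirm the rpath made it into the binary like this:
readelf -d myprogram | grep -E 'RPATH|RUNPATH'   # shows the embedded runtime search path, if any
ldd myprogram | grep libiomp5                    # should now resolve instead of "not found"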
You should use the -R option if possible.
If not, rename your executable and create a launch script that runs your executable, and in there set LD_LIBRARY_PATH just for that scope.
Depending on the platform, you can modify ld.so.conf via /etc/ld.so.conf.d (Red Hat/Fedora come to mind), which makes deploying changes to ld.so "easier" from a deployment point of view.
Besides all the useful hints posted here... you're not trying to use a 64-bit library on a 32-bit system (or vice versa, depending on other conditions), are you?
bash:
export LD_LIBRARY_PATH=/path/to/lib
tcsh:
setenv LD_LIBRARY_PATH /path/to/lib
Try configuring your ldconfig through ld.so.conf so it searches your /opt/... directory by default.
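A sketch of that approach, using the IPP directory from the question (the drop-in file name is arbitrary):
echo '/opt/intel/ipp/6.0.2.076/em64t/sharedlib' | sudo tee /etc/ld.so.conf.d/intel-ipp.conf
sudo ldconfig   # rebuild the loader cache so the new directory is searched by default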