I'm working with Awesomium on Linux; the SDK only provides a shared library: libawesomium-1.6.3.so. Some libraries on my machine have lower versions than what Awesomium requires:
$ ldd libawesomium-1.6.3.so
libawesomium-1.6.3.so: /usr/lib/libjpeg.so.62: no version information available (required by libawesomium-1.6.3.so)
So when I compile with g++ -lawesomium-1.6.3 ..., I get errors like the one below:
libawesomium-1.6.3.so: undefined reference to 'jpeg_finish_output@LIBJPEG_6.2'
I know updating the jpeg library would solve the issue, but I don't have root permission on the Linux machine.
So I'm wondering whether there is a way to specify a relative path to a new libjpeg.so for libawesomium-1.6.3.so to use.
Update (cannot comment on the answers):
I tried adding the -L/path/to/new/libjpeg.8 -ljpeg flags, and the following warning shows up:
/usr/bin/ld: warning: libjpeg.so.62, needed by libawesomium-1.6.3.so, may conflict with libjpeg.so.8
And the compilation still fails. I think the issue is that libjpeg is referenced indirectly by libawesomium, not directly by my code.
Use the -L option. But use it before -ljpeg!
When compiling, use the -L option as fge said. But to run it, you will have to add the path to your library to the LD_LIBRARY_PATH environment variable (see §3.3.1 here).
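A minimal sketch of both steps, assuming the newer libjpeg was unpacked into $HOME/local/lib (that path is illustrative):

# compile: search $HOME/local/lib before the system directories
g++ main.cpp -o app -L$HOME/local/lib -lawesomium-1.6.3 -ljpeg

# run: let the dynamic linker pick up the newer libjpeg at runtime
LD_LIBRARY_PATH=$HOME/local/lib:$LD_LIBRARY_PATH ./app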
Related
As part of a research project I'm trying to use clang 6.0.1 with Xcode 9.4.1. I've built and installed clang in a custom location (/opt/llvm-6_0_1/clang). I wrote a simple xcplugin compiler specification to integrate my clang version with Xcode.
Now I can open projects in Xcode, select my proxy compiler and use it to build instead of Apple's default clang.
There were some minor additions that I had to make to the xcplugin's xcspec file to get this to work that probably won't be interesting to most people, so I won't provide the details here unless asked.
This all works with most of the projects I've played with, but I'm running into an odd problem where an implicitly linked static library cannot be found by my copy of clang. Specifically I get this error:
ld: file not found: /opt/llvm-6_0_1/clang/Toolchains/LLVM6.0.1.xctoolchain/usr/lib/arc/libarclite_macosx.a
clang-6.0: error: linker command failed with exit code 1 (use -v to see invocation)
Note that the libarclite_macosx.a file is not explicitly included by the Xcode project. I figured it must be implicitly included, perhaps because this project enables ARC?
After poring over the Xcode-generated link command line (it's complex) I decided to look at the MyProject__dependency_info.dat file, which is passed in via the -dependency_info option. Apparently this data file (the path is defined as env var LD_DEPENDENCY_INFO_FILE) is created during the linking process, not as an input to the linker. Perhaps it exists because of a hack workaround using symlinks that I used to get a link to work (described at the end).
In any case the format appears to be binary, but I was able to see a text reference to libarclite_macosx.a in the file:
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/arc/libarclite_macosx.a
After enabling the -Xlinker -v option I could see that my built clang was not searching the default toolchain lib or arc paths so I added them:
-L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib
-L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/arc
Now I can see the search paths in the verbose output, but clang still cannot find the library:
Library search paths:
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/arc
I've tried adding the paths to the framework search paths. I also tried defining the various link path env vars. Nothing has worked.
To try to get a sense of what clang is actually doing, I used fs_usage while getting the link error:
sudo fs_usage -e -w -f filesys | grep "lib/arc"
14:11:00.461965 stat64 [ 2] /opt/llvm-6_0_1/clang/Toolchains/LLVM6.0.1.xctoolchain/usr/lib/arc>>>>>>>>>>>>>>>>>>>>>> 0.000006 ld.1421614
14:11:00.461968 stat64 [ 2] /opt/llvm-6_0_1/clang/Toolchains/LLVM6.0.1.xctoolchain/usr/lib/arc>>>>>>>>>>>>>>>>>>>>>> 0.000002 ld.1421614
Clearly clang really wants to look for this file in the installed location, not the location indicated in the -dependency_info, nor in the search paths that I'm providing.
At this stage the only way I can get a build to work is to add a symlink to Xcode's "arc" directory inside my installed clang lib directory (sketched below). That "works", but is fragile and nasty.
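For reference, the symlink workaround amounts to something like this (paths taken from the errors above; a sketch of the hack, not a recommendation):

# make the custom toolchain's expected arc directory point at Xcode's copy
ln -s /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/arc \
      /opt/llvm-6_0_1/clang/Toolchains/LLVM6.0.1.xctoolchain/usr/lib/arc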
Any thoughts as to how I can get clang to find the static library where it actually lives?
There is a laptop on which I have no root privileges.
On this machine I installed a library using configure --prefix=$HOME/.usr.
After that, I have these files in ~/.usr/lib:
libXX.so.16.0.0
libXX.so.16
libXX.so
libXX.la
libXX.a
When I compile a program that invokes one of the functions provided by the library with this command:
gcc XXX.c -o xxx.out -L$HOME/.usr/lib -lXX
xxx.out was generated without warnings, but when I run it, an error like this is thrown:
./xxx.out: error while loading shared libraries: libXX.so.16: cannot open shared object file: No such file or directory
even though libXX.so.16 resides there.
My clueless assumption is that ~/.usr/lib isn't searched when xxx.out is invoked.
But what can I do to specify the path of the .so, so that xxx.out can look there for the .so file?
In addition, when I pass -static to gcc, another error happens, like this:
undefined reference to `function_proviced_by_the_very_librar'
It seems the .so does not matter even though -L and -l are given to gcc.
What should I do to build a usable executable with that library?
For other people who have the same question as I did:
I found a useful article at TLDP about this.
It introduces static, shared, and dynamically loaded libraries, as well as some example code to use them.
There are two ways to achieve that:
Use the -rpath linker option:
gcc XXX.c -o xxx.out -L$HOME/.usr/lib -lXX -Wl,-rpath=/home/user/.usr/lib
Use the LD_LIBRARY_PATH environment variable - put this line in your ~/.bashrc file:
export LD_LIBRARY_PATH=/home/user/.usr/lib
This will work even for pre-built binaries, so you can, for example, download some packages from debian.org, unpack the binaries and shared libraries into your home directory, and launch them without recompiling.
For a quick test, you can also do (in bash at least):
LD_LIBRARY_PATH=/home/user/.usr/lib ./xxx.out
which has the advantage of not changing your library path for everything else.
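To double-check what the binary will actually load, readelf and ldd are handy (a quick sanity check; this assumes the binary was linked with the -rpath option above):

readelf -d xxx.out | grep -E 'RPATH|RUNPATH'   # the search path embedded in the binary, if any
ldd xxx.out                                    # which libXX.so.16 the dynamic linker resolves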
Should it be LIBRARY_PATH instead of LD_LIBRARY_PATH?
gcc checks LIBRARY_PATH at link time, which can be seen with the -v option.
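The two variables apply at different stages; a small sketch using the paths from the question:

# LIBRARY_PATH is consulted by gcc/ld at link time (like an extra -L)
LIBRARY_PATH=$HOME/.usr/lib gcc XXX.c -o xxx.out -lXX

# LD_LIBRARY_PATH is consulted by the dynamic loader at run time
LD_LIBRARY_PATH=$HOME/.usr/lib ./xxx.out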
I have a few related questions about my issues with compilation for an embedded system. My questions are not only about HOW to do something, but more about WHY, because I have solutions for my problems (but maybe there are better ones?) but no idea why some things work in some conditions and do not work in others. I already spent some time with this, but until yesterday I was doing things a little blindly, with trial and error, and without knowing what I was doing. Time to stop that! Please, help.
Scenario
I want to develop an application for Xilinx's Zynq ARM processor, on a Zedboard. The app will involve multithreading, some audio manipulation, and an HTTP server. So I will need the pthread, alsa, sndfile and microhttpd libraries. I created a rootfs with Yocto. In the original local.conf file I added/modified these lines:
BB_NUMBER_THREADS ?= "${@oe.utils.cpu_count()}"
PARALLEL_MAKE ?= "-j ${@oe.utils.cpu_count()}"
MACHINE ?= "zedboard-zynq7"
PACKAGE_CLASSES ?= "package_deb"
EXTRA_IMAGE_FEATURES = "debug-tweaks eclipse-debug"
IMAGE_INSTALL_append = "libgcc alsa-utils mpg123 libstdc++ sthttpd libmicrohttpd libsndfile1"
LICENSE_FLAGS_WHITELIST = "commercial_mpg123"
I also had to add some additional layers to bblayers.conf (and of course downloaded them):
meta-xilinx
meta-multimedia (from meta-openembedded)
meta-oe (from meta-openembedded)
meta-webserver (from meta-openembedded)
Lastly, I generated core-image-minimal with bitbake.
This, together with Linux kernel, and other stuff compiled separately, boots and works fine.
Problems
1. Simple app with this rootfs
It is an app for Zynq, so I use XSDK, which is the SDK from Xilinx, based on Eclipse. I created a new Application project. In the dialog window I chose Linux as the platform, C++ as the language, and I provided the path to my unpacked rootfs (exactly the one that the system boots with, via NFS). My rootfs path is /home/stas/ZedboardPetalinuxFS (it is not Petalinux, I just used to use it, and this folder name is still the same). This sets the proper paths for the library and header searches in the rootfs.
I started with something very simple:
#include <pthread.h>

int main()
{
    int i;
    i = 1;
    return 0;
}
I also added the pthread library for the linker (in the Eclipse settings). The linking command at this point:
arm-linux-gnueabihf-g++ -L"/home/stas/ZedboardPetalinuxFS/usr/lib" -L"/home/stas/ZedboardPetalinuxFS/lib" -o "test.elf" ./src/main.o -lpthread
At this point it compiles. But it stops when I add the sndfile header:
#include <sndfile.h>
This is reasonable, because this rootfs does not have all the headers. I need to add another path to search for headers, so I added a path inside the Yocto tmp folder that has all the headers that were needed for building the rootfs. After I add it, it compiles successfully again. But problems started when I added the sndfile library for linking. Here is the linking command and the error:
arm-linux-gnueabihf-g++ -L"/home/stas/ZedboardPetalinuxFS/usr/lib" -L"/home/stas/ZedboardPetalinuxFS/lib" -o "test.elf" ./src/main.o -lpthread -lsndfile
/opt/Xilinx/SDK/2016.4/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/5.2.1/../../../../arm-linux-gnueabihf/bin/ld: cannot find -lsndfile
I looked into usr/lib to check whether libsndfile.so is there, and I found only libsndfile.so.1 and libsndfile.so.1.27. But that is also the case for pthread, and the linker does not complain about that. I decided to create libsndfile.so by hand (I linked it to libsndfile.so.1). The linker stopped complaining about it, but started complaining about its dependencies. So I also created .so files for all the dependencies, and their dependencies, and added them for linking. Then it succeeded. At the end, the linking command looked like this:
arm-linux-gnueabihf-g++ -L"/home/stas/ZedboardPetalinuxFS/usr/lib" -L"/home/stas/ZedboardPetalinuxFS/lib" -o "test.elf" ./src/main.o -lpthread -lvorbisenc -lvorbis -logg -lFLAC -lsndfile
So here goes the first question: why did I not need a .so file for pthread, but needed it for all the other libraries? Or more generally: when do I need a .so file, and when is a .so.X file enough?
2. Simple app - another approach
After the first try, I thought I should make another image, this time more suitable for development. Luckily, in Yocto it is quite easy – I just had to modify one line:
EXTRA_IMAGE_FEATURES = "debug-tweaks eclipse-debug dev-pkgs"
The dev-pkgs option adds -dev packages for all installed packages.
So now I have a rootfs with all the needed headers, and .so files pointing where they should.
Before compilation, I removed the unnecessary include path, leaving only the one from the rootfs, and removed all the libraries except pthread and sndfile. But then I got new errors:
arm-linux-gnueabihf-g++ -L"/home/stas/ZedboardPetalinuxFS/usr/lib" -L"/home/stas/ZedboardPetalinuxFS/lib" -o "test.elf" ./src/main.o -lsndfile -lpthread
/opt/Xilinx/SDK/2016.4/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/5.2.1/../../../../arm-linux-gnueabihf/bin/ld: cannot find /lib/libpthread.so.0
makefile:48: polecenia dla obiektu 'test.elf' nie powiodły się (commands for ‘test.elf’ did not succeed)
/opt/Xilinx/SDK/2016.4/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/5.2.1/../../../../arm-linux-gnueabihf/bin/ld: cannot find /usr/lib/libpthread_nonshared.a
I spotted that it looks for libraries in my root folder. A quick search on Google (and SO :)) told me that I should set the --sysroot option. So I added it to the Eclipse options (in the Miscellaneous tab of the Linker options) like this:
--sysroot=/home/stas/ZedboardPetalinuxFS
So now linker command looked like this:
arm-linux-gnueabihf-g++ -L"/home/stas/ZedboardPetalinuxFS/usr/lib" -L"/home/stas/ZedboardPetalinuxFS/lib" --sysroot=/home/stas/ZedboardPetalinuxFS -o "test.elf" ./src/main.o -lsndfile -lpthread
And everything succeeded! I also wrote a simple example that uses pthreads and sndfile, and it also worked. But WHY? This leads me to the second question:
Why do I need the --sysroot option in this case? When do I need to use this option in general? And why didn't I have to add all the dependencies to the linking command this time?
3. Another idea
At this point, I had an idea: check what happens if I add the --sysroot option while the rootfs is populated with the old, non-development image. But this gave me new errors:
arm-linux-gnueabihf-g++ -L"/home/stas/ZedboardPetalinuxFS/usr/lib" -L"/home/stas/ZedboardPetalinuxFS/lib" --sysroot=/home/stas/ZedboardPetalinuxFS -o "test.elf" ./src/main.o -lpthread -lvorbisenc -lvorbis -logg -lFLAC -lsndfile
/opt/Xilinx/SDK/2016.4/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/5.2.1/../../../../arm-linux-gnueabihf/bin/ld: cannot find crt1.o: No such file or directory
makefile:48: polecenia dla obiektu 'test.elf' nie powiodły się (commands for 'test.elf' did not succeed)
/opt/Xilinx/SDK/2016.4/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/5.2.1/../../../../arm-linux-gnueabihf/bin/ld: cannot find crti.o: No such file or directory
/opt/Xilinx/SDK/2016.4/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/5.2.1/../../../../arm-linux-gnueabihf/bin/ld: cannot find -lpthread
/opt/Xilinx/SDK/2016.4/gnu/aarch32/lin/gcc-arm-linux-gnueabi/bin/../lib/gcc/arm-linux-gnueabihf/5.2.1/../../../../arm-linux-gnueabihf/bin/ld: cannot find -lm
So, third question: what do these errors mean?
Thanks very much in advance!
"why I did not needed .so file for pthread, but needed it for all
other libraries?"
Actually, you do need the pthread .so file. You included pthread.h but didn't link with -lpthread, so it's normal that you don't see any linker errors.
"when do I need .so file, and when .so.X file is enough"
When you give "-lNAME" parameter to g++, the compiler tells the linker to find libNAME.so within library search paths. Since there may exist multiple versions of the same library(libNAME.so.1, libNAME.so.1.20), *.so files link to desired actual library file. (Versioning of shared objects, ld man pages)
"Why do I need --sysroot option in this case? When do I need to use this option in general? And why this time I didn't have to add all the dependencies to linking command?"
The "dev-pkgs" in EXTRA_IMAGE_FEATURES changes your sysroot implicitly to let you link against the dev packages(yoctoproject image-features). That's why you need -sysroot option. You generally need this option when cross compiling to provide a root for standard search paths for headers and libraries. You didn't need it because you didn't have dev-pkgs image feature that changes your sysroot
"So third question – what does this errors mean?"
Even your the most basic hello world code gets linked with standard c library(if you didn't specify otherwise). libm.so, libpthread.so and crt1.o files are parts of libc library and come with libc dev package. So the linker can't see the standard library directories when it looks from your old sysroot
Why did I not need a .so file for pthread, but needed it for all other libraries?
A cross compiler will normally come with a C Runtime (including pthread), typically in a directory that is part of the cross compiler installation.
The linker has built-in search paths for libraries. These are relative to the sysroot, which would by default be set to search the cross compiler's own included target C runtime. If you added any -L options it would search those first and then move on to these pre-defined directories.
When you linked against pthread it would have found at least libpthread.a in the cross compiler's library directory.
Or more generally: when do I need a .so file, and when is a .so.X file enough?
Shared libraries on Linux typically have a major and a minor version number. Libraries are ABI compatible between different minor versions with the same major version, but not between major versions. Sometimes there are three levels of versions, but the principle is similar.
When installing libraries it is common to install the actual file with the full name, eg. libmy.so.1.2, then provide symlinks to libmy.so.1 and libmy.so.
If you are linking an application that can work with any library version, then you would just specify the name, e.g. -lmy. In that case you would need the symlink from libmy.so to libmy.so.1.
If you required a specific version you would put -l:libmy.so.1. The ':' indicates a literal file name.
Linker scripts may affect things and may result in specific versions being selected even when you do specify the short name.
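A quick sketch of the two forms (library name and path are illustrative):

# link against whatever version the development symlink libmy.so points at
g++ main.o -o app -L/path/to/libs -lmy

# link against a specific versioned file by its literal name (GNU ld)
g++ main.o -o app -L/path/to/libs -l:libmy.so.1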
Why do I need the --sysroot option in this case? When do I need to use this option in general?
What --sysroot does is prepend the given path onto all the search directories which would normally be used to search for includes and libraries. It is most useful when cross compiling (as you are doing now) to get the compiler and the linker to search inside the target root instead of the build host's own root.
If you have specified a sysroot you probably do not need to specify include paths via -I or linker paths via -L, assuming that the files are within their normal spots inside your target root.
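Applied to the question, that would mean the link line could probably shrink to something like this (a sketch built from the paths in the question, untested):

arm-linux-gnueabihf-g++ --sysroot=/home/stas/ZedboardPetalinuxFS -o test.elf ./src/main.o -lsndfile -lpthread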
And why didn't I have to add all the dependencies to the linking command this time?
One possible scenario is that the first time, sndfile was statically rather than dynamically linked. This would happen if your first root image had only sndfile.a in the lib dir, or elsewhere on the search path. To then satisfy the requirements of sndfile.a you would also need to link the other libs.
When linking against sndfile.so the dependencies will automatically get loaded via the dynamic linking process.
That's just a working theory at present.
So, third question: what do these errors mean?
They mean it cannot find even the C runtime library to link.
As described for the first question, the linker was previously finding the C runtime supplied by the cross compiler via the pre-defined search paths (relative to the predefined sysroot).
You disturbed this by supplying your own sysroot. It was now only searching the target root. Since this target root filesystem did not have development libs installed, there was no C runtime there to find.
You are doing several things wrong:
It looks like you are not using the environment variables, but calling the cross-compiler directly. So, instead of compiling with arm-linux-gnueabihf-g++ ..., you should do $CXX .... CXX is the environment variable set by the Yocto script to set up the environment for cross-compilation. Using $CXX, you do not need to pass --sysroot manually.
You should not link directly to the pthread library with -lpthread. You should use -pthread.
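A small sketch of what that might look like after sourcing the environment script that Yocto generates (the script name below is an assumption; adjust it to your SDK):

# source the cross-compile environment (file name is illustrative)
. ./environment-setup-cortexa9hf-neon-poky-linux-gnueabi

# $CXX already carries the target triplet, --sysroot and tuning flags
$CXX -o test.elf ./src/main.o -lsndfile -pthread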
I'm trying to compile my program and it returns this error:
/usr/bin/ld: cannot find -l<nameOfTheLibrary>
In my makefile I use the g++ command and link to my library, which is a symbolic link to the library located in another directory.
Is there an option to add to make it work, please?
To figure out what the linker is looking for, run it in verbose mode.
For example, I encountered this issue while trying to compile MySQL with ZLIB support. I was receiving an error like this during compilation:
/usr/bin/ld: cannot find -lzlib
I did some Googling and kept coming across different issues of the same kind where people would say to make sure the .so file actually exists, and if it doesn't, then create a symlink to the versioned file, for example, zlib.so.1.2.8. But when I checked, zlib.so DID exist. So, I thought, surely that couldn't be the problem.
I came across another post on the Internets that suggested to run make with LD_DEBUG=all:
LD_DEBUG=all make
Although I got a TON of debugging output, it wasn't actually helpful. It added more confusion than anything else. So, I was about to give up.
Then, I had an epiphany. I thought to actually check the help text for the ld command:
ld --help
From that, I figured out how to run ld in verbose mode (imagine that):
ld -lzlib --verbose
This is the output I got:
==================================================
attempt to open /usr/x86_64-linux-gnu/lib64/libzlib.so failed
attempt to open /usr/x86_64-linux-gnu/lib64/libzlib.a failed
attempt to open /usr/local/lib64/libzlib.so failed
attempt to open /usr/local/lib64/libzlib.a failed
attempt to open /lib64/libzlib.so failed
attempt to open /lib64/libzlib.a failed
attempt to open /usr/lib64/libzlib.so failed
attempt to open /usr/lib64/libzlib.a failed
attempt to open /usr/x86_64-linux-gnu/lib/libzlib.so failed
attempt to open /usr/x86_64-linux-gnu/lib/libzlib.a failed
attempt to open /usr/local/lib/libzlib.so failed
attempt to open /usr/local/lib/libzlib.a failed
attempt to open /lib/libzlib.so failed
attempt to open /lib/libzlib.a failed
attempt to open /usr/lib/libzlib.so failed
attempt to open /usr/lib/libzlib.a failed
/usr/bin/ld.bfd.real: cannot find -lzlib
Ding, ding, ding...
So, to finally fix it so I could compile MySQL with my own version of ZLIB (rather than the bundled version):
sudo ln -s /usr/lib/libz.so.1.2.8 /usr/lib/libzlib.so
Voila!
If your library name is, say, libxyz.so and it is located on a path, say:
/home/user/myDir
then to link it to your program:
g++ -L/home/user/myDir -lxyz myprog.cpp -o myprog
There does not seem to be any answer which addresses the very common beginner problem of failing to install the required library in the first place.
On Debianish platforms, if libfoo is missing, you can frequently install it with something like
apt-get install libfoo-dev
The -dev version of the package is required for development work, even trivial development work such as compiling source code to link to the library.
The package name will sometimes require some decorations (libfoo0-dev? foo-dev without the lib prefix? etc), or you can simply use your distro's package search to find out precisely which packages provide a particular file.
(If there is more than one, you will need to find out what their differences are. Picking the coolest or the most popular is a common shortcut, but not an acceptable procedure for any serious development work.)
For other packaging systems (most notably RPM-based ones) similar procedures apply, though the details will be different.
Compile Time
When g++ says cannot find -l<nameOfTheLibrary>, it means that g++ looked for the file lib{nameOfTheLibrary}.so, but couldn't find it in the shared library search path, which by default points to /usr/lib and /usr/local/lib, and possibly a few other places.
To resolve this problem, you should either provide the library file (lib{nameOfTheLibrary}.so) in those search paths or use the -L command option. -L{path} tells g++ (actually ld) to look for library files in {path} in addition to the default paths.
Example: Assuming you have a library at /home/taylor/libswift.so, and you want to link your app to this library. In this case you should supply the g++ with the following options:
g++ main.cpp -o main -L/home/taylor -lswift
Note 1: the -l option takes the library name without the lib prefix and the .so suffix.
Note 2: In some cases, the library file name is followed by its version, for instance libswift.so.1.2. In these cases, g++ also cannot find the library file. A simple workaround to fix this is creating a symbolic link to libswift.so.1.2 called libswift.so.
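Continuing the libswift example, that workaround is simply (a sketch):

# let -lswift resolve to the versioned file
ln -s /home/taylor/libswift.so.1.2 /home/taylor/libswift.so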
Runtime
When you link your app to a shared library, that library needs to stay available whenever you run the app. At runtime your app (actually the dynamic linker) looks for its libraries in LD_LIBRARY_PATH, an environment variable which stores a list of paths.
Example: In the case of our libswift.so example, the dynamic linker cannot find libswift.so via LD_LIBRARY_PATH (which points to the default search paths). To fix the problem you should append the path libswift.so is in to that variable.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/taylor
When compiling with g++ via make, define LIBRARY_PATH if it is not appropriate to change the Makefile with the -L option. I had put my extra library in /opt/lib, so I did:
$ export LIBRARY_PATH=/opt/lib/
and then ran make for successful compilation and linking.
To run the program with a shared library define:
$ export LD_LIBRARY_PATH=/opt/lib/
before executing the program.
First, you need to know the naming rule behind -lxxx:
/usr/bin/ld: cannot find -lc
/usr/bin/ld: cannot find -lltdl
/usr/bin/ld: cannot find -lXtst
-lc means libc.so, -lltdl means libltdl.so, -lXtst means libXtst.so.
So, it is lib + lib-name + .so
Once we know the name, we can use locate to find the path of this libxxx.so file.
$ locate libiconv.so
/home/user/anaconda3/lib/libiconv.so # <-- right here
/home/user/anaconda3/lib/libiconv.so.2
/home/user/anaconda3/lib/libiconv.so.2.5.1
/home/user/anaconda3/lib/preloadable_libiconv.so
/home/user/anaconda3/pkgs/libiconv-1.14-0/lib/libiconv.so
/home/user/anaconda3/pkgs/libiconv-1.14-0/lib/libiconv.so.2
/home/user/anaconda3/pkgs/libiconv-1.14-0/lib/libiconv.so.2.5.1
/home/user/anaconda3/pkgs/libiconv-1.14-0/lib/preloadable_libiconv.so
If you cannot find it, you need to install it with yum (I use CentOS). Usually you have this file, but it is not linked to the right place.
Link it to the right place, usually /lib64 or /usr/lib64:
$ sudo ln -s /home/user/anaconda3/lib/libiconv.so /usr/lib64/
Done!
ref: https://i-pogo.blogspot.jp/2010/01/usrbinld-cannot-find-lxxx.html
When you compile your program you must supply the path to the library; in g++ use the -L option:
g++ myprogram.cc -o myprogram -lmylib -L/path/foo/bar
I had this problem when compiling LXC on a fresh VM with CentOS 7.8. I tried all of the above and failed. Some suggested removing the -static flag from the compiler configuration, but I didn't want to change anything.
The only thing that helped was to install glibc-static and retry. Hope that helps someone.
Check the location of your library, for example libxxx.so:
locate libxxx.so
If it is not in the /usr/lib folder, type this:
sudo cp yourpath/libxxx.so /usr/lib
Done.
Apart from the answers already given, it may also be the case that the *.so file exists but is not named properly. Or it may be the case that the *.so file exists but is owned by another user or root.
Issue 1: Improper name
If you are linking the file as -l<nameOfLibrary>
then the library file name MUST be of the form lib<nameOfLibrary>.so
If you only have a <nameOfLibrary>.so file, rename it!
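For example (hypothetical name):

# -lxyz only resolves to a file named libxyz.so
mv xyz.so libxyz.so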
Issue 2: Wrong owner
To verify that this is not the problem - do
ls -l /path/to/.so/file
If the file is owned by root or another user, you need to do
sudo chown yourUserName:yourUserName /path/to/.so/file
Here is Ubuntu information of my laptop.
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.2 LTS
Release: 18.04
Codename: bionic
I use locate to find the .so files for boost_filesystem and boost_system
locate libboost_filesystem
locate libboost_system
Then symlink the .so files into /usr/lib, renamed to plain .so:
sudo ln -s /usr/lib/x86_64-linux-gnu/libboost_filesystem.so.1.65.1 /usr/lib/libboost_filesystem.so
sudo ln -s /usr/lib/x86_64-linux-gnu/libboost_system.so.1.65.1 /usr/lib/libboost_system.so
Done! R package velocyto.R was successfully installed!
This error may also be brought about if the symbolic link is to a dynamic library, .so, but for legacy reasons -static appears among the link flags. If so, try removing it.
The library I was trying to link to turned out to have a non-standard name (i.e. wasn't prefixed with 'lib'), so they recommended using a command like this to compile it -
gcc test.c -Iinclude lib/cspice.a -lm
I encountered the same error message.
I built cmocka as a .so and tried to link it to my executable.
But ld always complains below:
/usr/bin/ld: cannot find -lcmocka
It turns out that there are 3 files generated after cmocka is built:
libcmocka.so
libcmocka.so.0
libcmocka.so.0.7.0
Files 1 and 2 are symbolic links and only 3 is the real file.
I had only copied file 1 to my library folder, so ld failed to find file 3, which the symlink points at.
After I copied all 3, ld worked.
I am attempting to move my module to Linux Apache 2.4 and I am having linking issues. On Windows a libhttpd.lib is available to link against, as well as the apr/apr-util libraries. lib* httpd, apr, and aprutil are all statically linked in my Windows installation. I want to do the same for the Linux installation.
According to the limited documentation available I am unable to use APXS because my module is written in C++.
I am having difficulties finding the archive files for the server on Linux. What do I need to link against for my module to work?
The source is able to link and execute on a Windows host.
Sample errors:
/home/ec2-user/httpd-2.4.2/srclib/apr/locks/unix/proc_mutex.c:367: undefined reference to `pthread_mutexattr_init'
/home/ec2-user/httpd-2.4.2/srclib/apr/locks/unix/proc_mutex.c:374: undefined reference to `pthread_mutexattr_setpshared'
/home/ec2-user/httpd-2.4.2/srclib/apr/locks/unix/proc_mutex.c:384: undefined reference to `pthread_mutexattr_setrobust_np'
/home/ec2-user/httpd-2.4.2/srclib/apr/locks/unix/proc_mutex.c:393: undefined reference to `pthread_mutexattr_setprotocol'
/home/ec2-user/httpd-2.4.2/srclib/apr/locks/unix/proc_mutex.c:414: undefined reference to `pthread_mutexattr_destroy'
/home/ec2-user/httpd-2.4.2/srclib/apr/locks/unix/proc_mutex.c:408: undefined reference to `pthread_mutexattr_destroy'
Thanks
These errors mean you didn't add -pthread to your compile command line, so you're not getting the pthread library linked in.
(Note: it's -pthread, not -lpthread - it's not just a linker option.)
For those building Apache modules in C++ who want dynamic linking, here's the g++ command line I used to successfully build a module; briefly tested on Apache 2.2.22/CentOS 6.2.
g++ [my files].cpp -I/httpd/include/ -I/httpd/srclib/apr/include/
-I/httpd/srclib/apr-util/include/ -I/usr/include/ -I/usr/include/apr-1/
-I/httpd/os/unix/ -shared -fPIC -o mod_mymodule.so
I'm an Apache/Linux programming noob and was unable to find this info anywhere else; thanks to the OP's solution I was able to finish the job after a few days of frustration.
Here's also a link which helped explain how to work around the 'unresolved reference' linker issue when the linker couldn't find the Apache functions contained within the httpd server core code (libhttpd.lib on Windows), which doesn't exist on *nix unless you make it manually like the OP did. Basically, the answer was to use the -shared flag so these references are automagically resolved at run time.
http://www.yolinux.com/TUTORIALS/LibraryArchives-StaticAndDynamic.html
Don't forget the extern "C" in your module code, and build in DSO support when building HTTPD.
Hope this helps someone else!
So anyone else searching for this may get the answer.
Download the source for whatever Apache version (higher than 2.2) you need from the Apache httpd site to your home directory
Unpack it
Configure with everything, or simply specify no options (you can reconfigure later for the actual build)
Perform a make on the source (don't do a make install yet! You just need the object files)
Create a directory that will hold the libraries and source for your own module. Say "foo" at the top level of your home dir (or any you desire)
Assuming your home dir: execute (find ~/httpdX.X -name '*.o' -exec cp {} ~/foo \;). The command is in parentheses. This will copy all compiled objects to foo.
Execute (find ~/foo -name '*.o' -exec ar r libhttpd.a {} \;), which will create an archive of the code for you.
Then just include these switches in your compile definition for gcc or g++: (-Wl,-Bstatic -lhttpd -lpcre -lpcreposix -lapr-1 -laprutil-1 -Wl,-Bdynamic -pthread -ldl -lcrypt). A full command sketch is given below.
Clean up your foo directory if you like by running rm on the *.o files
You do not need to compile statically like I needed to, but I wanted to be able to move my module to any Linux host without worrying about the necessary components. Apache needs pcre (regex), apr (all libraries), threads (proc/thread mutexes), dl (dynamic loading), and crypt (apr passwords) to work. Since thread, dl, and crypt will most likely already be on the machine, I chose not to compile them statically.
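Putting the pieces together, the module build line might look roughly like this (combining the include paths from the earlier dynamic-linking answer with the switches above; all paths, including the -L pointing at the foo directory that holds libhttpd.a, are assumptions for illustration):

g++ mod_mymodule.cpp -I/httpd/include -I/httpd/srclib/apr/include -I/httpd/srclib/apr-util/include \
    -shared -fPIC -o mod_mymodule.so \
    -L$HOME/foo -Wl,-Bstatic -lhttpd -lpcre -lpcreposix -lapr-1 -laprutil-1 \
    -Wl,-Bdynamic -pthread -ldl -lcrypt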
Happy hunting. I hope my never-ending story over 3 days helps someone else!