The program is part of the Xenomai test suite, cross-compiled on a Linux PC with the Linux+Xenomai ARM toolchain.
# echo $LD_LIBRARY_PATH
/lib
# ls /lib
ld-2.3.3.so libdl-2.3.3.so libpthread-0.10.so
ld-linux.so.2 libdl.so.2 libpthread.so.0
libc-2.3.3.so libgcc_s.so libpthread_rt.so
libc.so.6 libgcc_s.so.1 libstdc++.so.6
libcrypt-2.3.3.so libm-2.3.3.so libstdc++.so.6.0.9
libcrypt.so.1 libm.so.6
# ./clocktest
./clocktest: error while loading shared libraries: libpthread_rt.so.1: cannot open shared object file: No such file or directory
Is the .1 at the end part of the filename? What does that mean anyway?
Your library is a dynamic library.
You need to tell the operating system where it can locate it at runtime.
To do so, follow these easy steps:
Find where the library is located, if you don't already know:
sudo find / -name the_name_of_the_file.so
Check for the existence of the dynamic library path environment variable (LD_LIBRARY_PATH):
echo $LD_LIBRARY_PATH
If nothing is displayed, add a default path value (or skip this if you prefer):
LD_LIBRARY_PATH=/usr/local/lib
We add the desired path, export it and try the application.
Note that the path should be the directory that contains path.so.something, not the file itself. So if path.so.something is in /my_library/path.so.something, it should be:
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/my_library/
export LD_LIBRARY_PATH
./my_app
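As a quick sanity check (assuming the binary is called my_app, as above, and that ldd is available on the system), ldd shows which shared libraries the loader can and cannot resolve:
ldd ./my_app                        # lists every shared library the binary needs and where each resolves
ldd ./my_app | grep "not found"     # prints only the libraries the loader cannot locate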
Here are a few solutions you can try:
ldconfig
As AbiusX pointed out: If you have just now installed the library, you may simply need to run ldconfig.
sudo ldconfig
ldconfig creates the necessary links and cache to the most recent
shared libraries found in the directories specified on the command
line, in the file /etc/ld.so.conf, and in the trusted directories
(/lib and /usr/lib).
Usually your package manager will take care of this when you install a new library, but not always, and it won't hurt to run ldconfig even if that is not your issue.
Dev package or wrong version
If that doesn't work, I would also check out Paul's suggestion and look for a "-dev" version of the library. Many libraries are split into dev and non-dev packages. You can use this command to look for it:
apt-cache search <libraryname>
This can also help if you simply have the wrong version of the library installed. Some libraries are published in different versions simultaneously, for example, Python.
Library location
If you are sure that the right package is installed, and ldconfig didn't find it, it may just be in a nonstandard directory. By default, ldconfig looks in /lib, /usr/lib, and directories listed in /etc/ld.so.conf and $LD_LIBRARY_PATH. If your library is somewhere else, you can either add the directory on its own line in /etc/ld.so.conf, append the library's path to $LD_LIBRARY_PATH, or move the library into /usr/lib. Then run ldconfig.
To find out where the library is, try this:
sudo find / -iname "*libraryname*.so*"
(Replace libraryname with the name of your library)
If you go the $LD_LIBRARY_PATH route, you'll want to put that into your ~/.bashrc file so it will run every time you log in:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/library
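If you prefer the /etc/ld.so.conf route mentioned above, a drop-in file under /etc/ld.so.conf.d/ works just as well; the file name mylib.conf and the directory /opt/mylib below are placeholders for your own:
echo '/opt/mylib' | sudo tee /etc/ld.so.conf.d/mylib.conf
sudo ldconfig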
Update
While what I write below is true as a general answer about shared libraries, I think the most frequent cause of these sorts of messages is that you've installed a package, but not the -dev version of that package.
Well, it's not lying - there is no libpthread_rt.so.1 in that listing. You probably need to re-configure and re-build it so that it depends on the library you have, or install whatever provides libpthread_rt.so.1.
Generally, the numbers after the .so are version numbers, and you'll often find that they are symlinks to each other. So if you have version 1.0 of libfoo.so, you'll have a real file libfoo.so.1.0, and symlinks libfoo.so and libfoo.so.1 pointing to libfoo.so.1.0. If you then install version 1.1 without removing the other one, you'll have a libfoo.so.1.1, and libfoo.so.1 and libfoo.so will now point to the new one, but any code that requires that exact version can still use the libfoo.so.1.0 file. Code that just relies on the version 1 API, but doesn't care whether it's 1.0 or 1.1, will specify libfoo.so.1. As orip pointed out in the comments, this is explained well here.
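As a hypothetical illustration (the library name and paths are invented), the on-disk layout for such a versioned library typically looks like this:
ls -l /usr/lib/libfoo*
# libfoo.so -> libfoo.so.1        development symlink, used at link time via -lfoo
# libfoo.so.1 -> libfoo.so.1.0    soname symlink, what the runtime loader asks for
# libfoo.so.1.0                   the real library file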
In your case, you might get away with symlinking libpthread_rt.so.1 to libpthread_rt.so. No guarantees that it won't break your code and eat your TV dinners, though.
You need to ensure that you specify the library path during
linking when you compile your .c file:
gcc -I/usr/local/include xxx.c -o xxx -L/usr/local/lib -Wl,-R/usr/local/lib
The -Wl,-R part tells the resulting binary to also look for the library
in /usr/local/lib at runtime before trying to use the one in /usr/lib/.
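To confirm that the run-time search path actually got embedded (assuming the binary is named xxx, as in the command above), you can inspect its dynamic section:
readelf -d xxx | grep -E 'RPATH|RUNPATH'    # should print an entry containing /usr/local/lib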
Try adding LD_LIBRARY_PATH, which indicates the library search paths, to your ~/.bashrc file:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path_to_your_library
It works!
The linux.org reference page explains the mechanics, but doesn't explain any of the motivation behind it :-(
For that, see Sun Linker and Libraries Guide
In addition, note that "external versioning" is largely obsolete on Linux, because symbol versioning (a GNU extension) allows multiple incompatible versions of the same function to be present in a single library. This extension is how glibc has kept the same external version, libc.so.6, for the last 10 years.
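For instance, on a typical x86_64 Debian/Ubuntu system (the libc path below may differ on other distributions), you can see several versions of the same symbol exported side by side:
objdump -T /lib/x86_64-linux-gnu/libc.so.6 | grep ' memcpy'
# expect multiple lines for memcpy, one per symbol version (e.g. GLIBC_2.2.5 and GLIBC_2.14)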
cd /home/<user_name>/
sudo vi .bash_profile
Add these lines at the end:
LD_LIBRARY_PATH=/usr/local/lib:<any other paths you want>
export LD_LIBRARY_PATH
Another possible solution depending on your situation.
If you know that libpthread_rt.so.1 is the same as libpthread_rt.so, then you can create a symlink with:
ln -s /lib/libpthread_rt.so /lib/libpthread_rt.so.1
Then ls -l /lib should now show the symlink and what it points to.
I had a similar error, and setting LD_LIBRARY_PATH in ~/.bashrc did not fix it.
What solved my issue was adding a .conf file and loading it.
Open a terminal and switch to the root user (su).
gedit /etc/ld.so.conf.d/myapp.conf
Add your library path (e.g. /usr/local/lib) to this file and save it.
Run the following command to activate the path:
ldconfig
Verify Your New Library Path:
ldconfig -v | less
If this shows your library files, then you are good to go.
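To check for one specific library instead of paging through the whole list (mylib below is a placeholder for your library's name), you can also query the loader cache directly:
ldconfig -p | grep mylib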
running:
sudo ldconfig
was enough to fix my issue.
I had this error when running my application with Eclipse CDT on Linux x86.
To fix this:
In Eclipse:
Run as -> Run configurations -> Environment
Set the path
LD_LIBRARY_PATH=/my_lib_directory_path
I wanted to add: if your libraries are in a non-standard path, run ldconfig followed by that path.
For instance I had to run:
sudo ldconfig /opt/intel/oneapi/mkl/2021.2.0/lib/intel64
to make R compile against Intel MKL
All I had to do was run:
sudo apt-get install libfontconfig1
I was in the /usr/lib/x86_64-linux-gnu directory and it worked perfectly.
Try to install lib32z1:
sudo apt-get install lib32z1
If you are running your application on Microsoft Windows, the path to dynamic libraries (.dll) needs to be defined in the PATH environment variable.
If you are running your application on UNIX, the path to dynamic libraries (.so) needs to be defined in the LD_LIBRARY_PATH environment variable.
The error occurs because the system cannot find the library file mentioned. Take the following steps:
Running locate libpthread_rt.so.1 will list the paths of all files with that name. Let's suppose one of them is /home/user/loc.
Copy the path and run cd /home/USERNAME. Replace USERNAME with the name of the current active user you want to run the file as.
Run vi .bash_profile and, at the end of the LD_LIBRARY_PATH value, just before the final ., add /home/user/loc: (so the value ends in something like /lib:/home/user/loc:.). Save the file.
Close the terminal and restart the application. It should run.
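A minimal sketch of what the relevant ~/.bash_profile lines might end up looking like (using the example path /home/user/loc from above; adjust to the path locate actually reported):
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/lib:/home/user/loc:.
export LD_LIBRARY_PATH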
I use Ubuntu 18.04
Installing the corresponding -dev package worked for me,
sudo apt install libgconf2-dev
Before installing the above package, I was getting the below error:
turtl: error while loading shared libraries: libgconf-2.so.4: cannot open shared object file: No such file or directory
I got this error, and I think it's for the same reason as yours:
error while loading shared libraries: libnw.so: cannot open shared object
file: No such file or directory
Try this. Fix permissions on files:
sudo su
cd /opt/Popcorn (or wherever it is)
chmod -R 555 * (755 if not ok)
chown -R root:root *
A similar problem can be found here.
I've tried the mentioned solution and it actually works.
The solutions in the previous answers may work, but here is an easy way to fix it: reinstall the package that provides the library, in this case libwbclient.
On Fedora:
dnf reinstall libwbclient
You can read about libraries here:
https://domiyanyue.medium.com/c-development-tutorial-4-static-and-dynamic-libraries-7b537656163e
I am trying to use opencv in a project, and am running into problems 'installing' it. I have extracted the opencv files and have created a small test program:
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
int main(int argc, char **argv){
cv::Mat im=cv::imread((argc==2)? argv[1]: "testing.jpg",1);
if (im.empty()){
std::cout << "Cannot open image." << std::endl;
} else {
cv::imshow("image",im);
cv::waitKey(0);
}
return 0;
}
To compile the program I have used the command below:
g++ -I"../../PortableGit/opt/opencv/build/include/" -L"../../PortableGit/opt/opencv/build/x64/vc15/lib" main.cpp -lopencv_core -lopencv_highgui -o main
I get the errors below:
In file included from ../../PortableGit/opt/opencv/build/include/opencv2/core.hpp:3293:0,
from ../../PortableGit/opt/opencv/build/include/opencv2/highgui.hpp:46,
from ../../PortableGit/opt/opencv/build/include/opencv2/highgui/highgui.hpp:48,
from main.cpp:1:
../../PortableGit/opt/opencv/build/include/opencv2/core/utility.hpp:714:14: error: 'recursive_mutex' in namespace 'std' does not name
a type
typedef std::recursive_mutex Mutex;
^~~~~~~~~~~~~~~
../../PortableGit/opt/opencv/build/include/opencv2/core/utility.hpp:715:25: error: 'Mutex' is not a member of 'cv'
typedef std::lock_guard<cv::Mutex> AutoLock;
^~
../../PortableGit/opt/opencv/build/include/opencv2/core/utility.hpp:715:25: error: 'Mutex' is not a member of 'cv'
../../PortableGit/opt/opencv/build/include/opencv2/core/utility.hpp:715:34: error: template argument 1 is invalid
typedef std::lock_guard<cv::Mutex> AutoLock;
I believe that it has something to do with mingw binaries no longer being included with opencv. I am missing the opencv/build/x86/mingw directory.
My questions are:
How do I 'install' opencv and use it without also installing some sort of IDE and/or CMake? (I prefer to use vim and the command line.)
Once installed, what command do I use to compile and link a program with opencv?
Any help is appreciated.
Edit:
This appears to be a problem with GCC's implementation of threads on Windows. Using MinGW-w64 instead of MinGW fixed the std::recursive_mutex issue, but now the linker cannot find the proper files.
/i686-w64-mingw32/bin/ld.exe: cannot find -lopencv_core
/i686-w64-mingw32/bin/ld.exe: cannot find -lopencv_highgui
After quite a bit of trying things out, this is what I got to work. Oddly, following the LINUX guide to install opencv worked better than the WINDOWS guide, even though I have a Windows computer.
Guide to Installing OpenCV on Windows Without VS
Heads-up: this is a multi-step process; three separate tools are required. Be prepared for this to take a while.
Part 1: Get everything ready
Download MinGW-w64.
On the downloads page, click on the "MinGW-w64-builds" option. Do not click on the "win-builds" option.
The reason MinGW-w64 has to be used is that it is a newer version of the MinGW compiler suite that has been improved for Windows. This means it supports the POSIX thread system, whereas the standard MinGW compiler only supports the win32 thread system. OpenCV relies on the POSIX thread system, necessitating the MinGW-w64 compiler. (A quick way to check which thread model a compiler uses is shown after this step.)
Extract the MinGW-w64 zip folder to a directory. In my case it's PortableGit/opt/MinGW-w64.
At this point, you can add the MinGW-w64/mingw32/bin folder to your path (assuming that this won't cause any conflicts). If you do so, you will not have to constantly specify the g++ executable's directory to run it. This is up to your discretion.
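To check which thread model a given g++ build uses (a quick sanity check; the g++.exe path below is the one used later in this guide and depends on where you extracted MinGW-w64), ask the compiler itself:
/PortableGit/opt/MinGW-w64/mingw32/bin/g++.exe -v 2>&1 | grep -i "thread model"
# MinGW-w64 posix builds report "Thread model: posix"; plain MinGW reports "Thread model: win32"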
Download an opencv release.
Do not download the package for Windows; click the button that says "Sources".
Extract the opencv sources zip folder to a directory. In my case it's PortableGit/opt/opencv-4.3.0.
Also download the opencv_contrib source files directly from the repository.
Extract that folder and place it inside the top level opencv folder: PortableGit/opt/opencv-4.3.0/opencv_contrib in my case.
Download CMake.
I downloaded the zip folder, but you can download the installer if you wish.
Extract the CMake zip folder if you downloaded that, or run the installer. I put my CMake folder here: PortableGit/opt/cmake-3.17.1-win32-x86
At this point, you can add the cmake-3.17.1-win32-x86/bin folder to your path. (Assuming that this won't cause any conflicts.) If you do so, you will not have to constantly specify the cmake executable directory to run it. This is up to your discretion.
Part 2: Build OpenCV
Navigate to the opencv directory and create a build folder and cd into it.
mkdir build && cd build
Run the following export commands.
export CC=/PortableGit/MinGW-w64/mingw32/bin/gcc.exe
export CXX=/PortableGit/MinGW-w64/mingw32/bin/g++.exe
This is to make sure the next cmake command uses the proper compilers.
Run the following cmake command from within that folder:
PortableGit/opt/cmake-3.17.1-win32-x86/cmake.exe -G "MinGW Makefiles" -DCMAKE_BUILD_TYPE=Release -DOPENCV_VS_VERSIONINFO_SKIP=1 -DOPENCV_EXTRA_MODULES_PATH="/PortableGit/opt/opencv-4.3.0/opencv_contrib/modules/" ..
The -G flag specifies that we are creating build files for the MinGW compiler
The -DCMAKE_BUILD_TYPE=Release specifies that we are building the release version of opencv and not the debug version.
-DOPENCV_EXTRA_MODULES_PATH needs to be set to the modules folder inside the opencv_contrib folder. For me it was PortableGit/opt/opencv-4.3.0/opencv_contrib/modules.
-DOPENCV_VS_VERSIONINFO_SKIP=1 tells the build not to include version info. If it is not set, the compiler will throw an error complaining about missing version files. (Shown below for reference.)
gcc: error: long: No such file or directory
mingw32-make[2]: *** [modules\core\CMakeFiles\opencv_core.dir\build.make:1341:
modules/core/CMakeFiles/opencv_core.dir/vs_version.rc.obj] Error 1
If successful, the cmake configuration step will finish without errors.
Now run this command, again from the build folder:
/PortableGit/opt/MinGW-w64/mingw32/bin/mingw32-make.exe -j7
mingw32-make.exe is the Windows equivalent of the Linux make command.
The -j7 option runs the process with a maximum of 7 threads.
This will take a while! It took my laptop ~20 minutes to complete.
If the make command ends in an error, make sure to reset your build directory before continuing any troubleshooting. This is done with this series of commands:
rm -rf build
mkdir build
cd build
Part 3: Using OpenCV
To use the opencv library that you just compiled in a project of your own, compile the project with these flags from your project's main directory.
Remember that your compiler now has to be set to the mingw-w64 compiler for opencv support.
I added indentation and newlines for readability, but when entering this in the terminal do not include the newlines or indents.
The number at the end of the linker options may change depending on the version of opencv you downloaded. I downloaded opencv-4.3.0, making my number 430, but yours may be different.
PortableGit/opt/MinGW-w64/bin/g++.exe
-I../../PortableGit/opt/opencv-4.3.0/include/
-I../../PortableGit/opt/opencv-4.3.0/build/
-I../../PortableGit/opt/opencv-4.3.0/modules/core/include/
-I../../PortableGit/opt/opencv-4.3.0/modules/calib3d/include/
-I../../PortableGit/opt/opencv-4.3.0/modules/dnn/include/
-I../../PortableGit/opt/opencv-4.3.0/modules/features2d/include/
-I../../PortableGit/opt/opencv-4.3.0/modules/flann/include/
-I../../PortableGit/opt/opencv-4.3.0/modules/gapi/include/
-I../../PortableGit/opt/opencv-4.3.0/modules/highgui/include/
-I../../PortableGit/opt/opencv-4.3.0/modules/imgcodecs/include/
-I../../PortableGit/opt/opencv-4.3.0/modules/imgproc/include/
-I../../PortableGit/opt/opencv-4.3.0/modules/ml/include/
-I../../PortableGit/opt/opencv-4.3.0/modules/objdetect/include/
-I../../PortableGit/opt/opencv-4.3.0/modules/photo/include/
-I../../PortableGit/opt/opencv-4.3.0/modules/stitching/include/
-I../../PortableGit/opt/opencv-4.3.0/modules/ts/include/
-I../../PortableGit/opt/opencv-4.3.0/modules/video/include/
-I../../PortableGit/opt/opencv-4.3.0/modules/videoio/include/
-I../../PortableGit/opt/opencv-4.3.0/modules/world/include/
-L../../PortableGit/opt/opencv-4.3.0/build/lib/
*.hpp
*.cpp
-lopencv_calib3d430
-lopencv_core430
-lopencv_dnn430
-lopencv_features2d430
-lopencv_flann430
-lopencv_highgui430
-lopencv_imgcodecs430
-lopencv_imgproc430
-lopencv_ml430
-lopencv_objdetect430
-lopencv_photo430
-lopencv_stitching430
-lopencv_video430
-lopencv_videoio430
-o
main
Or you could download VS. Up to you. Hope this helps.
Correction for JackCamichael's answer:
Those two commands won't work in a Windows command prompt:
export CC=/PortableGit/MinGW-w64/mingw32/bin/gcc.exe
export CXX=/PortableGit/MinGW-w64/mingw32/bin/g++.exe
They should be:
setx -m CC C:\msys64\mingw64\bin\gcc.exe
setx -m CXX C:\msys64\mingw64\bin\g++.exe
C:\msys64\mingw64\bin is the mingw64 path on my machine.
I am trying to compile tensorflow with a custom clang/llvm toolchain and using clang's native libc++ (instead of borrowing Gcc's stdlibc++).
It looks like Bazel simply assumes that every clang will use GCC's libraries, because I get these errors:
$ bazel build --cxxopt=-std=c++11 --cxxopt=-stdlib=libc++ tensorflow:libtensorflow.so
INFO: Found 1 target...
INFO: From Compiling
external/protobuf/src/google/protobuf/compiler/js/embed.cc [for host]:
external/protobuf/src/google/protobuf/compiler/js/embed.cc:37:12:
warning: unused variable 'output_file' [-Wunused-const-variable]
const char output_file[] = "well_known_types_embed.cc";
^
1 warning generated.
ERROR: /home/hbucher/.cache/bazel/_bazel_hbucher/ad427c7fddd5b68de5e1cfaa7cd8c8cc/external/com_googlesource_code_re2/BUILD:11:1: undeclared inclusion(s) in rule '@com_googlesource_code_re2//:re2':
this rule is missing dependency declarations for the following files included by 'external/com_googlesource_code_re2/re2/bitstate.cc':
'/home/hbucher/install/include/c++/v1/stddef.h'
'/home/hbucher/install/include/c++/v1/__config'
I tried to hack into tools/cpp/CROSSTOOL inside bazel as some posts suggested to add the line
cxx_builtin_include_directory: "/home/hbucher/install/include/c++/v1"
but to no avail, it does not seem to make any difference.
Then I tried to follow a Bazel tutorial to create a custom toolchain. The text does not help much, because they are actually writing a crosstool, while what I am trying to do is tweak the existing host rules, and somehow Bazel seems to undo every attempt I make to tweak its parameters.
I have got to the point that is currently in my github repository https://github.com/HFTrader/BazelCustomToolchain
However it does not compile and I cannot even figure out how to start debugging this message.
$ bazel build --crosstool_top=@hbclang//:toolchain tensorflow:libtensorflow.so
.....................
ERROR: The crosstool_top you specified was resolved to
'@hbclang//:toolchain', which does not contain a CROSSTOOL file. You can
use a crosstool from the depot by specifying its label.
INFO: Elapsed time: 2.216s
I have appended these lines to my tensorflow/WORKSPACE
new_local_repository(
name="hbclang",
path="/home/hbucher/BazelCustomToolchain",
build_file = "/home/hbucher/BazelCustomToolchain/BUILD",
)
I have asked this question on Bazel's Google group, but they redirected me to Stack Overflow. At this point I am about to give up.
Has someone attempted to do this, or am I breaking new ground here?
Thank you.
Solved. Not in the intended way but it works for me.
export INSTALL_DIR="$HOME/install"
export CC=$INSTALL_DIR/bin/clang
export CXX=$INSTALL_DIR/bin/clang++
export CXXFLAGS="-stdlib=libc++ -L$INSTALL_DIR/lib"
export LDFLAGS="-L$INSTALL_DIR/lib -lm -lrt"
export LD_LIBRARY_PATH="/usr/lib:/lib/x86_64-linux-gnu/:$INSTALL_DIR/lib"
git clone https://github.com/tensorflow/tensorflow.git tensorflow-github
cd tensorflow-github
mkdir build-tmp && cd build-tmp
cmake ../tensorflow/contrib/cmake/
make -j4
Easy as 1-2-3 with cmake
[2020-05-24: Edit to make the answer up to date.]
TLDR: To build a project with Bazel with a specific Clang binary, and with libc++, this works for me (where INSTALL_DIR is where I've installed llvm):
CC="$INSTALL_DIR/bin/clang" \
BAZEL_CXXOPTS="-stdlib=libc++:-isystem$INSTALL_DIR/include" \
BAZEL_LINKOPTS="-stdlib=libc++" \
BAZEL_LINKLIBS="-L$INSTALL_DIR/lib:-Wl,-rpath,$INSTALL_DIR/lib:-lc++:-lm" \
bazel test //...
Background:
You can use the --repo_env option, e.g. --repo_env=CC=clang, to put these defaults into your project- or system-wide .bazelrc.
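A sketch of what such a project-wide .bazelrc might look like (the paths are hypothetical placeholders, since .bazelrc does not expand shell variables like $INSTALL_DIR):
# .bazelrc (hypothetical install location /home/me/llvm-install)
build --repo_env=CC=/home/me/llvm-install/bin/clang
build --repo_env=BAZEL_CXXOPTS=-stdlib=libc++:-isystem/home/me/llvm-install/include
build --repo_env=BAZEL_LINKOPTS=-stdlib=libc++
build --repo_env=BAZEL_LINKLIBS=-L/home/me/llvm-install/lib:-Wl,-rpath,/home/me/llvm-install/lib:-lc++:-lm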
This approach uses Bazel's C++ toolchain autoconfiguration which doesn't attempt to declare all the toolchain inputs in BUILD files. This is to simplify the configuration for the user. Therefore whenever you modify the C++ toolchain in a way that Bazel cannot know about (rebuild llvm etc.), you have to run bazel clean --expunge to flush the cache and rerun the autoconfiguration the next time.
The robust solution to specifying C++ toolchain in Bazel is to use the CcToolchainConfigInfo. See the documentation at https://docs.bazel.build/versions/master/tutorial/cc-toolchain-config.html and https://docs.bazel.build/versions/master/cc-toolchain-config-reference.html.
I'm trying to figure out a simplified way for installing C and C++ code
and libraries built from them, primarily for Linux. I think GNU Autotools, CMake, etc. would be overkill for what I'm trying to do.
An example: I've written some code which, when 'make' is run, builds to
create a library 'example.a'. (I'm going with static linking thus far.
Shared libraries will introduce still more issues; I'll crawl before I walk.) Use of the functions requires including 'example1.h', 'example2.h', etc.
With Autotools, one would run 'make install' or 'sudo make install' to
copy example.a to /usr/local/lib and the include files to /usr/local/include.
If executables had been built, those would get copied to /usr/local/bin.
I've tried adding the following lines to my makefile:
install:
	cp example.a /usr/local/lib
	cp example1.h /usr/local/include
	cp example2.h /usr/local/include
	cp example /usr/local/bin
This seems to work, mostly. Other projects are then able to "see"
the .h files and build correctly. But the .a file is not found; the only
way to get it to be linked is to explicitly give the path as
/usr/local/lib/example.a. (Though gcc insists it's looking for libraries
in that path.) So:
Question 1: Should my user-built libraries be put in /usr/local/lib? Or am
I using the wrong directory?
Now, my next problem: this has to be done with 'sudo make install'.
I'm thinking that for some projects of this ilk, where only one user will
be making use of the code in question, the libraries should go someplace
such as $HOME/lib, include files to $HOME/include, executables to
$HOME/bin.
Question 2: Is there a "standard" way to install includes/libraries for
just the current user, rather than doing so for all users? (Thus avoiding
the need for root use, and I'd think it would be a "cleaner" system if only
the user using the program had to deal with these files.)
You should probably use install to copy with explicit permissions. example.a probably isn't being found because the linker's -l option only looks for names starting with lib, e.g. libexample.a (a stupid requirement, I know).
Putting your lib, include, and bin under $HOME/.local is probably the closest thing to a per-user standard setup.
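A minimal sketch of that per-user setup (this assumes the archive has been renamed libexample.a so the linker's -lexample convention can find it, and that main.c is a hypothetical program using the library):
mkdir -p "$HOME/.local/lib" "$HOME/.local/include"
install -m 0644 libexample.a "$HOME/.local/lib"
install -m 0644 example1.h example2.h "$HOME/.local/include"
gcc -I"$HOME/.local/include" main.c -L"$HOME/.local/lib" -lexample -o main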
Edit:
I just tested it, and /usr/local/lib works fine for me:
foo.c
#include <stdio.h>
void foo() { puts("foo"); }
Compile and install:
gcc -c foo.c
ar crs libfoo.a foo.o
install -m 0755 libfoo.a /usr/local/lib #0644 should do too
In a different directory:
main.c
void foo(void);
int main() { foo(); return 0; }
Compile, link, and run:
gcc main.c -lfoo
./a.out
I already asked a quite similar question, but in fact I have now changed my mind.
I'd like to compile proftpd and add a copy of the libraries it uses to the chosen installation directory.
Let's say I define a prefix in my compilation like:
/usr/local/proftpd
Under this directory I would like to find and use only these directories:
./lib
./usr/lib
./usr/bin
./usr/.....
./etc
./var/log/proftpd
./bin
./sbin
(and other directories I will not list in full)
So the idea is that once I have all the libraries and config files in my main directory, I can tar it, send it to another server with the same OS, and use it there without installing all of proftpd's dependencies.
I know it sounds like a Windows installer that doesn't use shared libraries, but that is in fact exactly what I'm trying to accomplish.
So far I have managed to compile it on AIX using this command line:
./configure --with-modules=mod_tls:mod_sql:mod_sql_mysql:mod_sql_passwd:mod_sftp:mod_sftp_sql --without-getopt --enable-openssl --with-includes=/opt/freeware/include:/opt/freeware/include/mysql/mysql/:/home/poney2/src_proftpd/libmath_header/ --with-libraries=/opt/freeware/lib:/opt/freeware/lib/mysql/mysql/:/home/poney2/src_proftpd/libmath_lib --prefix=/home/poney/proftpd_bin --exec-prefix=/home/poney/proftpd_bin/proftpd
Before you ask why I'm doing this: I have to compile proftpd on IBM AIX with almost all modules, and that is not available in the IBM RPM binary repositories.
Using this LDFLAGS value:
LDFLAGS="-Wl,-blibpath:/a/new/lib/path"
where /a/new/lib/path contains all your libraries, does work with both the XLC and GCC compilers.
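A minimal sketch of combining this with the prefix from the question (the bundled lib directory is an assumption on my part, and on AIX the blibpath should normally still list the standard /usr/lib:/lib locations, since -blibpath replaces the default search path entirely):
export LDFLAGS="-Wl,-blibpath:/home/poney/proftpd_bin/lib:/usr/lib:/lib"
./configure --prefix=/home/poney/proftpd_bin
make && make install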