I have a piece of software properly installed on Kubuntu.
Now, I am patching and testing some of its libraries.
How can I start the software from bash so that it loads my patched libraries instead of the official libs?
e.g.:
the official libs are located in /usr/lib/
my patched libraries (used during test development) are in /home/user/dev/lib/
I tried:
$ set LD_LIBRARY_PATH=/home/user/dev/lib/
$ binary_app &
but to no avail.
I'd prefer a solution that can be set from the bash, but if it's not possible, I could also modify the cmake file of this C++ software.
The aim is to allow me to easily start the application either with the vanilla libs, or with my patched libs to see the differences.
Edit: it's a KDE .so file
The library I am testing is a KDE4 library. The official lib is in /usr/lib/kde4/. In that directory, none of the libraries start with the lib prefix.
Whether I run:
/lib/ld-linux-x86-64.so.2 --list --library-path PATH EXEC
or
ldd EXEC
The library is not listed at all.
On the other hand, if I move the original library away from /usr/lib/kde4/, the application starts but the corresponding functionality is missing.
Are KDE4 libraries loaded in a specific way? Maybe the variable to set is different...
Edit 2
All the answers are good and useful... unfortunately, it turned out that the problem does not appear to be related to the lib path setting. I'm dealing with a plugin architecture and the .so loading path appears to be hard-coded somewhere in the application. I need to spend more time within the source code to understand what's happening... Thanks and +1 to all.
From 'man bash':
When a simple command other than a builtin or shell function is to be executed, it is invoked in a separate execution environment that consists of the following. Unless otherwise noted, the values are inherited from the shell.
[....]
· shell variables and functions marked for export, along with variables exported for the command, passed in the environment
You need to 'export' a variable if it is to be seen by programs you execute.
However, you can also try the following:
/lib/ld-linux.so.2 --library-path PATH EXECUTABLE
See http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html
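For example, a rough sketch with the paths from the question (on a 64-bit system the loader is usually /lib/ld-linux-x86-64.so.2, as in the question's edit, and it needs the full path to the executable):
# start the application with the patched libraries searched first
$ /lib/ld-linux-x86-64.so.2 --library-path /home/user/dev/lib/ "$(command -v binary_app)"
# the same loader can also show which libraries it would pick up
$ /lib/ld-linux-x86-64.so.2 --list --library-path /home/user/dev/lib/ "$(command -v binary_app)"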
Try export LD_LIBRARY_PATH=... instead of set.
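A minimal sketch with the paths from the question; once exported, the variable is inherited by every program started from that shell:
# make the patched libraries visible to programs launched from this shell
$ export LD_LIBRARY_PATH=/home/user/dev/lib/
$ binary_app &
# unset it again to go back to the vanilla libraries
$ unset LD_LIBRARY_PATH
$ binary_app &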
I already put this in a comment but after thinking about it I think the best way to do this (using a different library just for testing/debugging) is using LD_PRELOAD, see What is the LD_PRELOAD trick?
From the man page:
LD_PRELOAD
A whitespace-separated list of additional, user-specified, ELF shared libraries to be loaded before all others. This can be used to selectively override functions in other shared libraries. For set-user-ID/set-group-ID ELF binaries, only libraries in the standard search directories that are also set-user-ID will be loaded.
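As a rough sketch of how that could look for the case in the question, assuming the patched library is a single file under /home/user/dev/lib/ (the name libpatched.so is made up for illustration):
# load the patched library before all others so its symbols win
$ LD_PRELOAD=/home/user/dev/lib/libpatched.so binary_app &
# start it without LD_PRELOAD to compare against the vanilla behaviour
$ binary_app &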
Update:
After the updated question it seems the application is using dlopen to open the library using an absolute path. I don't think you can do anything about it. See man dlopen.
Update2:
Maybe there is something you can do: you might be able to LD_PRELOAD your own dlopen function which modifies the path to your own library...
Isn't your app setuid or setgid, by any chance? In that case LD_LIBRARY_PATH will be ignored.
Put everything on one line:
LD_LIBRARY_PATH=foo binary_app&
Related
I am building an application using OpenGL. I have multiple OpenGL installations on a server.
I noticed that even after specifying the link path for the OpenGL libraries in the Makefile, when running the application it still looks for the libraries in different places, resulting in an error.
The correct OpenGL path is /usr/lib/nvidia-410/
yuqiong@sturfee-dnn:~/sturgRender/assets$ ls /usr/lib/nvidia-410/ | grep GL
libEGL_nvidia.so.0
libEGL_nvidia.so.410.129
libEGL.so
libEGL.so.1
libEGL.so.410.129
libGLdispatch.so.0
libGL.so
libGL.so.1
libGL.so.410.129
libGLX_indirect.so.0
libGLX_nvidia.so.0
libGLX_nvidia.so.410.129
libGLX.so
libGLX.so.0
libOpenGL.so
libOpenGL.so.0
However the LD_LIBRARY_PATH points to :
yuqiong@sturfee-dnn:~/sturgRender/assets$ echo $LD_LIBRARY_PATH
/usr/local/torch/lib:/usr/local/tensorrt/lib:/usr/local/caffe/lib/:/usr/local/lib;//usr/local/cuda/lib64:/home/yuqiong/TensorRT-7.0.0.11/lib
This causes the application to fail with an eglDisplayError. However, after changing LD_LIBRARY_PATH to /usr/lib/nvidia-410/, the error is gone.
I suspect this is because libEGL, libGLX, and libOpenGL are dynamically loaded.
However, on another machine, I build the application using CMake, and even though LD_LIBRARY_PATH is empty the application still links the correct libraries.
Why do I need to specify LD_LIBRARY_PATH on one machine but not the other?
Is the information about where to load dynamic libraries stored in system variables like LD_LIBRARY_PATH, or in the application itself?
You need to understand what rpath is, what the library search path is, and the rules for searching libraries.
For rpath and library search path, please check this one:
What's the difference between -rpath and -L?
For the rules for searching libraries, please check this one:
https://unix.stackexchange.com/questions/22926/where-do-executables-look-for-shared-objects-at-runtime
The Makefile-based build apparently does not use rpath, so based on the search order the loader finds the libraries in some other folder, which causes the issue. The CMake-based build either uses rpath, or the library is installed in one of the default folders that the loader checks regardless of these settings.
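One concrete way to check which case you are in is to inspect each binary's dynamic section; for example (the executable name is a placeholder):
# does the build embed an rpath/runpath?
$ readelf -d ./my_app | grep -E 'RPATH|RUNPATH'
# which libraries does the loader actually resolve, and from where?
$ ldd ./my_app | grep -i gl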
I do not want to repeat what is already explained in other answers. I am merely trying to direct you to the right material to read more and understand these settings; then it will be obvious why you experience the issue and how to solve it.
For various reasons, mostly to do with inertia, we don't have a make install target.
Rather, we build our large C++ codebase directly into an FHS-like tree:
output/
    bin/
    lib/
    etc/
    ...
We've recently switched some third-party libraries to dynamic linking, and so we push a number of .so libraries into lib/.
Now, we're used to being able to just launch our executables from bin/, but that no longer works because the loader doesn't search our lib/ directory.
LD_LIBRARY_PATH would solve this, but we would prefer not to have to provide it before every single executable invocation, and we don't want to stick it in the shell's environment, because we typically switch between a number of different build trees in the same shell.
We've considered adding an rpath entry in the generated ELF, but relative paths are typically resolved against $PWD, not the executable's dirname.
Is there a way to nudge the loader to look in dirname(argv[0])/../lib for .so libs?
Basically, I understand that there are lots of ways we can change our habits to make this work (and probably should), but we prefer not to at this point, so can we coerce the Linux so loader to do what we want? Thanks!
Yes, it is possible using rpath and the ${ORIGIN} macro, which is recognized by ld.so at runtime.
From man ld.so:
ld.so understands certain strings in an rpath specification
(DT_RPATH or DT_RUNPATH); those strings are substituted as follows
$ORIGIN (or equivalently ${ORIGIN})
This expands to the directory containing the application executable.
More variables are available. You don't need to coerce the loader to anything. It has the feature for you. :)
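As a sketch under the setup described in the question (file and target names are placeholders), the rpath can be embedded at link time; the single quotes keep the shell from expanding $ORIGIN:
# embed a search path relative to the executable's own directory
$ g++ main.o -o output/bin/myapp -Loutput/lib -lmylib -Wl,-rpath,'$ORIGIN/../lib'
# confirm the entry actually ended up in the binary
$ readelf -d output/bin/myapp | grep -E 'RPATH|RUNPATH'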
I feel somewhat ridiculous, but I'm trying to import the OpenBLAS libraries into a project. They were built with gfortran as the Fortran compiler. My early builds had no issue just pulling libopenblas.so in, but on another system, which doesn't have libgfortran.so, the program chokes on that library when I try to run it. My impression has been that this is a standard library on most, if not all, Linux systems. I could probably add a copy of libgfortran.so to Artifactory and let Apache Ivy pull it in, but it seems like it would make more sense to use the standard version if possible. Is there a good way to pull it in via Ivy when doing an ant resolve command if it doesn't exist on the system?
An alternate solution may be to statically link libgfortran.a in on the compiling system, but my attempts to do so by adding -static RELATIVE_PATH_TO_LIBS/libgfortran.a compile and link fine, yet I still get errors when running the program on the system that lacks the library.
Thank you for whatever help you can provide.
If the executable file format is the "ELF" file format (the default on Linux systems), you can use "readelf" to display the dynamic section of the executable:
readelf -d my_executable_file
It should contain a list of all required shared libraries. This is one way to check whether the executable still requires this library.
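For example, to check specifically for the Fortran runtime dependency:
# list the shared libraries the executable declares it needs
$ readelf -d my_executable_file | grep NEEDED
# or let the loader resolve them and see where libgfortran comes from
$ ldd my_executable_file | grep gfortran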
If "libgfortran.so" is the problem and "libgfortran.a" is available I would rename "libgfortran.a" to "libxxxx.a" and use the linker switches:
-Lpath_containing_libxxxx.a -lxxxx
instead of "-lgfortran". I would not use the "-static" switch, because in that case the linker also tries to link all the other libraries statically. The linker should link "-lxxxx" statically automatically, because no dynamic library with this name is available.
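A rough sketch of that approach; the source location of libgfortran.a is an assumption here, so use whatever gfortran -print-file-name=libgfortran.a reports on your build machine:
# keep a renamed copy of the static library in a private directory
$ mkdir -p third_party/lib
$ cp "$(gfortran -print-file-name=libgfortran.a)" third_party/lib/libxxxx.a
# link against the renamed copy; with no libxxxx.so anywhere,
# the linker has to take the static archive
$ g++ main.o -o my_app -lopenblas -Lthird_party/lib -lxxxx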
I am writing a small application in C++ and I have some questions regarding that. I am basically a Java developer now moving into C++.
If I use some library like Boost, curl, etc., can I make it run without installing that on the client machine? (I mean something like including all the library JAR files inside the project, as in Java.)
I have installed some library or software on Linux. After that, if I type its name in the terminal, it runs the software. For example PHP: after you install it, you can use php from the terminal. How does this work? Can I do the same with my simple C++ project?
Yes. You use a process called static linking, which links all the libraries into one big executable. In ./configure scripts (from autotools), you use the --enable-static flag. When building your program, you use the -static flag. The static libraries are the ones with the .a suffix; shared libraries use .so, sometimes with a version number suffix.
PHP is not a library, it is a language (i.e. executable) which provides its own command-line interface. Your C++ executable can work similarly, you just have to get the input from cin (in <iostream>) and write results to cout, using cerr for error messages.
Your title question, "How to make a library in c++ in linux" (as opposed to using a library): You use the ar program to link several .o files into a single .a library file. You can also use ranlib to clean up the .a file. Read the man pages for those commands to see how they are used.
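A minimal sketch of that workflow (file names are just examples):
# compile sources to object files
$ g++ -c foo.cpp bar.cpp
# archive the objects into a static library and index it
$ ar rcs libmystuff.a foo.o bar.o
$ ranlib libmystuff.a
# link the library into a program
$ g++ main.cpp -o myprog -L. -lmystuff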
1) The answer to your Q1 is compilation with the libraries statically linked. For example, with the gcc compiler:
# gcc -static myfile.c -o myfile
2) The answer to your Q2 is appending the absolute path of the executable's directory to the $PATH environment variable. For example, in the Bash shell:
# export PATH=${PATH}:/home/user/pathofexecutable
The above setting is temporary, only for the terminal in which you run it. To make it available in every terminal on your machine, add the above export command to the /home/user/.bashrc file.
For question 1, you want to compile the program as a static executable. (Just pass -static to g++.) It will make the program much larger, since it needs to include a copy of everything that is normally kept in shared libraries.
For question 2, I'm pretty sure what you mean is having the program on the PATH. Type echo $PATH to see the path on your current machine. If you install your program in one of those directories, it will run from anywhere. (Most likely /usr/local/bin/.)
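For example (the program name is a placeholder):
# copy the program to a directory that is already on the PATH
$ sudo cp myapp /usr/local/bin/
$ myapp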
I have a C++ shared library which, as part of its normal behaviour, fork()/exec()s another executable containing some unstable legacy code. This executable is not useful other than with this library, so I'd like to avoid placing it in a PATH directory. I'd also like to be able to install multiple copies in various locations, so hard-coded paths are not desirable. Is there anything equivalent to an RPATH that will allow exec() to find this executable? Alternatively, is it possible to query the rpath of a shared library from the library itself?
Edit: This post suggests the latter is possible. I'll leave this open in case anybody knows the answer to the question as asked. Is there a way to inspect the current rpath on Linux?
You can always use getenv to get the environment within the shared object, but is RPATH really what you want to use for that? Wouldn't it be better to have the shared object use some sort of configuration file in the user's home directory (or a custom environment variable) that tells it which location to use to run the external binary?
I think the best way to do this is to set an environment variable and use execve() to run the binary. Presumably you could just set PATH and then execve() a shell that would use PATH to find a copy of the executable. The library equivalent would be to set LD_LIBRARY_PATH and execve() a binary that has this library as a dependency.
In either case, you are not changing the external environment, only manufacturing a modified copy that is used with execve().