Setting run-time environment variables at compile time - C++

I have a C++ Vulkan program that needs multiple libraries to be available at run time. Also, Vulkan has a feature called "Validation Layers", which is configured with a config file.
At run time my program needs to know where those libraries and that config file are. I'm guessing there's no way of doing this programmatically, but if there is, let me know. To work around it I set environment variables, namely LD_LIBRARY_PATH (so it finds the libraries) and VK_LAYER_PATH (so it finds Vulkan's Validation Layer config file).
This works, but I want a better way, because it doesn't let me simply double-click the executable and run it: I have to set the env vars first, which is a problem if I'm deploying the program.
My question is: is there a compiler/linker option to do this?
This is the workaround I'm using in my makefile:
run:
	LD_LIBRARY_PATH=./path/to/lib1/:./path/to/lib2 VK_LAYER_PATH=./path/to/vulkan/config ./program_name
I am using Linux, g++ and make.

If you know where the libraries you need to link against will be installed, you can set an rpath. This embeds the search path in the executable's ELF dynamic section; when the dynamic linker runs, it will search these locations in addition to the default ones.
Add -Wl,-rpath,./path/to/lib1/ to your compilation line to drop lib1 from the LD_LIBRARY_PATH list. The -Wl is needed so that the compiler passes the flag on to the linker, where it is actually recognized.
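For example, a minimal sketch using the placeholder paths from the question ($ORIGIN expands at run time to the directory containing the executable, which keeps the install relocatable; the library names foo and bar are made up):
g++ main.cpp -o program_name \
    -L./path/to/lib1 -L./path/to/lib2 -lfoo -lbar \
    -Wl,-rpath,'$ORIGIN/path/to/lib1:$ORIGIN/path/to/lib2'
In a makefile recipe, write $$ORIGIN so make does not eat the dollar sign. Note that rpath only covers shared-library lookup; something like VK_LAYER_PATH still has to be set another way, for example with setenv() near the start of main() or in a small wrapper script.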
This blog seems to have a good description of all the different options

Related

How to prefer one library location vs. another one with Clang?

I have a system-wide libc++.so in /usr/lib64. I want to link my binary against another libc++.so which is located somewhere else, say, in $HOME/.local/lib. Also, I want to be able to find all other libraries the same way as before, assuming that $HOME/.local/lib contains only libc++.so.
I'm trying to do this like: clang++ -L$HOME/.local/lib -lc++, but the compiler still links against /usr/lib64/libc++.so.
How to force the compiler (or linker) to link against a specific library location?
-L adds the directory to the search path used by the linker. This has no effect on the search paths used at runtime. At runtime, the search paths, in order, are:
The environment variable LD_LIBRARY_PATH
rpath specified in the executable
System library path
While you can achieve what you want by setting the environment variable LD_LIBRARY_PATH=$HOME/.local/lib, it is a bad solution since it modifies the search paths of all executables. Specifying the rpath is a much cleaner solution since it only affects the behaviour of your executable. You can do this through your toolchain's linker option, -rpath, passed through the compiler driver with -Wl. So the command would be clang++ -L$HOME/.local/lib -Wl,-rpath,$HOME/.local/lib -lc++.
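To verify that the path actually ended up in the binary (assuming the output file is a.out):
readelf -d a.out | grep -iE 'rpath|runpath'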

How do I pull in unexpected build dependencies of standard libraries

I feel somewhat ridiculous, but I'm trying to import the OpenBLAS libraries into a project. They were built with gfortran as the Fortran compiler. My early builds had no issue just pulling libopenblas.so in, but on another system the program chokes at run time on libgfortran.so, which doesn't exist there. My impression has been that this is a standard library on most, if not all, Linux systems. I could probably add a copy of libgfortran.so to Artifactory and let Apache Ivy pull it in, but it seems like it would make more sense to use the standard version if possible. Is there a good way to pull it in via Ivy during an ant resolve if it doesn't exist on the system?
An alternative may be to statically link libgfortran.a on the build system, but my attempts to do so by adding -static RELATIVE_PATH_TO_LIBS/libgfortran.a compile and link fine, yet I still get errors when running the program on the system that lacks the library.
Thank you for whatever help you can provide.
If the executable file format is the "ELF" file format (default on Linux systems) you can use "readelf" to display the dynamic section of the executable:
readelf -d my_executable_file
It lists all the shared libraries the executable requires, so it is one way to check whether the executable still depends on this library.
If "libgfortran.so" is the problem and "libgfortran.a" is available I would rename "libgfortran.a" to "libxxxx.a" and use the linker switches:
-Lpath_containing_libxxxx.a -lxxxx
instead of "-lgfortran". I would not use the "-static" switch because in this case the linker also tries to link all the other libraries statically. The linker should automatically link "-lxxxx" statically because no dynamic library with this name is available.

Issues with linking library C++

My problem is that I am not able to include a library in my current project. [The way to include a library in a NetBeans project is to link it via the linker settings.] However, in my current project (written by another programmer who has left the organization) the linker option does not appear; I have attached a screenshot. Can someone please help me out?
Please guide me on how to link the library to my project. I have spent many days on this without success.
Assuming you are only interested in libspatialindex:
Make sure you have the appropriate files installed: try a locate libspatialindex and see where it is installed. You could have a *.a, *.so or similar extension. Note the path.
Go into your project root directory, i.e: /home/keira/netbeans/projects/myproject
Try: g++ main.cpp -L/usr/lib/ -lspatialindex -o myfile
Replace the -L/usr/lib with the actual location where you know the library is at.
The link flag is usually the library's name without the lib prefix and extension, preceded by -l. If, for example, the file found on the system is libspatialindex.so, then it's a good bet to try -lspatialindex.
There is a way to find the actual flags on Debian & Ubuntu systems but I cannot remember it at the moment. Alternatively, you can always search online or read the library documentation.
When you see linker errors about undefined functions and the like, it usually means you're not linking against the library, provided you have included the headers and they are found.
Now for NetBeans, I assume there is an option for passing extra arguments to the compiler. In that case, all you need is the -lspatialindex flag, provided NetBeans knows where to find the library and the headers. That is how it works in KDevelop and other IDEs I have used.
Alternatively, if you want more control and more automation, you may want to use a tool like CMake, as sketched below.
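A minimal CMakeLists.txt along those lines might look like this (project, target, and source names are hypothetical):
cmake_minimum_required(VERSION 3.10)
project(myproject CXX)

add_executable(myfile main.cpp)

# Find libspatialindex (.so or .a) on the system and link against it.
find_library(SPATIALINDEX_LIB spatialindex)
target_link_libraries(myfile ${SPATIALINDEX_LIB})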

How to change the library include path of a binary from bash?

I have a software properly installed on Kubuntu.
Now, I am patching and testing some of its libraries.
How can I start the software from bash so that it loads my patched libraries instead of the official libs?
e.g.:
the official libs are located in /usr/lib/
my patched libraries (used during test development) are in /home/user/dev/lib/
I tried:
$ set LD_LIBRARY_PATH=/home/user/dev/lib/
$ binary_app &
but to no avail.
I'd prefer a solution that can be set from the bash, but if it's not possible, I could also modify the cmake file of this C++ software.
The aim is to allow me to easily start the application either with the vanilla libs, or with my patched libs to see the differences.
Edit: it's a KDE .so file
The library I am testing is a KDE4 library. The official lib is in /usr/lib/kde4/. In that directory, none of the libraries starts with the lib prefix.
Whether I do:
/lib/ld-linux-x86-64.so.2 --list --library-path PATH EXEC
or
ldd EXEC
The library is not listed at all.
On the other hand, if I move the original library away from /usr/lib/kde4/, the application starts but the corresponding functionality is missing.
Are KDE4 libraries loaded in a specific way? Maybe the variable to set is different...
Edit 2
All the answers are good and useful... unfortunately, it turned out that the problem does not appear to be related to the lib path setting. I'm dealing with a plugin architecture and the .so loading path appears to be hard-coded somewhere in the application. I need to spend more time within the source code to understand what's happening... Thanks and +1 to all.
From 'man bash':
When a simple command other than a builtin or shell function is to
be executed, it is invoked in a
separate execution environment that
consists of the following. Unless
otherwise noted, the values are
inherited from the shell.
[....]
ยท shell variables and functions marked for export, along
with variables exported for the
command, passed in the environment
You need to 'export' a variable if it is to be seen by programs you execute.
However, you can also try the following:
/lib/ld-linux.so.2 --library-path PATH EXECUTABLE
See http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html
Try export LD_LIBRARY_PATH=... instead of set.
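For example, with the directories from the question:
export LD_LIBRARY_PATH=/home/user/dev/lib:$LD_LIBRARY_PATH
binary_app &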
I already put this in a comment but after thinking about it I think the best way to do this (using a different library just for testing/debugging) is using LD_PRELOAD, see What is the LD_PRELOAD trick?
From the man page:
LD_PRELOAD
A whitespace-separated list of additional, user-specified, ELF shared libraries to be loaded before all others. This can be used to selectively override functions in other shared libraries. For set-user-ID/set-group-ID ELF binaries, only libraries in the standard search directories that are also set-user-ID will be loaded.
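Using it is just a matter of prefixing the command; the library file name here is a made-up placeholder:
LD_PRELOAD=/home/user/dev/lib/libpatched.so binary_app &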
Update:
After the updated question it seems the application is using dlopen to open the library with an absolute path. I don't think you can do anything about it. See man dlopen.
Update2:
Maybe there is something you can do: you might be able to LD_PRELOAD your own dlopen function which modifies the path to your own library...
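A rough sketch of that idea, just to show the mechanism; the plugin path, the patched path, and the file names are made-up placeholders, and the real application may well resolve the path differently:
// preload_dlopen.cpp - build as a shared object and inject it with LD_PRELOAD.
#define _GNU_SOURCE 1   // needed for RTLD_NEXT
#include <dlfcn.h>
#include <cstring>

extern "C" void *dlopen(const char *filename, int flags) {
    // Look up the real dlopen so every other load still works normally.
    using dlopen_fn = void *(*)(const char *, int);
    static dlopen_fn real_dlopen =
        reinterpret_cast<dlopen_fn>(dlsym(RTLD_NEXT, "dlopen"));

    // Redirect the hard-coded plugin path to the patched copy.
    if (filename && std::strcmp(filename, "/usr/lib/kde4/someplugin.so") == 0) {
        return real_dlopen("/home/user/dev/lib/someplugin.so", flags);
    }
    return real_dlopen(filename, flags);
}
Build and run it with something like g++ -shared -fPIC -o preload_dlopen.so preload_dlopen.cpp -ldl and then LD_PRELOAD=./preload_dlopen.so binary_app.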
Isn't your app setuid or setgid by chance? In that case LD_LIBRARY_PATH will be ignored.
Put everything on one line:
LD_LIBRARY_PATH=foo binary_app&

Compile the Python interpreter statically?

I'm building a special-purpose embedded Python interpreter and want to avoid having dependencies on dynamic libraries so I want to compile the interpreter with static libraries instead (e.g. libc.a not libc.so).
I would also like to statically link all dynamic libraries that are part of the Python standard library. I know this can be done using Freeze.py, but is there an alternative so that it can be done in one step?
I found this (mainly concerning static compilation of Python modules):
http://bytes.com/groups/python/23235-build-static-python-executable-linux
Which describes a file used for configuration located here:
<Python_Source>/Modules/Setup
If this file isn't present, it can be created by copying:
<Python_Source>/Modules/Setup.dist
The Setup file has tons of documentation in it and the README included with the source offers lots of good compilation information as well.
I haven't tried compiling yet, but I think with these resources, I should be successful when I try. I will post my results as a comment here.
Update
To get a pure-static python executable, you must also configure as follows:
./configure LDFLAGS="-static -static-libgcc" CPPFLAGS="-static"
Once you build with these flags enabled, you will likely get lots of warnings about "renaming because library isn't present". This means that you have not configured Modules/Setup correctly and need to:
a) add a single line (near the top) like this:
*static*
(that is an asterisk, the word "static", and another asterisk, with no spaces)
b) uncomment all modules that you want to be available statically (such as math, array, etc...)
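For illustration, the top of Modules/Setup might then look roughly like this; the exact module lines differ between Python versions, so copy the real ones from Setup.dist rather than these placeholders:
*static*

# modules below this line are linked statically into the interpreter
math mathmodule.c
array arraymodule.c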
You may also need to add specific linker flags (as mentioned in the link I posted above). My experience so far has been that the libraries are working without modification.
It may also be helpful to run make as follows:
make 2>&1 | grep 'renaming'
This will show all modules that are failing to compile due to being statically linked.
CPython CMake Buildsystem offers an alternative way to build Python, using CMake.
It can build the Python library statically and include in it all the modules you want. Just set the CMake options
BUILD_SHARED OFF
BUILD_STATIC ON
and set the BUILTIN_<extension> options you want to ON.
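In practice that is something like the following, where the source directory path and the exact BUILTIN_* option names are placeholders to check against the buildsystem's documentation:
cmake -DBUILD_SHARED=OFF -DBUILD_STATIC=ON -DBUILTIN_MATH=ON -DBUILTIN_ARRAY=ON /path/to/cpython-cmake-buildsystem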
Using freeze doesn't prevent doing it all in one run (no matter what approach you use, you will need multiple build steps - e.g. many compiler invocations). First, you edit Modules/Setup to include all extension modules that you want. Next, you build Python, getting libpythonxy.a. Then, you run freeze, getting a number of C files and a config.c. You compile these as well, and integrate them into libpythonxy.a (or create a separate library).
You do all this once, for each architecture and Python version you want to integrate. When building your application, you only link with libpythonxy.a, and the library that freeze has produced.
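Put together, the sequence looks roughly like this (file and version names are illustrative):
# 1. enable the extension modules you need in Modules/Setup, then build Python
./configure
make                                    # produces libpythonxy.a

# 2. freeze your application; this emits config.c plus generated C files and a Makefile
python Tools/freeze/freeze.py myapp.py
make

# 3. link your application against libpythonxy.a and the objects produced by the freeze step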
You can try ELF STATIFIER. I've used it before and it works fairly well. I had problems with it in a couple of cases, though, and then had to use a similar program called Ermine. Unfortunately, that one is a commercial product.