Provide run time environment variable path (e.g. LD_LIBRARY_PATH) to third party dependency in bazel - c++

In the code base I am working with we use the oracle instant client library as a third party dependency in Bazel as follows:
cc_library(
    name = "instant_client_basiclite",
    srcs = glob(["*.so*"]),
    visibility = ["//visibility:public"],
)
The library looks like this:
$ bazel query 'deps(@instant_client_basiclite//:instant_client_basiclite)'
@instant_client_basiclite//:instant_client_basiclite
@instant_client_basiclite//:liboramysql.so
@instant_client_basiclite//:libociicus.so
@instant_client_basiclite//:libocci.so.21.1
@instant_client_basiclite//:libocci.so
@instant_client_basiclite//:libnnz21.so
@instant_client_basiclite//:libclntshcore.so
...
It works as far as linking is concerned, but the path to the library is apparently still needed at run time: without it I get a run-time error (Oracle error 1804). The error can be fixed by setting either of the environment variables ORACLE_HOME or LD_LIBRARY_PATH. In fact, IBM WebSphere MQ has the same need (its character-encoding table files need to be found).
ldd on a binary points to .../bazel-bin/app/../../../_solib_k8/_U@instant_Uclient_Ubasiclite_S_S_Cinstant_Uclient_Ubasiclite___U/libocci.so.21.1
How can I set the needed path variables so that bazel test, bazel run, and the Bazel container image rules all work?

One possibility is to add the following command line option:
--test_env=ORACLE_HOME="$(bazel info output_base)/external/instant_client_basiclite"
It is a pity that it cannot be put in .bazelrc.
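A partial workaround: the shell substitution `$(bazel info ...)` cannot go in .bazelrc, but passing `--test_env` without a value forwards the variable from the invoking shell, and that form can live in .bazelrc. A sketch (the exported path is illustrative):

```
# .bazelrc: forward ORACLE_HOME from the invoking shell into the test environment
test --test_env=ORACLE_HOME
```

Then, before invoking Bazel:

```
export ORACLE_HOME=/path/to/instant_client
bazel test //...
```

For bazel run, the binary is started with the client environment, so exporting ORACLE_HOME or LD_LIBRARY_PATH in the shell before running is usually enough; container image rules still need the variable baked into the image's env.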

Related

How to use a c++ library written in a cmake project into another cmake project [duplicate]

I was trying to use yaml-cpp in my project. It took me half an hour to correctly link the library by experimenting with the following names. After I finally stumbled across them in this file, I settled for this:
find_package(yaml-cpp REQUIRED)
include_directories(${YAML_INCLUDE_DIRS})
target_link_libraries(${YAML_CPP_LIBRARIES})
It works, but the way I was searching for those seems brainless.
How is it remotely possible to figure out the correct name of the include variables? It could be YAML_LIBS, YAML_LIBRARY, YAML_CPP_LIBRARIES, there is no standard, right? What is the appropriate way to determine the correct cmake config for most c++ libraries?
Thank you.
Most FindXXX.cmake scripts have a usage description at the top (as CMake comments starting with #). The same is true of XXXConfig.cmake (or xxx-config.cmake) scripts.
The find_package(XXX) command uses one of these scripts (whichever actually exists). So, before using this approach to discover a package, make sure that you have read the description embedded in the script.
In your case, yaml-cpp-config.cmake file (created in the build or in the install directory) contains following description:
# - Config file for the yaml-cpp package
# It defines the following variables
# YAML_CPP_INCLUDE_DIR - include directory
# YAML_CPP_LIBRARIES - libraries to link against
so the proper usage of the results of find_package(yaml-cpp) is
include_directories(${YAML_CPP_INCLUDE_DIR})
target_link_libraries(<your-target> ${YAML_CPP_LIBRARIES})
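Putting the pieces together, a minimal CMakeLists.txt built around the variables documented in yaml-cpp-config.cmake might look like this (the project name and source file are illustrative):

```cmake
cmake_minimum_required(VERSION 3.5)
project(yaml_demo CXX)

# find_package() loads yaml-cpp-config.cmake, which defines the
# documented variables YAML_CPP_INCLUDE_DIR and YAML_CPP_LIBRARIES.
find_package(yaml-cpp REQUIRED)

add_executable(yaml_demo main.cpp)
target_include_directories(yaml_demo PRIVATE ${YAML_CPP_INCLUDE_DIR})
target_link_libraries(yaml_demo ${YAML_CPP_LIBRARIES})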

Building LLVM with Bazel

I've got a project currently using CMake, which I would like to switch over to Bazel. The primary dependency is LLVM, which I use to generate LLVM IR. Looking around, there doesn't seem to be a whole lot of guidance on this as only TensorFlow seems to use LLVM from Bazel (and auto-generates its config as far as I can tell). There was also a thread on bazel-discuss I found which discussed a similar issue, though my attempts to replicate it have failed.
Currently, my best run has got to be this (fetcher.bzl):
def _impl(ctx):
    # Download LLVM master
    ctx.download_and_extract(url = "https://github.com/llvm-mirror/llvm/archive/master.zip")

    # Run `cmake llvm-master` to generate configuration.
    ctx.execute(["cmake", "llvm-master"])

    # The bazel-discuss thread says to delete llvm-master, but I've
    # found that only generated files are pulled out of master, so all
    # the non-generated ones get dropped if I delete this.
    # ctx.execute(["rm", "-r", "llvm-master"])

    # Generate a BUILD file for the LLVM dependency.
    ctx.file('BUILD', """
# Build a library with all the LLVM code in it.
cc_library(
    name = "lib",
    srcs = glob(["**/*.cpp"]),
    hdrs = glob(["**/*.h"]),
    # Include the x86 target and all include files.
    # Add those under llvm-master/... as well because only built files
    # seem to appear under include/...
    copts = [
        "-Ilib/Target/X86",
        "-Iinclude",
        "-Illvm-master/lib/Target/X86",
        "-Illvm-master/include",
    ],
    # Include here as well, not sure whether this or copts is
    # actually doing the work.
    includes = [
        "include",
        "llvm-master/include",
    ],
    visibility = ["//visibility:public"],
    # Currently picking up some gtest targets, I have that dependency
    # already, so just link it here until I filter those out.
    deps = [
        "@gtest//:gtest_main",
    ],
)
""")

    # Generate an empty workspace file
    ctx.file('WORKSPACE', '')

get_llvm = repository_rule(implementation = _impl)
And then my WORKSPACE file looks like the following:
load(":fetcher.bzl", "get_llvm")

git_repository(
    name = "gflags",
    commit = "46f73f88b18aee341538c0dfc22b1710a6abedef",  # 2.2.1
    remote = "https://github.com/gflags/gflags.git",
)

new_http_archive(
    name = "gtest",
    url = "https://github.com/google/googletest/archive/release-1.8.0.zip",
    sha256 = "f3ed3b58511efd272eb074a3a6d6fb79d7c2e6a0e374323d1e6bcbcc1ef141bf",
    build_file = "gtest.BUILD",
    strip_prefix = "googletest-release-1.8.0",
)

get_llvm(name = "llvm")
I would then run this with bazel build @llvm//:lib --verbose_failures.
I would consistently get errors from missing header files. Eventually I found that running cmake llvm-master generated many header files into the current directory, but seemed to leave the non-generated ones in llvm-master/. I added the same include directories under llvm-master/ and that seems to catch a lot of the files. However, currently it seems that tblgen is not running and I am still missing critical headers required for the compilation. My current error is:
In file included from external/llvm/llvm-master/include/llvm/CodeGen/MachineOperand.h:18:0,
from external/llvm/llvm-master/include/llvm/CodeGen/MachineInstr.h:24,
from external/llvm/llvm-master/include/llvm/CodeGen/MachineBasicBlock.h:22,
from external/llvm/llvm-master/include/llvm/CodeGen/GlobalISel/MachineIRBuilder.h:20,
from external/llvm/llvm-master/include/llvm/CodeGen/GlobalISel/ConstantFoldingMIRBuilder.h:13,
from external/llvm/llvm-master/unittests/CodeGen/GlobalISel/PatternMatchTest.cpp:10:
external/llvm/llvm-master/include/llvm/IR/Intrinsics.h:42:38: fatal error: llvm/IR/IntrinsicEnums.inc: No such file or directory
Attempting to find this file in particular, I don't see any IntrinsicEnums.inc, IntrinsicEnums.h, or IntrinsicEnums.dt. I do see a lot of Intrinsics*.td files, so maybe one of them generates this particular file?
It seems like tblgen is supposed to convert the *.td files to *.h and *.cpp files (please correct me if I am misunderstanding). However, this doesn't seem to be running. I saw that in Tensorflow's project, they have a gentbl() BUILD macro, though it is not practical for me to copy it as it has way too many dependencies on the rest of Tensorflow's build infrastructure.
Is there any way to do this without something as big and complex as Tensorflow's system?
I posted to the llvm-dev mailing list here and got some interesting responses. LLVM definitely wasn't designed to support Bazel and doesn't do so particularly well. It appears to be theoretically possible by using Ninja to output all the compile commands and then consume them from Bazel. This is likely to be pretty difficult and would require a separate tool which outputs Skylark code to be run by Bazel.
This seemed pretty complex for the scale of project I was working on, so my workaround was to download the pre-built binaries from releases.llvm.org. This included all the necessary headers, libraries, and tooling binaries. I was able to make a simple but powerful toolchain based around this in Bazel for my custom programming language.
Simple example (limited but focused): https://github.com/dgp1130/llvm-bazel-foolang
Full example (more complex and less focused): https://github.com/dgp1130/sanity-lang
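For reference, the prebuilt-release approach might be sketched as follows. The release URL, version, and glob patterns are illustrative assumptions, not taken from the linked repositories:

```python
# WORKSPACE (sketch): fetch a prebuilt LLVM release from releases.llvm.org
# and attach a hand-written BUILD file to it.
new_http_archive(
    name = "llvm",
    url = "http://releases.llvm.org/6.0.0/clang+llvm-6.0.0-x86_64-linux-gnu-ubuntu-16.04.tar.xz",
    build_file = "llvm.BUILD",
    strip_prefix = "clang+llvm-6.0.0-x86_64-linux-gnu-ubuntu-16.04",
)

# llvm.BUILD (sketch): wrap the prebuilt static libraries and headers,
# including the pre-generated *.inc and *.def files that tblgen would
# otherwise have to produce.
cc_library(
    name = "lib",
    srcs = glob(["lib/libLLVM*.a"]),
    hdrs = glob([
        "include/**/*.h",
        "include/**/*.def",
        "include/**/*.inc",
    ]),
    includes = ["include"],
    visibility = ["//visibility:public"],
)
```

Because the release tarball already contains the tblgen-generated headers, this sidesteps the IntrinsicEnums.inc problem entirely.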

How to add a dependency in waf builder's wscript script

In my project (which uses a waf/wscript based build system), I am now adding the MongoDB C++ driver APIs. I figured out that 'libmongoclient.a' is not getting added as a linker option (at compile time), and I get undefined references to all the MongoDB C++ driver API calls.
I want to understand, how do I modify my wscript so that it picks up the mongoclient related library by itself and links it properly. It perhaps involves updating the configuration function of wscript. I am new to the waf build system, and not sure how to change it.
I have built and installed the mongodb c++ driver as follows:
- INCLUDE: /usr/local/include/mongo/
- LIB: /usr/local/lib/libmongoclient.a
I posted a similar question earlier in this regard, and the above one is more specific problem statement.
https://stackoverflow.com/questions/30020574/building-project-with-waf-script-and-eclipse
Since I am just invoking ./waf from within eclipse, I believe, the options that I specify into Eclipse's build environment are not being picked up by the waf (and hence the library option for mongoclient).
I figured this out and the steps are as follows:
Added the following check to the configure command/function:
conf.check_cfg(package='libmongoclient', args=['--cflags', '--libs'],
               uselib_store='MONGOCLIENT', mandatory=True)
After this step, we need to add a package configuration file (libmongoclient.pc, matching the package name given to check_cfg) under the /usr/local/lib/pkgconfig path. This is the file where we specify the paths to the lib and headers. The content of this file is pasted below.
prefix=/usr/local
libdir=/usr/local/lib
includedir=/usr/local/include/mongo
Name: libmongoclient
Description: Mongodb C++ driver
Version: 0.2
Libs: -L${libdir} -lmongoclient
Cflags: -I${includedir}
Added the above library into the build function for the specific program which depends on it (i.e. MongoClient).
mobility = bld(
    target='bin/mobility',
    features='cxx cxxprogram',
    source='src/main.cpp',
    use='mob-objects MONGOCLIENT',
)

Compile proftpd and include a library copy inside the installation directory

I already asked a quite similar question, but in fact I have now changed my mind.
I would like to compile proftpd and add a copy of the libraries it uses to the chosen installation directory.
Let's say I define a prefix in my compilation like:
/usr/local/proftpd
Under this directory I would like to find and use only these directories:
./lib
./usr/lib
./usr/bin
./usr/.....
./etc
./var/log/proftpd
./bin
./sbin
./and others I will not put the whole list
So the idea is: after I have all the libraries and config files in my main directory, I could tar it, send it to another server with the same OS, and use it there without installing all the dependencies of proftpd.
I know it sounds like a Windows installer that does not use shared libraries, but that's in fact exactly what I'm trying to accomplish.
So far I have managed to compile it on AIX using this command line:
./configure --with-modules=mod_tls:mod_sql:mod_sql_mysql:mod_sql_passwd:mod_sftp:mod_sftp_sql --without-getopt --enable-openssl --with-includes=/opt/freeware/include:/opt/freeware/include/mysql/mysql/:/home/poney2/src_proftpd/libmath_header/ --with-libraries=/opt/freeware/lib:/opt/freeware/lib/mysql/mysql/:/home/poney2/src_proftpd/libmath_lib --prefix=/home/poney/proftpd_bin --exec-prefix=/home/poney/proftpd_bin/proftpd
Before you ask why I'm doing this: it's because I have to compile proftpd on IBM AIX with almost all modules, and this is not available in the IBM rpm binary repositories.
The use of this LDFLAG:
LDFLAGS="-Wl,-blibpath:/a/new/lib/path"
where /a/new/lib/path contains all your libraries, works with both the XL C and GCC compilers.
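The bundling flow itself is independent of AIX. A minimal sketch of the "install under one prefix, then ship a relocatable tarball" idea, with placeholder files standing in for the real proftpd binary and libraries:

```shell
# Create a throwaway prefix that mimics the layout described above
PREFIX=$(mktemp -d)/proftpd
mkdir -p "$PREFIX/bin" "$PREFIX/sbin" "$PREFIX/lib" "$PREFIX/etc"

# Stand-ins for the real binary and its bundled libraries
echo stub > "$PREFIX/sbin/proftpd"
echo stub > "$PREFIX/lib/libcrypto.a"

# Pack the whole prefix so it can be unpacked on another machine;
# -C keeps the archive paths relative so the tree is relocatable.
tar -cf /tmp/proftpd_bundle.tar -C "$(dirname "$PREFIX")" proftpd
tar -tf /tmp/proftpd_bundle.tar
```

On the target server, untarring and pointing -blibpath (baked in at link time) at the bundled lib directory lets the binary find its libraries without touching the system paths.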

Autotools: Including a prebuilt 3rd party library

I'm currently working to upgrade a set of C++ binaries, each of which uses its own set of Makefiles, to something more modern based on Autotools. However, I can't figure out how to include a third-party library (e.g. the Oracle Instant Client) in the build/packaging process.
Is this something really simple that I've missed?
Edit to add more detail
My current build environment looks like the following:
/src
/lib
/libfoo
... source and header files
Makefile
/oci #Oracle Instant Client
... header and shared libraries
Makefile
/bin
/bar
... source and header files
Makefile
Makefile
/build
/bin
/lib
build.sh
Today the top level build.sh does the following steps:
Runs each lib's Makefile and copies the output to /build/lib
Runs each binary's Makefile and copies the output to /build/bin
Each Makefile has a set of hardcoded paths to the various sibling directories. Needless to say, this has become a nightmare to maintain. I have started testing out autotools, but I am stuck figuring out the equivalent of copying /src/lib/oci/*.so to /build/lib for compile-time linking and bundling into a distribution.
I figured out how to make this happen.
First I switched to a non recursive make.
Next I made the following changes to configure.ac, as per this page: http://www.openismus.com/documents/linux/using_libraries/using_libraries
AC_ARG_WITH([oci-include-path],
  [AS_HELP_STRING([--with-oci-include-path],
    [location of the oci headers, defaults to lib/oci])],
  [OCI_CFLAGS="-I$withval"],
  [OCI_CFLAGS="-Ilib/oci"])
AC_SUBST([OCI_CFLAGS])

AC_ARG_WITH([oci-lib-path],
  [AS_HELP_STRING([--with-oci-lib-path],
    [location of the oci libraries, defaults to lib/oci])],
  [OCI_LIBS="-L$withval -lclntsh -lnnz11"],
  [OCI_LIBS='-L./lib/oci -lclntsh -lnnz11'])
AC_SUBST([OCI_LIBS])
In the Makefile.am you then use the following lines (assuming a binary named foo)
foo_CPPFLAGS = $(OCI_CFLAGS)
foo_LDADD = libnavycommon.la $(OCI_LIBS)

ocidir = $(libdir)
oci_DATA = lib/oci/libclntsh.so.11.1 \
           lib/oci/libnnz11.so \
           lib/oci/libocci.so.11.1 \
           lib/oci/libociicus.so \
           lib/oci/libocijdbc11.so
The autotools are not a package management system, and attempting to put that type of functionality in is a bad idea. Rather than incorporating the third party library into your distribution, you should simply have the configure script check for its existence and abort if the required library is not available. The onus is on the user to satisfy the dependency. You can then release a binary package that will allow the user to use the package management system to simplify dependency resolution.
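Following that advice, the bundling rules above could be replaced by a configure-time check that fails fast when the client library is missing. OCIEnvCreate is a real OCI entry point, but treat the exact check as a sketch:

```
# configure.ac (sketch): abort unless the Oracle client library is present
AC_CHECK_LIB([clntsh], [OCIEnvCreate],
  [OCI_LIBS="-lclntsh -lnnz11"],
  [AC_MSG_ERROR([Oracle Instant Client (libclntsh) not found; install it or set LDFLAGS])])
AC_SUBST([OCI_LIBS])
```

With this in place, a user who lacks the Instant Client gets a clear error at configure time instead of a link or packaging failure later.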