I've got a project currently using CMake, which I would like to switch over to Bazel. The primary dependency is LLVM, which I use to generate LLVM IR. Looking around, there doesn't seem to be much guidance on this: only TensorFlow seems to use LLVM from Bazel (and it auto-generates its config as far as I can tell). There was also a thread on bazel-discuss discussing a similar issue, though my attempts to replicate its approach have failed.
Currently, my best attempt is this (fetcher.bzl):
def _impl(ctx):
    # Download LLVM master.
    ctx.download_and_extract(url = "https://github.com/llvm-mirror/llvm/archive/master.zip")
    # Run `cmake llvm-master` to generate configuration.
    ctx.execute(["cmake", "llvm-master"])
    # The bazel-discuss thread says to delete llvm-master, but I've
    # found that only generated files are pulled out of master, so all
    # the non-generated ones get dropped if I delete this.
    # ctx.execute(["rm", "-r", "llvm-master"])
    # Generate a BUILD file for the LLVM dependency.
    ctx.file('BUILD', """
# Build a library with all the LLVM code in it.
cc_library(
    name = "lib",
    srcs = glob(["**/*.cpp"]),
    hdrs = glob(["**/*.h"]),
    # Include the x86 target and all include files.
    # Add those under llvm-master/... as well because only built files
    # seem to appear under include/...
    copts = [
        "-Ilib/Target/X86",
        "-Iinclude",
        "-Illvm-master/lib/Target/X86",
        "-Illvm-master/include",
    ],
    # Include here as well, not sure whether this or copts is
    # actually doing the work.
    includes = [
        "include",
        "llvm-master/include",
    ],
    visibility = ["//visibility:public"],
    # Currently picking up some gtest targets; I have that dependency
    # already, so just link it here until I filter those out.
    deps = [
        "@gtest//:gtest_main",
    ],
)
""")
    # Generate an empty WORKSPACE file.
    ctx.file('WORKSPACE', '')

get_llvm = repository_rule(implementation = _impl)
And then my WORKSPACE file looks like the following:
load(":fetcher.bzl", "get_llvm")
git_repository(
name = "gflags",
commit = "46f73f88b18aee341538c0dfc22b1710a6abedef", # 2.2.1
remote = "https://github.com/gflags/gflags.git",
)
new_http_archive(
name = "gtest",
url = "https://github.com/google/googletest/archive/release-1.8.0.zip",
sha256 = "f3ed3b58511efd272eb074a3a6d6fb79d7c2e6a0e374323d1e6bcbcc1ef141bf",
build_file = "gtest.BUILD",
strip_prefix = "googletest-release-1.8.0",
)
get_llvm(name = "llvm")
I would then run this with bazel build @llvm//:lib --verbose_failures.
I consistently got errors about missing header files. Eventually I found that running cmake llvm-master generated many header files into the current directory, but seemed to leave the non-generated ones in llvm-master/. I added the same include directories under llvm-master/, and that catches a lot of the files. However, it currently seems that tblgen is not running, and I am still missing critical headers required for the compilation. My current error is:
In file included from external/llvm/llvm-master/include/llvm/CodeGen/MachineOperand.h:18:0,
from external/llvm/llvm-master/include/llvm/CodeGen/MachineInstr.h:24,
from external/llvm/llvm-master/include/llvm/CodeGen/MachineBasicBlock.h:22,
from external/llvm/llvm-master/include/llvm/CodeGen/GlobalISel/MachineIRBuilder.h:20,
from external/llvm/llvm-master/include/llvm/CodeGen/GlobalISel/ConstantFoldingMIRBuilder.h:13,
from external/llvm/llvm-master/unittests/CodeGen/GlobalISel/PatternMatchTest.cpp:10:
external/llvm/llvm-master/include/llvm/IR/Intrinsics.h:42:38: fatal error: llvm/IR/IntrinsicEnums.inc: No such file or directory
Attempting to find this file in particular, I don't see any IntrinsicEnums.inc, IntrinsicEnums.h, or IntrinsicEnums.td. I do see a lot of Intrinsics*.td files, so maybe one of them generates this particular file?
It seems like tblgen is supposed to convert the *.td files into *.h and *.cpp files (please correct me if I am misunderstanding). However, this doesn't seem to be running. I saw that TensorFlow's project has a gentbl() BUILD macro, though it is not practical for me to copy it, as it has too many dependencies on the rest of TensorFlow's build infrastructure.
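For reference, the core of what a gentbl()-style macro does can be approximated with a plain genrule. This is only a sketch of the idea, not TensorFlow's macro: the :llvm-tblgen tool target is hypothetical (tblgen itself would have to be built first), the include path is illustrative, and the exact tblgen flag can differ between LLVM versions.
genrule(
    name = "intrinsic_enums_gen",
    srcs = glob(["llvm-master/include/llvm/IR/*.td"]),
    outs = ["include/llvm/IR/IntrinsicEnums.inc"],
    tools = [":llvm-tblgen"],  # hypothetical target wrapping the tblgen binary
    cmd = "$(location :llvm-tblgen) -gen-intrinsic-enums " +
          "-I external/llvm/llvm-master/include " +  # illustrative include root
          "$(location llvm-master/include/llvm/IR/Intrinsics.td) -o $@",
)
The chicken-and-egg problem here (tblgen is itself part of LLVM and must be compiled before it can generate anything) is a large part of why bootstrapping LLVM under Bazel is painful.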
Is there any way to do this without something as big and complex as TensorFlow's system?
I posted to the llvm-dev mailing list here and got some interesting responses. LLVM definitely wasn't designed to support Bazel and doesn't support it particularly well. It appears to be theoretically possible to have Ninja output all the compile commands and then consume them from Bazel. This is likely to be pretty difficult and would require a separate tool which outputs Skylark code to be run by Bazel.
This seemed pretty complex for the scale of project I was working on, so my workaround was to download the pre-built binaries from releases.llvm.org. This included all the necessary headers, libraries, and tooling binaries. I was able to make a simple but powerful toolchain based around this in Bazel for my custom programming language.
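As a rough sketch of that workaround (the release URL, version, and globs here are illustrative; pick the tarball matching your platform from releases.llvm.org):
# WORKSPACE: fetch a prebuilt LLVM release instead of building from source.
new_http_archive(
    name = "llvm",
    url = "http://releases.llvm.org/6.0.0/clang+llvm-6.0.0-x86_64-linux-gnu-ubuntu-16.04.tar.xz",
    strip_prefix = "clang+llvm-6.0.0-x86_64-linux-gnu-ubuntu-16.04",
    build_file = "llvm.BUILD",
)

# llvm.BUILD: wrap the prebuilt headers and static libraries.
cc_library(
    name = "lib",
    srcs = glob(["lib/libLLVM*.a"]),
    hdrs = glob(["include/llvm/**", "include/llvm-c/**"]),
    includes = ["include"],
    visibility = ["//visibility:public"],
)
The examples below show more complete versions of this setup, including use of the bundled tool binaries.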
Simple example (limited but focused): https://github.com/dgp1130/llvm-bazel-foolang
Full example (more complex and less focused): https://github.com/dgp1130/sanity-lang
Related
In the code base I am working with, we use the Oracle Instant Client library as a third-party dependency in Bazel as follows:
cc_library(
    name = "instant_client_basiclite",
    srcs = glob(["*.so*"]),
    visibility = ["//visibility:public"],
)
The library looks like this:
$ bazel query 'deps(@instant_client_basiclite//:instant_client_basiclite)'
@instant_client_basiclite//:instant_client_basiclite
@instant_client_basiclite//:liboramysql.so
@instant_client_basiclite//:libociicus.so
@instant_client_basiclite//:libocci.so.21.1
@instant_client_basiclite//:libocci.so
@instant_client_basiclite//:libnnz21.so
@instant_client_basiclite//:libclntshcore.so
...
It works as far as linking is concerned, but it seems that the path to the library is still needed at run time; otherwise I get a runtime error (Oracle error 1804). The error can be solved by setting either of the environment variables ORACLE_HOME or LD_LIBRARY_PATH. IBM WebSphere MQ has the same need, in fact (its character-encoding table files need to be found).
ldd on a binary points to .../bazel-bin/app/../../../_solib_k8/_U@instant_Uclient_Ubasiclite_S_S_Cinstant_Uclient_Ubasiclite___U/libocci.so.21.1
How can I set those needed path variables so that bazel test, bazel run, and the Bazel container image rules all work?
One possibility is to add the following command line option:
--test_env=ORACLE_HOME="$(bazel info output_base)/external/instant_client_basiclite"
It is a pity that it cannot be put into .bazelrc ($(bazel info ...) is shell command substitution, which .bazelrc does not perform).
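If your Bazel version supports the env attribute on test rules, another option is to set the variable per target in the BUILD file. A minimal sketch; the target name is hypothetical, and the relative path relies on tests running with the runfiles root as their working directory, so it may need adjusting:
cc_test(
    name = "db_smoke_test",  # hypothetical test target
    srcs = ["db_smoke_test.cc"],
    deps = ["@instant_client_basiclite//:instant_client_basiclite"],
    env = {
        # Illustrative: point the loader at the runfiles copy of the repository.
        "LD_LIBRARY_PATH": "external/instant_client_basiclite",
    },
)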
I was trying to use yaml-cpp in my project. It took me half an hour to correctly link the library by experimenting with the following names. After I finally stumbled across them in this file, I settled on this:
find_package(yaml-cpp REQUIRED)
include_directories(${YAML_INCLUDE_DIRS})
target_link_libraries(${YAML_CPP_LIBRARIES})
It works, but the way I was searching for those names seems brainless.
How is one supposed to figure out the correct names of these variables? It could be YAML_LIBS, YAML_LIBRARY, or YAML_CPP_LIBRARIES; there is no standard, right? What is the appropriate way to determine the correct CMake config for most C++ libraries?
Thank you.
Most FindXXX.cmake scripts have a usage description at the top (as CMake comments starting with #). The same is true of XXXConfig.cmake (or xxx-config.cmake) scripts.
The find_package(XXX) command uses one of these scripts (whichever actually exists), so before using this approach to discover a package, make sure you have read the description embedded in that script.
In your case, the yaml-cpp-config.cmake file (created in the build or in the install directory) contains the following description:
# - Config file for the yaml-cpp package
# It defines the following variables
# YAML_CPP_INCLUDE_DIR - include directory
# YAML_CPP_LIBRARIES - libraries to link against
so the proper usage of the results of find_package(yaml-cpp) is:
include_directories(${YAML_CPP_INCLUDE_DIR})
target_link_libraries(<your-target> ${YAML_CPP_LIBRARIES})
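Putting it together, a minimal CMakeLists.txt sketch (the my_app target and main.cpp are illustrative):
cmake_minimum_required(VERSION 3.1)
project(my_app CXX)

find_package(yaml-cpp REQUIRED)

add_executable(my_app main.cpp)
target_include_directories(my_app PRIVATE ${YAML_CPP_INCLUDE_DIR})
target_link_libraries(my_app ${YAML_CPP_LIBRARIES})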
Summary
I am currently trying to build a header-only C++17 library for iOS. This library should output a .framework which should be consumable in a standard iOS application. The library also (of course) has other dependencies which do have source files.
Background
The existing setup uses Bazel and up until this point has been using Tulsi for integration, but now this needs to change to versioned library releases (unless Tulsi can generate an iOS .framework complete with header files, which I have not seen yet; please educate me if it exists).
I initially used ios_framework from build_bazel_rules_apple, until it was made clear in the documentation that this should only be used within a Bazel ios target (i.e. using ios_application) and that it can't then be taken out and used separately within another project.
I then turned towards ios_static_framework (also in build_bazel_rules_apple), which seems to be the correct thing; however, I can never get it to maintain the header structure.
Currently
Bazel
Using ios_static_framework apparently requires first wrapping the normal cc_library in an objc_library, which can then be put in the deps of the ios_static_framework, i.e.:
# BUILD example (assume all rules are loaded and WORKSPACE is set up correctly)
cc_library(
    name = "somelib",
    deps = [
        "//otherlib0",  # A sub-part of the lib I am trying to build
        "//otherlib1",  # Another sub-part of the lib I am trying to build
    ],
    hdrs = glob(["**/*.h"]),
    visibility = ["//visibility:public"],
)

objc_library(
    name = "somelibwrapperobjc",
    deps = [
        "//:somelib",
    ],
    hdrs = glob(["**/*.h"]),
    visibility = ["//visibility:public"],
)

ios_static_framework(
    name = "somelibidealoutput",
    families = ["iphone"],
    hdrs = glob(["**/*.h"]),
    umbrella_header = "libheader.h",
    deps = [
        "//:somelibwrapperobjc",
    ],
)
It also appears to build without the cc_library step, to the same effect: the output is a directory with a somelibidealoutput.framework and the header file libheader.h, but everything else is gone.
So this ^^ doesn't work.
CMake
So then I tried with CMake from scratch.
cmake_minimum_required(VERSION 3.1)
project(myproject C CXX)

include_directories(${PROJECT_SOURCE_DIR}/projectincludesadditional)

set(HEADER_FILES
    myheader1.hpp
    onelayer/myheader2.hpp
    additional/header/myheader3.hpp
    ...
)
I then attempted the next part in two different ways. Since CMake header-only libraries are simple, I first tried:
add_library(mylibrary INTERFACE)
target_include_directories(mylibrary INTERFACE ${HEADER_FILES})
which could then be linked into the actual library. The other alternative was:
add_library(myconsuminglibrary SHARED
    ${HEADER_FILES}
)
The framework can then be built using the following:
set_target_properties(myconsuminglibrary PROPERTIES
    FRAMEWORK TRUE
    FRAMEWORK_VERSION CXX
    LINKER_LANGUAGE CXX
    MACOSX_FRAMEWORK_IDENTIFIER com.something.idontcare
    MACOSX_FRAMEWORK_INFO_PLIST info.plist
    VERSION 16.4.0
    SOVERSION 1.0.0
    PUBLIC_HEADER "myuselessumbrellaheaderthatdoesntpullanyotherheadersin.hpp"
    XCODE_ATTRIBUTE_CODE_SIGN_IDENTITY "iPhone Developer"
)
But the headers don't copy across! Only the specified public umbrella header does (and even if there's an #include in there, it won't copy that across, since it's not lexing the file).
I followed a possible solution here (on CMake) in the suggested comments, but it seems to just copy one layer deep and again removes the file structure.
Should I just be using a simple install and assuming that's the best way for these headers to then be used in a framework? Essentially copy them and hope for the best?
Xcode
Finally, I tried to make a traditional Xcode framework, starting from scratch in Xcode.
I managed to get everything to build as expected (with minimal changes), but this required path changes to the Xcode project etc.
However, it flattens the hierarchy of the headers, and therefore all relative header paths once again become useless in the consuming application/library.
Question
Can a header-only C++ library be compiled into a dynamic/static library for iOS?
If so, which build system and which config changes are needed in my CMake/Bazel build setup?
Any other suggestions?
For many reasons, I prefer Boost.UTF to gtest (or other alternatives).
I recently decided to use Bazel as my build system, and since I'm essentially at tutorial level, I looked online for a way to use Boost in Bazel, but none of the approaches seems to handle Boost.UTF, and since this library is not header-only (unlike the ones handled in https://github.com/nelhage/rules_boost), I am not sure how to proceed.
How can I add Boost.UTF to Bazel, so I can use it for my test modules?
Any hint is welcome, thanks.
P.S.
The only way I see to work around the issue is to install Boost on the machine I build with and try to have Bazel use that. I guess that is how it deals with the standard libs anyway.
EDIT:
This is the code of my unit test.
btest.cpp
#define BOOST_TEST_MODULE CompactStateTest
#include <boost/test/included/unit_test.hpp>

BOOST_AUTO_TEST_SUITE(Suite1)

BOOST_AUTO_TEST_CASE(Test1)
{
    int x(0);
    BOOST_CHECK_EQUAL(x, 0);
}

BOOST_AUTO_TEST_SUITE_END()
BUILD (the "Makefile" for bazel)
cc_test(
    name = "btest",
    srcs = ["btest.cpp"],
    deps = ["@boost//:test"],
)
From Bazel's (and cc_test's) point of view, a test is a binary that returns a non-zero exit code when it fails, possibly (not obligatorily) writing an XML file of test results to the path given by the XML_OUTPUT_FILE environment variable set at test execution time.
So your goal is to write a cc_test rule with all the deps set, so that Bazel can compile and run it. For that you will need to add a dependency on a cc_library for Boost.UTF. This will be a standard Bazel cc_library with hdrs and srcs (and/or deps).
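A minimal sketch of such a library, assuming the Boost sources are available as an external repository (the paths depend on the actual Boost source layout and are illustrative):
# boost.BUILD (hypothetical build file for an unpacked Boost source tree)
cc_library(
    name = "test",
    srcs = glob(["libs/test/src/*.cpp"]),
    hdrs = glob(["boost/**/*.hpp"]),
    includes = ["."],
    visibility = ["//visibility:public"],
)
(Since btest.cpp above uses boost/test/included/unit_test.hpp, the header-only variant, the srcs may not even be needed in this particular case.)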
I'm anticipating your next question on how to depend on files installed on your local system; for that you will find local_repository (and its new_ variant) useful.
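For example, a sketch of the WORKSPACE entry, assuming Boost was unpacked to a local directory (the path is illustrative):
new_local_repository(
    name = "boost",
    path = "/opt/boost_1_68_0",   # illustrative path to an unpacked Boost source tree
    build_file = "boost.BUILD",   # the build file sketched above
)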
I'm currently working to upgrade a set of C++ binaries that each use their own set of Makefiles to something more modern based on Autotools. However, I can't figure out how to include a third-party library (e.g. the Oracle Instant Client) in the build/packaging process.
Is this something really simple that I've missed?
Edit to add more detail
My current build environment looks like the following:
/src
    /lib
        /libfoo
            ... source and header files
            Makefile
        /oci  # Oracle Instant Client
            ... header and shared libraries
            Makefile
    /bin
        /bar
            ... source and header files
            Makefile
    Makefile
/build
    /bin
    /lib
build.sh
Today the top-level build.sh does the following steps:
Runs each lib's Makefile and copies the output to /build/lib
Runs each binary's Makefile and copies the output to /build/bin
Each Makefile has a set of hardcoded paths to the various sibling directories. Needless to say, this has become a nightmare to maintain. I have started testing out Autotools, but where I am stuck is figuring out the equivalent of copying /src/lib/oci/*.so to /build/lib for compile-time linking and bundling into a distribution.
I figured out how to make this happen.
First, I switched to a non-recursive make.
Next, I made the following changes to configure.ac, as per this page: http://www.openismus.com/documents/linux/using_libraries/using_libraries
AC_ARG_WITH([oci-include-path],
    [AS_HELP_STRING([--with-oci-include-path],
        [location of the oci headers, defaults to lib/oci])],
    [OCI_CFLAGS="-I$withval"],
    [OCI_CFLAGS="-Ilib/oci"])
AC_SUBST([OCI_CFLAGS])

AC_ARG_WITH([oci-lib-path],
    [AS_HELP_STRING([--with-oci-lib-path],
        [location of the oci libraries, defaults to lib/oci])],
    [OCI_LIBS="-L$withval -lclntsh -lnnz11"],
    [OCI_LIBS='-L./lib/oci -lclntsh -lnnz11'])
AC_SUBST([OCI_LIBS])
In Makefile.am you then use the following lines (assuming a binary named foo):
foo_CPPFLAGS = $(OCI_CFLAGS)
foo_LDADD = libnavycommon.la $(OCI_LIBS)

ocidir = $(libdir)
oci_DATA = lib/oci/libclntsh.so.11.1 \
           lib/oci/libnnz11.so \
           lib/oci/libocci.so.11.1 \
           lib/oci/libociicus.so \
           lib/oci/libocijdbc11.so
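With those hooks in place, the OCI location can be overridden at configure time, for example (paths illustrative):
./configure --with-oci-include-path=/opt/oracle/instantclient \
            --with-oci-lib-path=/opt/oracle/instantclient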
The autotools are not a package management system, and attempting to put that type of functionality in is a bad idea. Rather than incorporating the third party library into your distribution, you should simply have the configure script check for its existence and abort if the required library is not available. The onus is on the user to satisfy the dependency. You can then release a binary package that will allow the user to use the package management system to simplify dependency resolution.