When I specify build rules in Bazel, my dependencies are either full paths (from the root of the repo) or just the target name (since it's in the same directory):
cc_binary(
    name = "program",
    srcs = ["main.cpp"],
    deps = [
        "//a/full/path/to/the/library:lib",
        "foo",
    ],
)
Assume I'm writing a build rule from directory "the".
I was hoping to do something like this:
cc_binary(
    name = "program",
    srcs = ["main.cpp"],
    deps = [
        "library:lib",
        "foo",
    ],
)
This does not seem to be possible. Is there some way to specify a target relative to the location of the BUILD file?
You cannot.
Relative labels cannot be used to refer to targets in other packages;
the repository identifier and package name must always be specified in this case.
From the Bazel labels documentation
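For example, assuming the BUILD file lives in //a/full/path/to/the and foo is defined in that same package, the only two label forms that work in deps are the fully qualified one and the same-package shorthand:
cc_binary(
    name = "program",
    srcs = ["main.cpp"],
    deps = [
        # A sub-package such as "library" is a different package, so its
        # label must be spelled out from the repository root.
        "//a/full/path/to/the/library:lib",
        # Targets in the same package can be referenced by name alone.
        ":foo",
    ],
)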
Related
I would like to package, let's say, a binary with the pkg_tar rule. But I would also like it to automatically include all deps of that binary, for example all .so files from other Bazel targets that are referenced in deps. Is that possible?
pkg_tar(
    name = "example",
    srcs = ["//myprogram"],  # This only packages the myprogram target
    mode = "0644",
)
Currently, this feature isn't officially supported. You have three basic options:
explicitly enumerate all deps
use one of the "hacks" from https://github.com/bazelbuild/bazel/issues/1920
use the undocumented include_runfiles = True feature: https://github.com/bazelbuild/rules_pkg/issues/145
Setting the (undocumented) include_runfiles = True will include the shared object and any other runfiles of all the transitive dependencies.
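A minimal sketch of the third option (the attribute is undocumented, so its availability depends on your rules_pkg version; treat this as an illustration):
pkg_tar(
    name = "example",
    srcs = ["//myprogram"],
    # Undocumented (see the rules_pkg issue above): also package the runfiles,
    # e.g. the .so files, of the transitive dependencies of //myprogram.
    include_runfiles = True,
    mode = "0644",
)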
I am migrating a large legacy makefile project to Bazel. The project used to copy all sources and headers into a single "build dir" before building, and because of this all source and header files use single-level includes without any prefix (#include "1.hpp").
Bazel requires that modules (libraries) use a path to the header relative to the WORKSPACE file; however, my goal is to introduce Bazel build files that require zero modifications to the source code.
I use .bazelrc to globally set include paths as if the structure were flat:
.bazelrc:
build --copt=-Ia/b/c
/a/b/BUILD
cc_library(
    name = "lib",
    srcs = ["c/1.cpp"],
    hdrs = ["c/1.hpp"],
    visibility = ["//visibility:public"],
)
When I build this target, I see my -I flag in the compiler invocation, but compilation fails because Bazel cannot find the header 1.hpp:
$ bazel build -s //a/b:lib
...
a/b/c/1.cpp:13:10: fatal error: 1.hpp: No such file or directory
13 | #include "1.hpp"
|
Interestingly enough, Bazel prints the gcc command it invokes during the build, and if I run that command by hand, the compiler is able to find 1.hpp and 1.cpp compiles.
How do I make Bazel "see" these includes? Do I really need to specify copts for every target in addition to the global -I flags?
Bazel uses sandboxing: for each action (compiling a C++ file, linking a library) a specific build directory is prepared. That directory contains only the files (via symlinks and other Linux sorcery) that are explicitly declared as a dependency/source/header of the given target.
The trick with --copt=-Ia/b/c is a bad idea, because that option will only work for targets that already depend on //a/b:lib; for everything else, the headers are simply not present in the sandbox.
Use the includes or strip_include_prefix attribute instead:
cc_library(
    name = "lib",
    srcs = ["c/1.cpp"],
    hdrs = ["c/1.hpp"],
    strip_include_prefix = "c",
    visibility = ["//visibility:public"],
)
and add the lib as a dependency of every target that needs access to these headers:
cc_binary(
    name = "some_bin",
    srcs = ["foo.cpp"],
    deps = ["//a/b:lib"],
)
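Alternatively, a sketch of the same library using the includes attribute (same layout assumed); includes adds the package-relative directory to the include path of the library and of everything that depends on it:
cc_library(
    name = "lib",
    srcs = ["c/1.cpp"],
    hdrs = ["c/1.hpp"],
    # Adds the "c" directory (relative to this package) to the include path,
    # for this target and for all targets that depend on it.
    includes = ["c"],
    visibility = ["//visibility:public"],
)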
I would just like to know whether someone has tried doing this.
I am currently using nelhage/rules_boost for my boost dependencies (just to make some things compile for the meantime), but since the code I'm working with is only 100% compatible with 1.55, I cannot use his rules for long.
I could also try adapting his code to work with boost 1.55, but I think it would be a lot easier to just make Bazel depend on an installation of boost, since I am also working with containers.
I usually use boost as a pre-built external dependency with Bazel. I just reference the local installation in my WORKSPACE file and then create a BUILD file for it, e.g.:
# WORKSPACE file
new_local_repository(
    name = "boost",
    path = "/your/path/to/boost",
    build_file = "third_party/boost.BUILD",
)
In the BUILD file you can choose to split headers and libs into separate rules or combine them together. In the following example I keep all the headers in one rule and split the libraries into separate rules:
# third_party/boost.BUILD
cc_library(
    name = "boost-headers",
    hdrs = glob(["include/boost/**"]),
    visibility = ["//visibility:public"],
    includes = ["include"],
)

cc_library(
    name = "boost-atomic",
    srcs = ["lib/libboost_atomic.a"],
    visibility = ["//visibility:public"],
)

cc_library(
    name = "boost-chrono",
    srcs = ["lib/libboost_chrono.a"],
    visibility = ["//visibility:public"],
)

...
Then in my binary/library I pick up the dependencies:
cc_binary(
    name = "main",
    srcs = ["main.cc"],
    deps = [
        "@boost//:boost-headers",
        "@boost//:boost-regex",
    ],
)
This should also work if you have boost installed into /usr/include and /usr/lib, but I haven't tried that, to be honest.
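If you want to try the system installation route, a sketch of the WORKSPACE entry might look like this (untested; the path is illustrative and the globs/paths in boost.BUILD would have to match it):
new_local_repository(
    name = "boost",
    # Assumes headers under /usr/include/boost and libraries under /usr/lib,
    # so the BUILD file above can keep its include/... and lib/... paths.
    path = "/usr",
    build_file = "third_party/boost.BUILD",
)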
Hope this helps.
I'm building a program with the gRPC library using Bazel. My WORKSPACE file:
http_archive(
    name = "com_github_grpc_grpc",
    urls = ["https://github.com/grpc/grpc/archive/v1.8.3.zip"],
    sha256 = "57a2c67abe789ce9e80d49f473515c7479ae494e87dba84463b10bbd0990ad62",
    strip_prefix = "grpc-1.8.3",
)

load("@com_github_grpc_grpc//bazel:grpc_deps.bzl", "grpc_deps")
grpc_deps()
BUILD file:
proto_library(
    name = "test_proto",
    srcs = ["test.proto"],
)

cc_proto_library(
    name = "test_cc_proto",
    deps = [":test_proto"],
)

cc_binary(
    name = "hello",
    srcs = ["hello.cc"],
    deps = [
        ":test_cc_proto",
        "@com_github_grpc_grpc//:grpc++",
    ],
)
Compiling this throws an error:
every rule of type proto_library implicitly depends upon the target '@com_google_protobuf_cc//:cc_toolchain', but this target could not be found because of: no such package '@com_google_protobuf_cc//': The repository could not be resolved.
If I include the com_google_protobuf_cc repository manually, the versions don't match and I get an error saying test.pb.h was generated using a newer version of protoc.
How do I make gRPC load the right version of com_google_protobuf_cc?
How are you including Protobuf manually? What did you put in your BUILD and/or WORKSPACE file(s) to achieve this? It's hard to comment on what could be wrong without knowing exactly what you have tried.
As far as I know, you can include it by downloading the version you require and then adding something like the following to your WORKSPACE file:
local_repository(
    name = "com_google_protobuf",
    path = "../protobuf-3.4.1",
)

local_repository(
    name = "com_google_protobuf_cc",
    path = "../protobuf-3.4.1",
)
Of course change the paths to match the version and location of your downloaded copy of Protobuf. Alternatively you can probably use http_archive to point it directly to where it should be downloaded, in the same way as you have done for gRPC.
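For the http_archive route, a sketch might look like the following; the version and URL are placeholders (and the sha256 is omitted), and the release you pick has to match the protoc version your gRPC checkout expects:
http_archive(
    name = "com_google_protobuf",
    # Placeholder version; use the protobuf release matching your gRPC version.
    urls = ["https://github.com/google/protobuf/archive/v3.5.0.zip"],
    strip_prefix = "protobuf-3.5.0",
)

http_archive(
    name = "com_google_protobuf_cc",
    urls = ["https://github.com/google/protobuf/archive/v3.5.0.zip"],
    strip_prefix = "protobuf-3.5.0",
)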
I'm building a shared library that uses TensorFlow. For now I have placed it in TensorFlow's source tree as a subproject with the following BUILD file:
cc_binary(
    name = "recognizer.so",
    srcs = glob(["recognizer.cpp"]),
    linkshared = 1,
    deps = [
        "//tensorflow:core",
    ],
)
Everything links together, but I end up with a shared library about 94 megabytes in size that does not depend on libtensorflow_cc.so. In fact, no libtensorflow_cc.so binary is built at all.
There is a target //tensorflow:libtensorflow_cc.so. It is declared as a cc_binary, which means (according to Bazel) I cannot depend on it. Moreover, this target is actually non-public, which means I can build it but cannot refer to it from another subproject, at least with Bazel.
So, is there any way to do such a simple thing?
I cannot comment on why libtensorflow.so or libtensorflow_cc.so are :internal, but there is a trick you can use in Bazel to depend on a shared library created by a cc_binary: declare it as a source of a cc rule.
cc_binary(
    name = "liba.so",
    srcs = ["a.cc"],
    linkshared = 1,
)

cc_binary(
    name = "main",
    srcs = ["main.cc", "liba.so"],
)
Now, this is highly unsupported :) In fact, we are going to change how we handle shared libraries in the coming months, so I can almost promise you it will break. You can subscribe to https://github.com/bazelbuild/bazel/issues/1920 or follow bazel-dev@ to stay updated.
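As a follow-up sketch of the same trick (my own extension, not an official pattern): if several targets need the shared library, you can wrap the generated .so in a cc_library once and depend on that, instead of repeating the .so in every srcs list:
cc_library(
    name = "liba_import",
    # Treats the .so built by the cc_binary above as a prebuilt input.
    srcs = [":liba.so"],
    visibility = ["//visibility:public"],
)

cc_binary(
    name = "main2",  # named differently to avoid clashing with "main" above
    srcs = ["main.cc"],
    deps = [":liba_import"],
)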