I have a cc_library that is header-only. Whenever I try to build the library by itself, nothing actually gets compiled. I deliberately put some errors in the header to provoke a compilation failure, but Bazel doesn't compile anything. Here's a small example.
// test.h
This should not compile fdsafdsafdsa
int foo() { return 1; }
# BUILD
cc_library(
    name = 'test',
    hdrs = ['test.h'],
)
$ bazel build :test
INFO: Analyzed target //:test (2 packages loaded, 3 targets configured).
INFO: Found 1 target...
Target //:test up-to-date (nothing to build)
INFO: Elapsed time: 0.083s, Critical Path: 0.00s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
Is this behavior intended?
I also ran the same experiment with the code split into .h and .cc files, and in that case I did get the error when I compiled.
cc_library (and other rules as well, pkg_tar for instance) does not need to have any sources. This is also valid:
cc_library(
    name = "empty",
    srcs = [],
)
And it is actually quite useful too. You may have configurable attributes such as deps (or srcs) where the actual content only applies under certain conditions:
cc_binary(
    name = "mybinary",
    srcs = ["main.c"],
    deps = select({
        ":platform1": [":some_plat1_only_lib"],
        ":platform2": [":empty"],  # as defined in the above snippet
    }),
)
Or (since above you could just as well have used [] for the :platform2 deps), if you have a larger tree and you expect developers to simply depend on //somelib:somelib, you could route them through an alias so they get a single label without having to worry about the platform-specific details of how a given piece of functionality is provided:
# somelib/BUILD:
alias(
    name = "somelib",
    actual = select({
        ":platform1": ":some_plat1_only_lib",
        ":platform2": ":empty",  # as defined in the above snippet
    }),
    visibility = ["//visibility:public"],  # set appropriately
)
And mybinary or any other target could now say:
cc_binary(
    name = "mybinary",
    srcs = ["main.c"],
    deps = ["//somelib"],
)
And of course, as the other answer here states, there are header-only libraries.
Also, as for the example in your question: Bazel or not, you would normally not compile a header file on its own (nor would it be very useful). The compiler only fails once you build a source file that the header is #included from. That is, for bazel build to fail, another target would have to depend on :test and #include "test.h".
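For example, a consumer target along these lines (uses_test.cc is a hypothetical source file that simply does #include "test.h") would pull the header into a real compile action, and the bogus line in it would then fail the build:

cc_library(
    name = "uses_test",
    srcs = ["uses_test.cc"],  # hypothetical source containing: #include "test.h"
    deps = [":test"],
)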
A header-only library means that you don't need to build anything. Simply include the headers you need in your program and use them.
Related
I have a project that I'm building using Bazel. My application references another Bazel repository as a dependency in its BUILD file; let's call this @dep. I cannot make any changes to the code in @dep, but I need to override a C macro defined in one of the header files in @dep.
I thought about using the compiler option -D to define the symbol at the top of the header file I want to replace (the one containing the C macro), and then using -include to pull in a different header file with my macro, for all files in @dep that are compiled via Bazel. But cc_binary has no option for -include, and copts = [] will only apply to the target being compiled, not to its dependencies.
I came across this post but unfortunately the solution was not posted - How to specify preprocessor includes in Bazel? (-include common_header.h)
I don’t get why they don’t provide a way to use -include on the compiler command line for each file compiled, via the cc_binary build rule. They have defines = [] after all.
In general Bazel doesn't provide many mechanisms like that, because they can cause scaling issues in large repositories. E.g. say project1 and project2 depend on the same tree of 1000 cc_library targets. Building both of them together builds 1000 cc_library targets. But if cc_binary had a way to change how its dependencies are compiled, and project2 did something like that, then building the two projects together would require building 2000 cc_library targets, which doubles the memory, remote caching, and remote execution requirements, and increases analysis time.
As for the defines attribute: it affects the target that sets it and everything that depends on that target, but not that target's own dependencies (it propagates "up" the build graph instead of "down", with cc_binary at the top).
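As a rough sketch of that direction (hypothetical names), a defines entry is added to the compile line of the library itself and of every target that depends on it, but not of the library's own dependencies:

cc_library(
    name = "mylib",
    srcs = ["mylib.c"],
    hdrs = ["mylib.h"],
    defines = ["MYFLAG=1"],  # -DMYFLAG=1 for mylib and for everything that depends on mylib
)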
That said, there are ways to change how the dependencies of a target are compiled; these are called "configuration transitions" (e.g. android_binary uses one to compile multiple CPU architectures into a single app, but going from [x86] to [x86, arm64] roughly doubles the work as above). Configuration transitions are a little involved though.
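For reference, a minimal sketch of what such a transition could look like (hypothetical file and names; attaching it to a rule additionally requires an attribute with cfg = _force_copt plus the usual transition allowlist attribute):

# force_copt.bzl (hypothetical)
def _force_copt_impl(settings, attr):
    # Append a define to --copt for everything built below the target that uses this transition.
    return {"//command_line_option:copt": settings["//command_line_option:copt"] + ["-DBAZ=15"]}

_force_copt = transition(
    implementation = _force_copt_impl,
    inputs = ["//command_line_option:copt"],
    outputs = ["//command_line_option:copt"],
)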
There are easier options, like --copt on the command line, which affects the entire build and might work for you. The example below might do what you need. It relies on the macro being guarded by #ifndef; it doesn't seem to work without the #ifndef, and some cursory searching suggests that macros defined in source code can't be replaced from the command line (I'm not a C/C++ expert myself). If overriding the macro really requires a source file, that's tricky, because there's no straightforward way to add a file to the inputs of every cc action (or some subset of them). If it does work, you can add build --copt=... to a .bazelrc file at the root of your workspace so you don't have to pass it on the command line each time. Otherwise, the only other thing I can think of is a custom toolchain, as others have mentioned.
repo1$ bazel run foo
INFO: Analyzed target //:foo (1 packages loaded, 160 targets configured).
INFO: Found 1 target...
Target //:foo up-to-date:
bazel-bin/foo
INFO: Elapsed time: 0.318s, Critical Path: 0.09s
INFO: 5 processes: 1 internal, 4 linux-sandbox.
INFO: Build completed successfully, 5 total actions
INFO: Build completed successfully, 5 total actions
bar is 16
repo1$ bazel run foo --copt=-DBAZ=15
INFO: Build option --copt has changed, discarding analysis cache.
INFO: Analyzed target //:foo (0 packages loaded, 160 targets configured).
INFO: Found 1 target...
Target //:foo up-to-date:
bazel-bin/foo
INFO: Elapsed time: 0.252s, Critical Path: 0.09s
INFO: 5 processes: 1 internal, 4 linux-sandbox.
INFO: Build completed successfully, 5 total actions
INFO: Build completed successfully, 5 total actions
bar is 38
repo1/BUILD:
cc_binary(
    name = "foo",
    srcs = ["foo.c", "foo.h"],
    deps = ["@repo2//:bar"],
)
repo1/WORKSPACE:
local_repository(
    name = "repo2",
    path = "../repo2",
)
repo1/foo.c:
#include "foo.h"
#include "bar.h"
#include "stdio.h"
int main() {
    printf("bar is %d\n", bar());
    return 0;
}
repo1/foo.h:
repo2/BUILD:
cc_library(
    name = "bar",
    srcs = ["bar.c"],
    hdrs = ["bar.h"],
    deps = [":baz"],
    visibility = ["//visibility:public"],
)
cc_library(
    name = "baz",
    srcs = ["baz.c"],
    hdrs = ["baz.h"],
    visibility = ["//visibility:public"],
)
repo2/WORKSPACE:
repo2/bar.c:
#include "bar.h"
#include "baz.h"
int bar() {
    return baz() + BAR;
}
repo2/bar.h:
#define BAR 8
int bar();
repo2/baz.c:
#include "baz.h"
int baz() {
    return BAZ * 2;
}
repo2/baz.h:
#ifndef BAZ
#define BAZ 4
#endif
int baz();
I am migrating a large legacy makefile project to Bazel. The project used to copy all sources and headers into a single "build dir" before building, and because of this all source and header files use single-level includes without any prefix (#include "1.hpp").
Bazel requires that modules (libraries) include headers by their path relative to the WORKSPACE file; however, my goal is to introduce Bazel build files that require zero modifications to the source code.
I use .bazelrc to globally set include paths as if the structure were flat:
.bazelrc:
build --copt=-Ia/b/c
a/b/BUILD:
cc_library(
    name = "lib",
    srcs = ["c/1.cpp"],
    hdrs = ["c/1.hpp"],
    visibility = ["//visibility:public"],
)
When I build this target, I see my -I flag in the compiler invocation, but compilation fails because Bazel cannot find the header 1.hpp:
$ bazel build -s //a/b:lib
...
a/b/c/1.cpp:13:10: fatal error: 1.hpp: No such file or directory
13 | #include "1.hpp"
|
Interestingly enough, it prints the gcc command it invokes during the build, and if I run that command myself, the compiler is able to find 1.hpp and 1.cpp compiles.
How do I make Bazel "see" these includes? Do I really need to specify copts for every target in addition to the global -I flags?
Bazel uses sandboxing: for each action (compiling a C++ file, linking a library) a dedicated build directory is prepared. That directory contains only the files (via symlinks and other Linux sorcery) that are explicitly declared as a dependency/source/header of the given target.
The trick with --copt=-Ia/b/c is a bad idea, because that option will only work for targets that depend on //a/b:lib.
Use the includes or strip_include_prefix attribute instead:
cc_library(
    name = "lib",
    srcs = ["c/1.cpp"],
    hdrs = ["c/1.hpp"],
    strip_include_prefix = "c",
    visibility = ["//visibility:public"],
)
and add the lib as a dependency of every target that needs to access these headers:
cc_binary(
    name = "some_bin",
    srcs = ["foo.cpp"],
    deps = ["//a/b:lib"],
)
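The includes attribute mentioned above is an alternative; it adds the given directory (relative to the package) to the include search path of the library itself and of everything that depends on it, so the same layout could also be declared roughly like this:

cc_library(
    name = "lib",
    srcs = ["c/1.cpp"],
    hdrs = ["c/1.hpp"],
    includes = ["c"],  # dependents (and lib itself) can #include "1.hpp"
    visibility = ["//visibility:public"],
)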
I have a genrule that takes "in" a config file, and spits "out" a large number of built files (*.so*s and *.h files):
genrule(
    name = "ros_auto",
    local = True,
    srcs = [":package_list"],
    outs = ["ros_built.tgz"],
    tools = [":build_packages.bash"],
    cmd = "$(location :build_packages.bash) $< $@",
)
Next up I need to take all of the files that are output from the above genrule, and create a cc_library from them like below (at the moment the only manageable way I can find to register the output files in the genrule is tarballing them & declaring the tar):
cc_library(
    name = "ros",
    srcs = glob(["lib/*.so*"]),
    hdrs = glob([
        "include/**/*.h",
        "include/**/*.hpp",
    ]),
    strip_include_prefix = "include",
    visibility = ["//visibility:public"],
)
No matter where I look, I keep hitting dead end after dead end (http_archive uses a download_and_extract method that assumes the *.tgz is remote, the cc_library implementation is inaccessible/unextendable Java, etc.).
I would've thought the problem of "I have node A that generates a ton of output files, which node B depends on" would be extremely common and have a simple solution. Am I missing something obvious?
Context:
I have this working with a local repository rule that takes the local directory and uses the cc_library rule above as the build_file parameter (but that means the first step of this process has to happen completely outside of the Bazel build, which is not how this should be done):
new_local_repository(
    name = "ros",
    path = "/tmp/ros_debug",
    build_file = "//ros:BUILD.bazel",
    licenses = ["https://docs.ros.org/diamondback/api/licenses.html"],
)
A basic Bazel philosophy is that a build should depend on unambiguous inputs/dependencies and output strictly defined files, in order to guarantee reproducible builds. Having a genrule generate a "random" number of files that a target then depends on goes against this philosophy. That's why you can't find a good way to achieve this.
One way to work around this is to have a single outs file in your genrule where you write down the names of the generated files (or a log of your script, or whatever). Then you can define an additional filegroup containing all of the generated files, using glob. Finally, add your genrule as a dependency of the rule for node B, and the filegroup to its srcs. That guarantees the files are generated before node B is built.
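If the set of generated files can be written down ahead of time, a related sketch (hypothetical file names, and it assumes the build script can be pointed at the genrule output directory) is to list every generated file explicitly in outs, so the cc_library can consume the genrule outputs directly:

genrule(
    name = "ros_auto",
    srcs = [":package_list"],
    outs = [
        # hypothetical: every generated file written out explicitly
        "include/ros/ros.h",
        "lib/libros.so",
    ],
    tools = [":build_packages.bash"],
    cmd = "$(location :build_packages.bash) $< $(RULEDIR)",
)

cc_library(
    name = "ros",
    srcs = ["lib/libros.so"],
    hdrs = ["include/ros/ros.h"],
    strip_include_prefix = "include",
    visibility = ["//visibility:public"],
)

This only works when the list of outputs is known in advance, which is exactly the point of the philosophy above.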
I am writing a sample C++ project that uses Bazel to serve as an example idiom for other collaborators to follow.
Here is the repository: https://github.com/thinlizzy/bazelexample
I am interested to know if I am doing it 'right', more specifically regarding this file: https://github.com/thinlizzy/bazelexample/blob/38cc07931e58ff5a888dd6a83456970f76d7e5b3/demo/BUILD and how it picks particular implementations.
cc_library(
    name = "demo",
    srcs = ["demo.cpp"],
    deps = [
        "//example:frontend",
    ],
)
cc_binary(
    name = "main_win",
    deps = [
        ":demo",
        "//example:impl_win",
    ],
)
cc_binary(
    name = "main_linux",
    deps = [
        ":demo",
        "//example:impl_linux",
    ],
)
Is this following a correct/expected idiom for Bazel projects? I already do it this way for other projects, concentrating all the platform-specific dependencies in separate targets so that the binaries just depend on them.
Someone on the bazel-discuss list told me to use select instead, but my attempts failed to 'detect' the operating system. I'm sure I did something wrong, but the lack of info and examples doesn't tell me much about how to use it properly.
@bazel_tools contains predefined platform conditions:
$ bazel query @bazel_tools//src/conditions:all
@bazel_tools//src/conditions:windows_msys
@bazel_tools//src/conditions:windows_msvc
@bazel_tools//src/conditions:windows
@bazel_tools//src/conditions:remote
@bazel_tools//src/conditions:host_windows_msys
@bazel_tools//src/conditions:host_windows_msvc
@bazel_tools//src/conditions:host_windows
@bazel_tools//src/conditions:freebsd
@bazel_tools//src/conditions:darwin_x86_64
@bazel_tools//src/conditions:darwin
You can use them directly in the BUILD file:
cc_library(
    name = "impl",
    srcs = ["Implementation.cpp"] + select({
        "@bazel_tools//src/conditions:windows": ["ImplementationWin.cpp"],
        "@bazel_tools//src/conditions:darwin": ["ImplementationMacOS.cpp"],
        "//conditions:default": ["ImplementationLinux.cpp"],
    }),
    # .. same for hdrs and data
)
cc_binary(
    name = "demo",
    deps = [":impl"],
)
See the documentation for select for details on the syntax.
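If none of the predefined conditions fit, you can also define your own config_setting targets and select on those; a sketch using platform constraints (the names here are illustrative):

config_setting(
    name = "windows",
    constraint_values = ["@platforms//os:windows"],
)

config_setting(
    name = "linux",
    constraint_values = ["@platforms//os:linux"],
)

You would then use ":windows" and ":linux" as the keys in the select.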
Add a .bazelrc to your project. Add the lines build:vs2019 --cxxopt=/std:c++14 and build:gcc --cxxopt=-std=c++14. Build your code with bazel build --config=vs2019 //... or bazel build --config=gcc //....
@Vertexwahn's answer caused some confusion on my end, so I hope this answer helps clarify a bit. While his answer does not directly tie into the question, it may be of use to others trying to build on entirely different platforms without file-specific inclusions.
Here is a link to where I answered that particular question: How do I specify portable build configurations for different operating systems for Bazel?
In the Bazel documentation (https://docs.bazel.build/versions/master/cpp-use-cases.html) there's an example like this:
cc_library(
    name = "build-all-the-files",
    srcs = glob(["*.cc"]),
    hdrs = glob(["*.h"]),
)
How incremental is it? I.e., if I change only one of the *.cc files, will it rebuild the whole target or only what's required?
It will just recompile the modified file. Bazel will then link the library if the object file changes (so if you just change a comment, it may skip the link step).
Still have doubts?
Add the -s flag when you build and you will see what Bazel actually runs.