I have a project that I'm building using bazel. My application references another bazel repository as a dependency in its BUILD file; let's call it @dep. I cannot make any changes to the code in @dep, but I need to override a C macro defined in one of the header files in @dep.
I thought about using the compiler option -D to define the include guard of the header I want to replace (the one containing the C macro), and then using -include to inject a different header with my macro, for all files in @dep that bazel compiles. But cc_binary has no option for -include, and copts = [] only applies to the target being compiled, not to its dependencies.
I came across this post, but unfortunately no solution was posted: How to specify preprocessor includes in Bazel? (-include common_header.h)
I don't get why they don't provide a way to pass -include on the compiler command line for each compiled file via the cc_binary build rule. They provide defines = [], after all.
In general bazel doesn't provide many mechanisms like that, because they can cause scaling issues in large repositories. Say project1 and project2 depend on the same tree of 1000 cc_library targets. Building both of them together builds those 1000 targets once. But if cc_binary had a way to change how its dependencies are compiled, and project2 used it, then building the two projects together would require building 2000 cc_library targets, which doubles the memory, remote-caching, and remote-execution requirements, and increases analysis time.
The defines attribute affects things that depend on the target setting defines, but does not affect that target's own dependencies; it propagates "up" the build graph rather than "down" (with cc_binary at the top).
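As a quick sketch (the target and file names here are made up), a define set on a library is also applied when compiling the targets that depend on it, but never its own dependencies:

```python
# BUILD sketch -- hypothetical names, illustrating "up the graph" propagation
cc_library(
    name = "base",
    hdrs = ["base.h"],
    defines = ["USE_FAST_PATH"],  # applied to :base and to everything depending on :base
)

cc_binary(
    name = "app",
    srcs = ["app.c"],  # compiled with -DUSE_FAST_PATH because of the dep below
    deps = [":base"],
)
```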
That said, there are ways to change how the dependencies of a target are compiled; these are called "configuration transitions". (For example, android_binary uses one to compile multiple CPU architectures into a single app, but going from [x86] to [x86, arm64] roughly doubles the work, as above.) Configuration transitions are a little involved, though.
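For reference, here is a rough Starlark sketch of such a transition (the names are mine, and this is not a tested drop-in solution; real setups also need the transition allowlist wired up):

```python
# defs.bzl -- rough sketch of a transition that forces an extra --copt
# onto everything reached through the deps edge

def _add_copt_impl(settings, attr):
    # Append a define to --copt for the subgraph below the edge.
    return {
        "//command_line_option:copt": settings["//command_line_option:copt"] + ["-DBAZ=15"],
    }

_add_copt = transition(
    implementation = _add_copt_impl,
    inputs = ["//command_line_option:copt"],
    outputs = ["//command_line_option:copt"],
)

def _with_copt_impl(ctx):
    # Just forward the files of the (re-configured) dependencies.
    return [DefaultInfo(files = depset(
        transitive = [dep[DefaultInfo].files for dep in ctx.attr.deps],
    ))]

with_copt = rule(
    implementation = _with_copt_impl,
    attrs = {
        "deps": attr.label_list(cfg = _add_copt),
        "_allowlist_function_transition": attr.label(
            default = "@bazel_tools//tools/allowlists/function_transition_allowlist",
        ),
    },
)
```

Note the caveat above still applies: everything reached through such an edge is rebuilt in the new configuration, so the build graph can grow accordingly.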
There are easier options, like --copt on the command line, which affects the entire build; that might work for you. The example below might do what you need. It relies on the macro being wrapped in #ifndef: it doesn't seem to work without the guard, and some cursory searching suggests that an unconditional #define in source code can't be overridden from the command line (I'm not a C/C++ expert myself).

If overriding the macro really requires changing a source file, that's tricky, because there's no straightforward way to add a file to the inputs of every cc action (or some subset of them). If --copt does work, you can add build --copt=... to a .bazelrc file at the root of your workspace so you don't have to pass it on the command line every time. Otherwise, the only other thing I can think of is a custom toolchain, as others have mentioned.
repo1$ bazel run foo
INFO: Analyzed target //:foo (1 packages loaded, 160 targets configured).
INFO: Found 1 target...
Target //:foo up-to-date:
bazel-bin/foo
INFO: Elapsed time: 0.318s, Critical Path: 0.09s
INFO: 5 processes: 1 internal, 4 linux-sandbox.
INFO: Build completed successfully, 5 total actions
INFO: Build completed successfully, 5 total actions
bar is 16
repo1$ bazel run foo --copt=-DBAZ=15
INFO: Build option --copt has changed, discarding analysis cache.
INFO: Analyzed target //:foo (0 packages loaded, 160 targets configured).
INFO: Found 1 target...
Target //:foo up-to-date:
bazel-bin/foo
INFO: Elapsed time: 0.252s, Critical Path: 0.09s
INFO: 5 processes: 1 internal, 4 linux-sandbox.
INFO: Build completed successfully, 5 total actions
INFO: Build completed successfully, 5 total actions
bar is 38
repo1/BUILD:
cc_binary(
    name = "foo",
    srcs = ["foo.c", "foo.h"],
    deps = ["@repo2//:bar"],
)
repo1/WORKSPACE:
local_repository(
    name = "repo2",
    path = "../repo2",
)
repo1/foo.c:
#include "foo.h"
#include "bar.h"
#include "stdio.h"
int main() {
    printf("bar is %d\n", bar());
    return 0;
}
repo1/foo.h:
repo2/BUILD:
cc_library(
    name = "bar",
    srcs = ["bar.c"],
    hdrs = ["bar.h"],
    deps = [":baz"],
    visibility = ["//visibility:public"],
)
cc_library(
    name = "baz",
    srcs = ["baz.c"],
    hdrs = ["baz.h"],
    visibility = ["//visibility:public"],
)
repo2/WORKSPACE:
repo2/bar.c:
#include "bar.h"
#include "baz.h"
int bar() {
    return baz() + BAR;
}
repo2/bar.h:
#define BAR 8
int bar();
repo2/baz.c:
#include "baz.h"
int baz() {
    return BAZ * 2;
}
repo2/baz.h:
#ifndef BAZ
#define BAZ 4
#endif
int baz();
I have a CMakeLists.txt in which I want to generate several source files (namely, versiondata.cpp and version.rc.inc, included by res.rc) that depends on the general environment (current git HEAD, gcc -v output, CMakeCache.txt itself, and so on).
If it depended on just some files, I would generate it using an add_custom_command directive with the relevant DEPENDS and OUTPUT clauses. However, it's tricky to pinpoint its file dependencies exactly; ideally, I'd want to run my script every time I call make, updating the files only if needed. If the generated files have actually been touched, the targets depending on them should be rebuilt (the script is careful not to overwrite the files if they would have the same content as before).
My first attempt was using an add_custom_command with a fake main output, like this:
add_custom_command(OUTPUT versiondata.cpp.fake versiondata.cpp version.rc.inc
    COMMAND my_command my_options
    COMMENT "Generating versiondata.cpp"
)
# ...
# explicitly set the dependencies of res.rc, as they are not auto-deduced
set_source_files_properties(res.rc PROPERTIES OBJECT_DEPENDS "${PROJECT_BINARY_DIR}/version.rc.inc;${PROJECT_SOURCE_DIR}/other_stuff.ico")
# ...
add_executable(my_executable WIN32 ALL main.cpp versiondata.cpp res.rc)
versiondata.cpp.fake is never really generated, so the custom command is always run. This worked correctly, but always rebuilt my_executable, as CMake for some reason automatically touches the output files (if generated) even though my script left them alone.
Then I thought I might make it work using an add_custom_target, that is automatically "never already satisfied":
add_custom_target(versiondata BYPRODUCTS versiondata.cpp version.rc.inc
    COMMAND my_command my_options
    COMMENT "Generating versiondata.cpp"
)
# ...
# explicitly set the dependencies of res.rc, as they are not auto-deduced
set_source_files_properties(res.rc PROPERTIES OBJECT_DEPENDS "${PROJECT_BINARY_DIR}/version.rc.inc;${PROJECT_SOURCE_DIR}/other_stuff.ico")
# ...
add_executable(my_executable WIN32 ALL main.cpp versiondata.cpp res.rc)
The idea here is that the versiondata target should be "pulled in" from the targets that depend on its BYPRODUCTS, and should be always executed. This seems to work on CMake 3.20, and the BYPRODUCTS seem to have some effect because if I remove the dependencies from my_executable my script doesn't get called.
However, on CMake 3.5 I get
make[2]: *** No rule to make target 'version.rc.inc', needed by 'CMakeFiles/my_executable.dir/res.rc.res'. Stop.
and if I remove the explicit dependency from version.rc.inc it doesn't get generated at all
[ 45%] Building RC object CMakeFiles/my_executable.dir/res.rc.res
/co/my_executable/res.rc:386:26: fatal error: version.rc.inc: No such file or directory
#include "version.rc.inc"
^
compilation terminated.
/opt/mingw32-dw2/bin/i686-w64-mingw32-windres: preprocessing failed.
CMakeFiles/my_executable.dir/build.make:5080: recipe for target 'CMakeFiles/my_executable.dir/res.rc.res' failed
make[2]: *** [CMakeFiles/my_executable.dir/res.rc.res] Error 1
so I suspect that the fact that this works in 3.20 is just by chance.
Long story short: is there some way to make this work as I wish?
In CMake there are two types of dependencies:
Target-level dependency, between targets.
A target can be built only after all the targets it depends on have been built, unconditionally.
File-level dependency, between files.
If some file is older than one of its dependencies, the file is regenerated using the corresponding COMMAND.
The key factor is that checking the timestamps of dependent files is performed strictly after the dependent targets have been built.
For correct regeneration of the versiondata.cpp file and the executable based on it, one needs both dependencies:
Target-level, which ensures that the versiondata custom target
will be built before the executable:
add_dependencies(my_executable versiondata)
File-level, which ensures that the executable will be rebuilt whenever
the file versiondata.cpp is updated.
This dependency is created automatically by listing versiondata.cpp
among the sources of the executable.
Now about BYPRODUCTS.
Even without an explicit add_dependencies, your code works on CMake 3.20 because BYPRODUCTS generates the needed target-level dependency automatically.
This can be deduced from the description of the DEPENDS option in add_custom_target/add_custom_command:
Changed in version 3.16: A target-level dependency is added if any dependency is a byproduct of a target or any of its build events in the same directory to ensure the byproducts will be available before this target is built.
and by noting that add_executable effectively depends on every one of its source files.
Because the quoted note for DEPENDS applies only to CMake 3.16 and later, in older CMake versions BYPRODUCTS does not create the target-level dependency automatically, and one needs to resort to an explicit add_dependencies.
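Putting both dependency types together, a sketch that should behave the same on old and new CMake versions (command and file names as in the question):

```cmake
add_custom_target(versiondata BYPRODUCTS versiondata.cpp version.rc.inc
    COMMAND my_command my_options
    COMMENT "Generating versiondata.cpp"
)

add_executable(my_executable WIN32 main.cpp versiondata.cpp res.rc)

# Explicit target-level dependency: redundant on CMake >= 3.16 (BYPRODUCTS
# creates it automatically) but required on older versions.
add_dependencies(my_executable versiondata)
```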
I am migrating a large legacy makefiles project to Bazel. The project used to copy all sources and headers into a single "build dir" before building, and because of this all source and header files use single-level includes, without any prefix (#include "1.hpp").
Bazel requires that libraries reference headers by their path relative to the WORKSPACE file; however, my goal is to introduce Bazel build files that require zero modifications of the source code.
I use bazelrc to globally set include paths as if the structure were flat:
.bazelrc:
build --copt=-Ia/b/c
a/b/BUILD:
cc_library(
    name = "lib",
    srcs = ["c/1.cpp"],
    hdrs = ["c/1.hpp"],
    visibility = ["//visibility:public"],
)
When I build this target, I see my -I flag in the compiler invocation, but compilation fails because bazel cannot find the header 1.hpp:
$ bazel build -s //a/b:lib
...
a/b/c/1.cpp:13:10: fatal error: 1.hpp: No such file or directory
13 | #include "1.hpp"
|
Interestingly enough, bazel prints the gcc command it invokes during the build, and if I run that command myself, the compiler is able to find 1.hpp and 1.cpp compiles.
How can I make bazel "see" these includes? Do I really need to additionally specify copts for every target, on top of the global -I flags?
Bazel uses sandboxing: for each action (compile a C++ file, link a library) a dedicated build directory is prepared. That directory contains only the files (via symlinks and other Linux sorcery) that are explicitly declared as a dependency/source/header of the given target.
The trick with --copt=-Ia/b/c is a bad idea, because that option will only work for targets that depend on //a/b:lib.
Use the includes or strip_include_prefix attribute instead:
cc_library(
    name = "lib",
    srcs = ["c/1.cpp"],
    hdrs = ["c/1.hpp"],
    strip_include_prefix = "c",
    visibility = ["//visibility:public"],
)
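Alternatively, a sketch using the includes attribute (the path is relative to the package, and the resulting include path is propagated to dependents as well):

```python
cc_library(
    name = "lib",
    srcs = ["c/1.cpp"],
    hdrs = ["c/1.hpp"],
    includes = ["c"],  # a/b/c is added to the include path of this target and its dependents
    visibility = ["//visibility:public"],
)
```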
and add the lib as a dependency of every target that needs access to these headers:
cc_binary(
    name = "some_bin",
    srcs = ["foo.cpp"],
    deps = ["//a/b:lib"],
)
I'm new to cmake, so up front, my apologies if this question is too basic.
Question is,
I have logs under #ifdef DEBUG which I want to print only for debug builds.
Something like this ..
void func() {
    // some code here
#ifdef DEBUG
    print_log(...); // this portion should execute only for debug builds
#endif
    // some code here
}
How can I achieve this with cmake ?
I have already looked at #ifdef DEBUG with CMake independent from platform and cmakelists debug flag not executing code within "ifdef DEBUG", suggestions here don't seem to work for me.
(project is on Linux platform)
Modern CMake uses a target-based approach, which allows you to specify settings that are restricted to a target, as opposed to being global (and affecting all targets). This gives you control over how state (include paths, library dependencies, compiler defines, compiler flags, etc.) propagates transitively to dependent targets, reducing its visible scope.

The approach you decide on will depend largely on how complex your application is, for instance how many targets (executables & libraries) exist in the system. The more complex the system, the more benefit in reduced complexity and compile time you will get from the target-based approach.

In the simplest case, you could set this up with the modern target-based CMake approach as follows (where exe is the name of your executable):
add_executable(exe "")
target_sources(exe
    PRIVATE
        main.cpp
)
target_compile_definitions(exe
    PRIVATE
        # In the Debug configuration, pass the DEBUG define to the compiler
        $<$<CONFIG:Debug>:DEBUG>
)
Added set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} -DDEBUG") to CMakeLists.txt
Created a Debug/ directory & cd Debug
cmake -DCMAKE_BUILD_TYPE="Debug" ../
Worked!
I have a cc_library that is header-only. Whenever I try to build such a library by itself, it won't actually compile anything. I purposely put some errors in it to try to get those errors at compile time, but bazel doesn't actually compile anything. Here's a small example.
// test.h
This should not compile fdsafdsafdsa
int foo() { return 1; }
# BUILD
cc_library(
    name = "test",
    hdrs = ["test.h"],
)
$ bazel build :test
INFO: Analyzed target //:test (2 packages loaded, 3 targets configured).
INFO: Found 1 target...
Target //:test up-to-date (nothing to build)
INFO: Elapsed time: 0.083s, Critical Path: 0.00s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
Is this behavior intended?
I also ran the same experiment but split it into .h and .cc files, and in that case I got the error when I compiled.
cc_library (like other rules as well, incl. pkg_tar for instance) does not have to have any sources. This is also valid:
cc_library(
    name = "empty",
    srcs = [],
)
And it is actually quite useful, too. You may have configurable attributes such as deps (or srcs) where the actual content only applies under certain conditions:
cc_binary(
    name = "mybinary",
    srcs = ["main.c"],
    deps = select({
        ":platform1": [":some_plat1_only_lib"],
        ":platform2": [":empty"],  # as defined in the above snippet
    }),
)
Or (since above you could just as well have used [] for the :platform2 deps), where you have a larger tree and you expect developers to just depend on //somelib:somelib, you could use this empty library through an alias to give them a single label, without their having to worry about all the platform-specific details and about how a certain piece of functionality is provided where:
# somelib/BUILD:
alias(
    name = "somelib",
    actual = select({
        ":platform1": ":some_plat1_only_lib",
        ":platform2": ":empty",  # as defined in the above snippet
    }),
    visibility = ["//visibility:public"],  # set appropriately
)
And mybinary or any other target could now say:
cc_binary(
    name = "mybinary",
    srcs = ["main.c"],
    deps = ["//somelib"],
)
And of course, as the other answer here states, there are header only libraries.
Also, about the example you've used in your question: (bazel or not) you would normally not compile a header file on its own, and it would not be very useful either. You would only use its content, and only then see the compiler fail, as you attempt to build a source file the header is #included from. That is, for bazel build to fail, another target would have to depend on :test and #include "test.h".
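For example (the file and target names here are made up), a target like this would surface the error in test.h at build time:

```python
cc_library(
    name = "uses_test",
    srcs = ["uses_test.c"],  # a file containing: #include "test.h"
    deps = [":test"],        # the header-only library from the question
)
```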
A header only library means that you don't need to build anything. Simply include the headers you need in your program and use them.
For many reasons, I prefer Boost.UTF to gtest (or other alternatives).
I recently decided to use Bazel as my build system, and since I'm essentially at tutorial level, I looked online for a way to use Boost in Bazel, but none of the solutions seems to handle Boost.UTF. Since this library is not header-only (unlike the ones handled in https://github.com/nelhage/rules_boost), I am not sure how to proceed.
How can I add Boost.UTF to Bazel, so I can use it for my test modules?
Any hint is welcome, thanks.
P.S.
The only workaround I see is to install boost on the machine I build with and try to have Bazel use that. I guess that is how it deals with the standard libs anyway.
EDIT:
This is the code of my unit test.
btest.cpp
#define BOOST_TEST_MODULE CompactStateTest
#include <boost/test/included/unit_test.hpp>
BOOST_AUTO_TEST_SUITE(Suite1)
BOOST_AUTO_TEST_CASE(Test1)
{
int x(0);
BOOST_CHECK_EQUAL(x, 0);
}
BOOST_AUTO_TEST_SUITE_END()
BUILD (the "Makefile" for bazel)
cc_test(
    name = "btest",
    srcs = ["btest.cpp"],
    deps = ["@boost//:test"],
)
From bazel's (and cc_test's) point of view, a test is a binary that returns a non-zero exit code when it fails, and that possibly (but not obligatorily) writes an xml file with the test results to the location given by the XML_OUTPUT_FILE env var set at test execution time.
So your goal is to write a cc_test rule with all the deps set, so bazel can compile and run it. For that you will need to add a dependency on a cc_library for Boost.UTF. This will be a standard bazel cc_library with hdrs and srcs (and/or deps).
I'm anticipating your next question on how to depend on files installed on your local system; for that you will find local_repository (and its new_ variant) useful.
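As a rough sketch only (the install path and target names are assumptions, not a tested setup): since your btest.cpp uses boost/test/included/unit_test.hpp, the header-only variant of Boost.Test, a headers-only cc_library over a system-installed Boost may be enough:

```python
# WORKSPACE -- sketch; adjust path to where Boost is installed on your machine
new_local_repository(
    name = "boost",
    path = "/usr/include",
    build_file = "boost.BUILD",
)

# boost.BUILD -- sketch; the compiled (non-"included") Boost.Test variant
# would additionally need srcs or linkopts for libboost_unit_test_framework
cc_library(
    name = "test",
    hdrs = glob(["boost/**/*.hpp", "boost/**/*.ipp"]),
    visibility = ["//visibility:public"],
)
```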