Generating jacoco.exec with Bazel not permitted in paths other than /tmp - jacoco

In my BUILD.bazel my java_test looks like this:
java_test(
    name = "SomeServiceTest",
    srcs = [
        "src/test/java/com/service/SomeServiceTest.java",
    ],
    test_class = "com.service.SomeServiceTest",
    deps = [
        "SomeService",
        "@junit_junit//jar",
        "@commons_logging_commons_logging//jar",
        "@org_hamcrest_hamcrest_core//jar",
        "@com_fasterxml_jackson_core_jackson_annotations//jar",
        "@javax_servlet_javax_servlet_api//jar",
        "@org_springframework_spring_aop//jar",
        "@org_springframework_spring_beans//jar",
        "@org_springframework_spring_context//jar",
        "@org_springframework_spring_test//jar",
        "@org_springframework_spring_web//jar",
        "@org_mockito_mockito_core//jar",
        "@net_bytebuddy_byte_buddy//jar",
    ],
    size = "medium",
    jvm_flags = ["-javaagent:$$workspacepath/jacocoagent-runtime.jar=destfile=$$workspacepath/jacoco.exec"],
)
I want the path of jacocoagent-runtime.jar and the path where jacoco.exec is generated to be dynamic, hence the jvm_flags setup. I defined workspacepath in my bazel test invocation below:
bazel test --test_output=all --action_env=workspacepath=/Users/Someone/Desktop some-service:all_tests
Now, I am getting the error below:
java.io.FileNotFoundException: /Users/Someone/Desktop/jacoco.exec (Operation not permitted)
    at java.io.FileOutputStream.open0(Native Method)
    at java.io.FileOutputStream.open(FileOutputStream.java:270)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
    at org.jacoco.agent.rt.internal_290345e.output.FileOutput.openFile(FileOutput.java:67)
    at org.jacoco.agent.rt.internal_290345e.output.FileOutput.writeExecutionData(FileOutput.java:53)
    at org.jacoco.agent.rt.internal_290345e.Agent.shutdown(Agent.java:137)
    at org.jacoco.agent.rt.internal_290345e.Agent$1.run(Agent.java:54)
If I change the workspacepath to /tmp, it works fine. What is wrong with paths other than /tmp?

I agree with @Godin -- sounds like the input path is not in the sandbox. Does --spawn_strategy=standalone [1] help?
If that's indeed the problem, then to fix the build with sandboxing you need to make the .jar file an input of the java_test's action and reference its path correctly from the jvm_flags.
To do that (a sketch follows the steps):
- either create a new package in your workspace and copy the jacoco jar there, or add a new_local_repository rule to your WORKSPACE file that references the jar's directory and specifies the build_file_content attribute as exports_files(["jacoco-runtime.jar"])
- now that you can reference Jacoco by a label (e.g. @jacoco//:jacoco-runtime.jar), add it to the java_test rule's data attribute
- finally, change the java_test rule's jvm_flags attribute to reference the jar using $(location <label>), e.g. $(location @jacoco//:jacoco-runtime.jar)
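Put together, a minimal sketch of those three steps might look like the following. The repository name @jacoco, the directory path, and the jar file name are assumptions to adjust to your setup; /tmp is kept as the destfile location because, as noted above, it is writable from the sandbox:

# in WORKSPACE -- point Bazel at the directory that holds the agent jar
new_local_repository(
    name = "jacoco",
    path = "/Users/Someone/Desktop",
    build_file_content = 'exports_files(["jacoco-runtime.jar"])',
)

# in BUILD.bazel -- the jar is now a declared input of the test action,
# and $(location ...) expands to its correct path inside the sandbox
java_test(
    name = "SomeServiceTest",
    srcs = ["src/test/java/com/service/SomeServiceTest.java"],
    test_class = "com.service.SomeServiceTest",
    size = "medium",
    deps = ["SomeService"],  # plus the jar deps listed in the question
    data = ["@jacoco//:jacoco-runtime.jar"],
    jvm_flags = ["-javaagent:$(location @jacoco//:jacoco-runtime.jar)=destfile=/tmp/jacoco.exec"],
)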
[1] https://docs.bazel.build/versions/master/user-manual.html#flag--spawn_strategy

Related

Writeable data files in Bazel tests

I use Bazel and googletest in my C++ project. I'd like to write some tests that require opening some files with initial data and possibly modifying those files. I'd like the tests to not overwrite the original files obviously. My current (wrong) rules are like this:
filegroup(
    name = "test_data",
    srcs = glob(["test_data/*"]),
)

cc_test(
    name = "sample_test",
    srcs = ["sample_test.cc"],
    data = [":test_data"],
    deps = [ ... ],
)
In sample_test.cc I try to open a file from test_data/ with RW permissions. Running bazel test //sample_test fails, as open() in sample_test.cc returns EROFS (read-only filesystem). I can open the files read-only.
I found this: https://bazel.build/reference/test-encyclopedia#test-interaction-filesystem. It seems tests may only write to the specific TEST_TMPDIR directory. Is it possible, then, to make Bazel copy the test data files to this directory before running each test?
I guess I could create a fixture and copy the data files to the tmp directory, but this seems like a hack and I'd have to add this logic to every test file. It'd be much better to do it from Bazel build files directly.
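Absent rule-level support, that fixture idea is the usual workaround. A minimal sketch, assuming googletest and C++17 <filesystem>; the class name is illustrative, and the "test_data" path is relative to the test's runfiles, so prepend your package path if the BUILD file is not at the workspace root:

#include <cstdlib>
#include <filesystem>

#include "gtest/gtest.h"

namespace fs = std::filesystem;

class WritableDataTest : public ::testing::Test {
 protected:
  void SetUp() override {
    // TEST_TMPDIR is set by Bazel's test runner and is guaranteed writable.
    work_dir_ = fs::path(std::getenv("TEST_TMPDIR")) / "test_data";
    fs::remove_all(work_dir_);  // start each test from a fresh copy
    fs::copy("test_data", work_dir_, fs::copy_options::recursive);
  }

  fs::path work_dir_;  // tests open files read-write under this path
};

TEST_F(WritableDataTest, CanModifyCopiedFile) {
  // the runfiles originals stay read-only; only the copy is mutated
  ASSERT_TRUE(fs::exists(work_dir_));
}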

How can I implement a command line option in Bazel to switch between which dependency version is used for building?

For some background, the C++ program I am working on has the possibility to interoperate with some other applications that use various Protobuf versions. In the source code for my program, I have the compiled .pb.cc files from these other applications for the Protobuf interface. These .pb.cc files were compiled with a particular version of Protobuf, and I don't have any control over this. I am using Bazel to build, and I want to be able to specify a Bazel build configuration for my program that uses a particular version of Protobuf, matching the one used by whichever other application I need to interoperate with.
Originally, I wanted to put something in the .bazelrc file so that I can specify a particular version of Protobuf depending on the config, for example:
# in .bazelrc:
build:my_config --protobuf_version=3_20_1
build:my_other_config --protobuf_version=3_21_6
Then from the terminal, I could build with the command
bazel build --config=my_config //path/to/target:target
which would build as if I had typed
bazel build --protobuf_version=3_20_1 //path/to/target:target
At this point, I wanted to use the select() function, as detailed in the Bazel docs for Configurable Build Attributes, to use a particular Protobuf version during building. But the Protobuf dependencies are all specified in the WORKSPACE file, which is more limited than a BUILD file, and the select() function cannot be used there. So my next idea was to pull in every version of the Protobuf library I could possibly need, give them different names in the WORKSPACE file, and then use a select() function in the BUILD files to choose the correct version. But the Bazel rule for compiling a proto_library is used like this:
proto_library(
    name = "foo",
    srcs = ["foo.proto"],
    strip_import_prefix = "/foo/bar/baz",
)
I don't see any opportunity to use a select() function here to specify which Protobuf version's proto_library rule should be used. The proto_library rule itself is also loaded from the WORKSPACE file with:
load("#rules_proto//proto:repositories.bzl", "rules_proto_dependencies", "rules_proto_toolchains")
rules_proto_dependencies()
rules_proto_toolchains()
Now, I would say that I am stuck. I don't see a way to specify on the command line which version of Protobuf should be used with the proto_library rule.
In the end, I would like a way to do the equivalent in the WORKSPACE file of
# in WORKSPACE
if my_config:
    # specific protobuf version:
    http_archive(
        name = "com_google_protobuf",
        sha256 = "8b28fdd45bab62d15db232ec404248901842e5340299a57765e48abe8a80d930",
        strip_prefix = "protobuf-3.20.1",
        urls = ["https://github.com/protocolbuffers/protobuf/archive/v3.20.1.tar.gz"],
    )
elif my_other_config:
    # same as above, but with a different version
else:
    # same as above, but with the default version
According to some Google Groups discussion, this doesn't seem to be possible in the WORKSPACE file, so I would need to do it in a BUILD file, but the dependencies are specified in the WORKSPACE.
I figured out a way that works. It seems to go against Bazel's philosophy, but, most importantly, it does what I want.
Repository dependencies are loaded in the first of two steps, the first involving the WORKSPACE file and the second involving the BUILD files. Command line flags for the build cannot normally be passed directly to the WORKSPACE, but it is possible to get some information into the WORKSPACE by setting an environment variable and creating a repository_rule. In the WORKSPACE, this environment variable can be used, for example, to change the url argument to http_archive, which specifies the dependency version.
The repository rule is created in a separate .bzl file, which is then loaded in the WORKSPACE. As a generalized example of how to get environment variable values into the WORKSPACE, the following file my_repository_rule.bzl could be created:
# in file my_repository_rule.bzl
def _my_repository_rule_impl(repository_ctx):
    # read the particular environment variable we are interested in
    config = repository_ctx.os.environ.get("MY_CONFIG_ENV_VAR", "")

    # necessary to create an empty BUILD file for this rule,
    # which will be located somewhere in the Bazel build files
    repository_ctx.file("BUILD")

    # some logic to do something based on the value of the environment variable passed in:
    if config.lower() == "example_config_1":
        ADDITIONAL_INFO = "foo"
    elif config.lower() == "example_config_2":
        ADDITIONAL_INFO = "bar"
    else:
        ADDITIONAL_INFO = "baz"

    # create a file called config.bzl to be loaded into the WORKSPACE,
    # passing in any desired information from this rule implementation
    repository_ctx.file("config.bzl", content = """
MY_CONFIG = {}
ADDITIONAL_INFO = {}
""".format(repr(config), repr(ADDITIONAL_INFO)))

my_repository_rule = repository_rule(
    implementation = _my_repository_rule_impl,
    environ = ["MY_CONFIG_ENV_VAR"],
)
This can be used in the WORKSPACE as such:
# in file WORKSPACE
load("//:my_repository_rule.bzl", "my_repository_rule")

my_repository_rule(name = "local_my_repository_rule")

load("@local_my_repository_rule//:config.bzl", "MY_CONFIG", "ADDITIONAL_INFO")

print("MY_CONFIG = {}".format(MY_CONFIG))
print("ADDITIONAL_INFO = {}".format(ADDITIONAL_INFO))
When a target is built with bazel build, the WORKSPACE receives the value of MY_CONFIG_ENV_VAR from the environment and stores it in the Starlark variable MY_CONFIG, along with any other information determined in the rule implementation.
The environment variable can be passed by normal means, such as typing in a bash shell, for example:
MY_CONFIG_ENV_VAR=example_config_1 bazel build //path/to/target:target
It can also be passed with the --repo_env flag, which makes an extra environment variable available to repository rules, so the following is equivalent:
bazel build --repo_env=MY_CONFIG_ENV_VAR=example_config_1 //path/to/target:target
Switching between configurations can be made easier by including the following in the .bazelrc file:
# in file .bazelrc
build:my_config_1 --repo_env=MY_CONFIG_ENV_VAR=example_config_1
build:my_config_2 --repo_env=MY_CONFIG_ENV_VAR=example_config_2
So running bazel build --config=my_config_1 //path/to/target:target will show the debug output from the print statements in the WORKSPACE as follows:
MY_CONFIG = example_config_1
ADDITIONAL_INFO = foo
If ADDITIONAL_INFO in the rule implementation (in the file my_repository_rule.bzl) were set to a version number such as "3.20.1", then the WORKSPACE could, for example, use this in an http_archive call to pull the desired version of the dependency.
# in file WORKSPACE
if ADDITIONAL_INFO == "3.20.1":
    sha256 = "8b28fdd45bab62d15db232ec404248901842e5340299a57765e48abe8a80d930"

    http_archive(
        name = "com_google_protobuf",
        sha256 = sha256,
        strip_prefix = "protobuf-{}".format(ADDITIONAL_INFO),
        urls = ["https://github.com/protocolbuffers/protobuf/archive/v{}.tar.gz".format(ADDITIONAL_INFO)],
    )
Of course, the value of the sha256 kwarg could also be passed in from the repository rule as a separate string variable, or as part of a dictionary, for example.
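For instance, a variant of the rule implementation could emit one dictionary carrying everything the http_archive call needs. A sketch (the second sha256 is a placeholder, not a real digest):

# in my_repository_rule.bzl -- hypothetical dictionary-based variant
_PROTOBUF_INFO = {
    "example_config_1": {
        "version": "3.20.1",
        "sha256": "8b28fdd45bab62d15db232ec404248901842e5340299a57765e48abe8a80d930",
    },
    "example_config_2": {
        "version": "3.21.6",
        "sha256": "<sha256 of the 3.21.6 archive>",  # placeholder
    },
}

def _my_repository_rule_impl(repository_ctx):
    config = repository_ctx.os.environ.get("MY_CONFIG_ENV_VAR", "example_config_1")
    info = _PROTOBUF_INFO.get(config.lower(), _PROTOBUF_INFO["example_config_1"])
    repository_ctx.file("BUILD")
    # expose the whole dict to the WORKSPACE as a single variable
    repository_ctx.file("config.bzl", content = "PROTOBUF_INFO = {}".format(repr(info)))

The WORKSPACE would then read PROTOBUF_INFO["version"] and PROTOBUF_INFO["sha256"] instead of two separate strings.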

Conan package manager - how to remove folders during conan install?

I have a local conanfile.py to consume a package; the package is already in the local cache (~/.conan/).
In the conanfile.py there is the imports() function, in which I copy some files from the package into my build folder.
I have two files with the same name in different directories; I copy them into the same directory and rename one of them.
After I do that, I am left with an empty directory I want to remove, but I can't find a way to do so from conanfile.py; every attempt seems to remove the folder before the files get copied. My imports() looks as follows:
import os
import shutil

from conans import ConanFile

class SomeConanPkg(ConanFile):
    name = "SomeName"
    description = "SomeDesc"
    requires = "SomePkg/1.0.0.0@SomeRepo/stable"

    def imports(self):
        # build_dst is defined elsewhere in the recipe
        # copy of 1st file
        self.copy("somefile.dll", src="src", dst=build_dst)
        # copy of 2nd file to a nested directory
        self.copy("somefile.dll", src=os.path.join("src", "folder"),
                  dst=os.path.join(build_dst, "folder"))
        # move and rename the file to the parent directory
        shutil.copy2(os.path.join(build_dst, "folder", "somefile.dll"),
                     os.path.join(build_dst, "renamed_file.dll"))
        # now build_dst/folder is an empty directory
I have tried using Conan's tools.rmdir() or just calling shutil.rmtree(), but both seem to run before the files get copied.
I also tried adding package() or deploy() member functions and executing the removal inside, but these methods don't seem to run at all (verified with a debug print).
Any ideas?
I ended up solving it on the package-creation side:
I renamed the files as I wanted there and then just consumed them.
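For reference, a rough sketch of that package-side fix, assuming Conan 1.x (names mirror the question; package() here runs when the package is created, so consumers never see the duplicate folder):

# hypothetical conanfile.py of SomePkg itself (Conan 1.x)
import os
import shutil

from conans import ConanFile

class SomePkg(ConanFile):
    name = "SomePkg"
    version = "1.0.0.0"
    exports_sources = "src/*"

    def package(self):
        # package the first file as-is
        self.copy("somefile.dll", src="src", dst="bin")
        # package the nested duplicate next to it under a new name,
        # so no empty folder ever appears on the consumer side
        shutil.copy2(
            os.path.join(self.source_folder, "src", "folder", "somefile.dll"),
            os.path.join(self.package_folder, "bin", "renamed_file.dll"),
        )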
Try conan remove <package name>. If you do not know the exact package name, you can use conan search to see the list of packages before you run conan remove.
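For example, with the package reference from the question (Conan 1.x syntax; details vary between Conan versions):

conan search "SomePkg*"                           # list matching packages in the local cache
conan remove "SomePkg/1.0.0.0@SomeRepo/stable"    # asks for confirmation; add -f to force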

How to write files to current directory instead of bazel-out

I have the following directory structure:
my_dir
|
--> src
|     |
|     --> foo.cc
|     --> BUILD
|
--> WORKSPACE
|
--> bazel-out/ (symlink)
      |
      --> ...
src/BUILD contains the following code:
cc_binary(
    name = "foo",
    srcs = ["foo.cc"],
)
The file foo.cc creates a file named bar.txt in the regular way, using <fstream> utilities.
However, when I invoke Bazel with bazel run //src:foo, the file bar.txt is created and placed in bazel-out/darwin-fastbuild/bin/src/foo.runfiles/foo/bar.txt instead of my_dir/src/bar.txt, where the original source is.
I tried adding an outs field to the foo rule, but Bazel complained that outs is not a recognized attribute for cc_binary.
I also thought of creating a filegroup rule, but there is no deps field where I can declare foo as a dependency for those files.
How can I make sure that the files generated by running the cc_binary rule are placed in my_dir/src/bar.txt instead of bazel-out/...?
Bazel doesn't allow you to modify the state of your workspace, by design.
The short answer is that you don't want the results of past builds to modify the state of your workspace, and hence potentially the results of future builds. It would violate reproducibility if running Bazel multiple times on the same workspace produced different outputs.
Given your example: imagine calling bazel run //src:foo which inserts
#define true false
#define false true
at the top of the src/foo.cc. What happens if you call bazel run //src:foo again?
The long answer: https://docs.bazel.build/versions/master/rule-challenges.html#assumption-aim-for-correctness-throughput-ease-of-use-latency
Here's more information on the output directory: https://docs.bazel.build/versions/master/output_directories.html#documentation-of-the-current-bazel-output-directory-layout
There is a possible workaround using genrule. Below is an example where I use a genrule to copy a file into the .git folder.
genrule(
    name = "precommit",
    srcs = glob(["git/**"]),
    outs = ["precommit.txt"],
    # the folder containing this BUILD.bazel file is "tools", which gets symlinked;
    # we use `cd -P` to get to the physical path
    cmd = "echo 'setup pre-commit.sh' > $(OUTS) && cd -P tools && ./path/to/your-script.sh",
    local = 1,  # required
)
If you're passing the name of the output file when running, you can simply use absolute paths. To make this easier, you can use the realpath utility on Linux; on a Mac it is included in brew install coreutils. Then running it looks something like:
bazel run my_app_dir:binary_target -- --output_file=`realpath relative/path/to.output`
This has been discussed and explained in a Bazel issue; the recommendation is to use a tool external to Bazel:
As I understand the use-case, this is out-of-scope for building and in the scope of, perhaps, workspace configuration. What I'm sure of is that an external tool would be both easier and safer to write for this purpose, than to introduce such a deep design change to Bazel.
The tool would copy the files from the output tree into the source tree, and update a manifest file (also in the source tree) that lists the path-digest pairs. The sources and the manifest file would all be versioned. A genrule or a sh_test would depend on the file-generating genrules, as well as on this manifest file, and compare the file-generating genrules' outputs' digests (in the output tree) to those in the manifest file, and would fail if there's a mismatch. In that case the user would need to run the external tool, thus update the source tree and the manifest, then rerun the build, which is the same workflow as you described, except you'd run this tool instead of bazel regenerate-autogenerated-sources.
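A rough sketch of the check described in that quote, with hypothetical target and file names (:generate is assumed to be the file-producing genrule, and manifest.txt holds sha256sum-style digest/path pairs, both versioned in the source tree):

# in BUILD
sh_test(
    name = "check_manifest",
    srcs = ["check_manifest.sh"],
    data = [
        ":generate",      # the generated outputs, from the output tree
        "manifest.txt",   # the versioned path-digest pairs
    ],
)

# check_manifest.sh would run `sha256sum --check manifest.txt`, failing the
# test whenever a generated file no longer matches its recorded digest.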

Building .proto and moving the .h

I am trying to generate the C++ files from my .proto.
I was able to do it, but they are generated in the same folder. After this I need to move the .h file into my include directory.
At the moment I am using the Protoc builder, but there is no option to deliver the .h into a different folder, so I tried to do a Command with a Move afterwards, something like:
proto_files = localenv.Protoc(
    [],
    protoList,
    PROTOCPROTOPATH = [builddir],
    PROTOCPYTHONOUTDIR = None,  # set to None to not generate python
    PROTOCOUTDIR = builddir,    # defaults to same directory as .proto
    # PROTOCCPPOUTFLAGS = "dllexport_decl=PROTOCONFIG_EXPORT:", too
)

localenv.Command(proto_files[1], proto_files[1],
    [
        Move("$SRC", incFolder + "/$TARGET"),
    ])
but when I run scons I get the following error:
scons: *** Two environments with different actions were specified for the same target:
Any idea?
You can't have a Command (or any Builder) with the target and source being the same. How would SCons know whether it was up to date, i.e. whether that builder needs to run or not?
Maybe try:
tgt = localenv.Command(os.path.join(incFolder, proto_files[1]), proto_files[1],
    [
        Move("$SRC", "$TARGET"),
    ])
If that doesn't work, please add the full error message to your question (this time including the target name).
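As a side note, SCons's built-in Install builder handles this copy-into-directory pattern directly. Assuming the same incFolder and proto_files variables, the line below may be simpler; Install copies rather than moves, which also leaves the generated file in place for any other dependents:

tgt = localenv.Install(incFolder, proto_files[1])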