In the process of building a project with Mix, I need to place the result of the build in another directory.
Mix normally places the build artefacts in /project_path/_build.
I cannot write anything to /project_path during the actual build.
Can I change the output dir? Is this something that can be easily adjusted?
Per the Mix.Project documentation, you can specify a :build_path entry in the project keyword list to override the default _build directory.
Example:
def project do
  [app: :my_app,
   version: "0.0.1",
   elixir: "~> 1.2",
   build_embedded: Mix.env == :prod,
   start_permanent: Mix.env == :prod,
   build_path: "custom_build_dir",
   deps: deps]
end
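Since the project directory itself isn't writable here, note that build_path can also be an absolute path outside the project tree. A minimal sketch, assuming /tmp/my_app_build is a writable location (the exact path is an assumption):
def project do
  [app: :my_app,
   version: "0.0.1",
   elixir: "~> 1.2",
   # assumed writable location outside /project_path
   build_path: "/tmp/my_app_build",
   deps: deps]
end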
For some background, the C++ program I am working on can interoperate with some other applications that use various Protobuf versions. In the source code for my program, I have the compiled .pb.cc files from these other applications for the Protobuf interface. These .pb.cc files were compiled with a particular version of Protobuf, and I don't have any control over that. I am using Bazel to build, and I want to be able to specify a Bazel build configuration for my program which uses the particular version of Protobuf that matches one of the other applications.
Originally, I wanted to put something in the .bazelrc file so that I can specify a particular version of Protobuf depending on the config, for example:
# in .bazelrc:
build:my_config --protobuf_version=3_20_1
build:my_other_config --protobuf_version=3_21_6
Then from the terminal, I could build with the command
bazel build --config=my_config //path/to/target:target
which would build as if I had typed
bazel build --protobuf_version=3_20_1 //path/to/target:target
At this point, I wanted to use the select() function, as detailed in the Bazel docs for Configurable Build Attributes, to use a particular Protobuf version during the build. But the Protobuf dependencies are all specified in the WORKSPACE file, which is more limited than a BUILD file, and select() cannot be used there. So my next idea was to pull in every version of the Protobuf library I could possibly need, give each a different name in the WORKSPACE file, and then use a select() in the BUILD files to choose the correct version. But the Bazel rule for compiling a proto_library is used like this:
proto_library(
    name = "foo",
    srcs = ["foo.proto"],
    strip_import_prefix = "/foo/bar/baz",
)
I don't see any opportunity to use a select() function here to specify which Protobuf version's proto_library rule should be used. The proto_library rule is also loaded from the WORKSPACE file with:
load("#rules_proto//proto:repositories.bzl", "rules_proto_dependencies", "rules_proto_toolchains")
rules_proto_dependencies()
rules_proto_toolchains()
Now, I would say that I am stuck. I don't see a way to specify on the command line which version of Protobuf should be used with the proto_library rule.
In the end, I would like a way to do the equivalent in the WORKSPACE file of
# in WORKSPACE
if my_config:
    # specific protobuf version:
    http_archive(
        name = "com_google_protobuf",
        sha256 = "8b28fdd45bab62d15db232ec404248901842e5340299a57765e48abe8a80d930",
        strip_prefix = "protobuf-3.20.1",
        urls = ["https://github.com/protocolbuffers/protobuf/archive/v3.20.1.tar.gz"],
    )
elif my_other_config:
    # same as above, but with a different version
else:
    # same as above, but with the default version
According to some Google Groups discussion, this doesn't seem to be possible in the WORKSPACE file, so I would need to do it in a BUILD file, but the dependencies are specified in the WORKSPACE.
I figured out a way that works; it seems to go against Bazel's philosophy, but, most importantly, it does what I want.
The repository dependencies are loaded in the first of two phases: the first involves the WORKSPACE file, and the second involves the BUILD files. Command line flags for the build cannot normally be passed directly to the WORKSPACE, but it is possible to get some information into the WORKSPACE by setting an environment variable and creating a repository_rule. In the WORKSPACE, this environment variable can then be used, for example, to change the url argument to http_archive, which specifies the dependency version.
This repository rule is created in a separate .bzl file, which is then loaded in the WORKSPACE. As a generalized example of how to get environment variable values into the WORKSPACE, the following file my_repository_rule.bzl could be created:
# in file my_repository_rule.bzl

def _my_repository_rule_impl(repository_ctx):
    # read the particular environment variable we are interested in
    config = repository_ctx.os.environ.get("MY_CONFIG_ENV_VAR", "")

    # necessary to create an empty BUILD file for this rule,
    # which will be located somewhere in the Bazel build files
    repository_ctx.file("BUILD")

    # some logic to do something based on the value of the environment variable passed in:
    if config.lower() == "example_config_1":
        ADDITIONAL_INFO = "foo"
    elif config.lower() == "example_config_2":
        ADDITIONAL_INFO = "bar"
    else:
        ADDITIONAL_INFO = "baz"

    # create a file called config.bzl to be loaded into the WORKSPACE,
    # passing in any desired information from this rule implementation
    repository_ctx.file("config.bzl", content = """
MY_CONFIG = {}
ADDITIONAL_INFO = {}
""".format(repr(config), repr(ADDITIONAL_INFO)))

my_repository_rule = repository_rule(
    implementation = _my_repository_rule_impl,
    environ = ["MY_CONFIG_ENV_VAR"],
)
This can be used in the WORKSPACE as such:
# in file WORKSPACE
load("//:my_repository_rule.bzl", "my_repository_rule")

my_repository_rule(name = "local_my_repository_rule")

load("@local_my_repository_rule//:config.bzl", "MY_CONFIG", "ADDITIONAL_INFO")

print("MY_CONFIG = {}".format(MY_CONFIG))
print("ADDITIONAL_INFO = {}".format(ADDITIONAL_INFO))
When a target is built with bazel build, the WORKSPACE will receive the value of MY_CONFIG_ENV_VAR from the terminal and store it in the Starlark variable MY_CONFIG, along with any other additional information determined in the rule implementation.
The environment variable can be passed by normal means, such as typing in a bash shell, for example:
MY_CONFIG_ENV_VAR=example_config_1 bazel build //path/to/target:target
It can also be passed with the --repo_env flag, which makes an extra environment variable available to the repository rules, so the following is equivalent:
bazel build --repo_env=MY_CONFIG_ENV_VAR=example_config_1 //path/to/target:target
This can be made easier to switch between by including the following in the .bazelrc file:
# in file .bazelrc
build:my_config_1 --repo_env=MY_CONFIG_ENV_VAR=example_config_1
build:my_config_2 --repo_env=MY_CONFIG_ENV_VAR=example_config_2
So running bazel build --config=my_config_1 //path/to/target:target will show the debug output from the print statements in WORKSPACE as the following:
MY_CONFIG = example_config_1
ADDITIONAL_INFO = foo
If ADDITIONAL_INFO in the rule implementation (in the file my_repository_rule.bzl) were set to a version number such as "3.20.1", then the WORKSPACE could, for example, use this in an http_archive call to pull the desired version of the dependency.
# in file WORKSPACE
if ADDITIONAL_INFO == "3.20.1":
    sha256 = "8b28fdd45bab62d15db232ec404248901842e5340299a57765e48abe8a80d930"

http_archive(
    name = "com_google_protobuf",
    sha256 = sha256,
    strip_prefix = "protobuf-{}".format(ADDITIONAL_INFO),
    urls = ["https://github.com/protocolbuffers/protobuf/archive/v{}.tar.gz".format(ADDITIONAL_INFO)],
)
Of course, the value of the sha256 kwarg could also be passed in from the repository rule as a separate string variable, or as part of a dictionary, for example.
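As a rough illustration of that dictionary variant (the version table, the second hash placeholder, and the extra config.bzl variable names below are assumptions, not part of the setup above), the rule implementation could emit the version and hash together:
# hypothetical addition inside _my_repository_rule_impl in my_repository_rule.bzl
PROTOBUF_VERSIONS = {
    # config name -> (version, sha256); the 3.21.6 hash is a placeholder to fill in
    "example_config_1": ("3.20.1", "8b28fdd45bab62d15db232ec404248901842e5340299a57765e48abe8a80d930"),
    "example_config_2": ("3.21.6", "<sha256 of protobuf-3.21.6>"),
}
version, sha = PROTOBUF_VERSIONS.get(config.lower(), PROTOBUF_VERSIONS["example_config_1"])
repository_ctx.file("config.bzl", content = """
PROTOBUF_VERSION = {}
PROTOBUF_SHA256 = {}
""".format(repr(version), repr(sha)))
The WORKSPACE could then consume it without any version-specific branching:
# hypothetical WORKSPACE usage
load("@local_my_repository_rule//:config.bzl", "PROTOBUF_VERSION", "PROTOBUF_SHA256")

http_archive(
    name = "com_google_protobuf",
    sha256 = PROTOBUF_SHA256,
    strip_prefix = "protobuf-{}".format(PROTOBUF_VERSION),
    urls = ["https://github.com/protocolbuffers/protobuf/archive/v{}.tar.gz".format(PROTOBUF_VERSION)],
)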
I'm currently messing around with Google's Mediapipe, which uses Bazel as a build tool. The folder has the following structure:
mediapipe
├ mediapipe
|  └ examples
|     └ desktop
|        └ hand_tracking
|           └ BUILD
├ calculators
|  └ tensor
|     ├ tensor_to_landmarks_calculator.cc
|     └ BUILD
└ WORKSPACE
There are a bunch of other files in there as well, but they are rather irrelevant to this problem. They can be found in the git repo linked above if you need them.
I'm at a stage where I can build and run the hand_tracking example without any problems. Now, I want to include the cereal library in the build, so that I can use #include <cereal/archives/binary.hpp> from within tensors_to_landmarks_calculator.cc. The cereal library is located at C:\cereal, but can be moved to other locations if it simplifies the process.
Basically, I'm looking for the Bazel equivalent of adding a path to Additional Include Directories in Visual Studio.
How would I need to modify the WORKSPACE and BUILD files in order to include the library in my project, assuming they are in a default state?
Unfortunately, this official doc page only covers single-file libraries, and other approaches I tried kept giving me "File could not be found" errors at build time.
Thanks in advance!
First you have to tell Bazel about the code living "outside" the workspace area. It needs to know how to find it, how to build it, what to call it, etc. These are known as remote repositories. They can be local to your disk (outside the Bazel workspace area), or actually remote on another machine or server, like GitHub. The important thing is that it must be described to Bazel with enough information for it to use.
As most third party code does not come with BUILD.bazel files, you may need to provide one yourself and tell Bazel "use this as if it were a build file found in that code."
For a local directory outside your bazel project
Add a repository rule like this to your WORKSPACE file:
# This could go in your WORKSPACE file
# (But prefer the http_archive solution below)
new_local_repository(
    name = "cereal",
    build_file = "//third_party:cereal.BUILD.bazel",
    path = "<path-to-directory>",
)
("new_local_repository" is built-in to bazel)
Somewhere under your Bazel WORKSPACE area you'll also need to make a cereal.BUILD.bazel file and export it from the package. (I chose a directory called //third_party, but you can put it anywhere else and name it anything else, as long as the repository rule provides a proper Bazel label for it.) The contents might look like this:
# contents of //third_party/cereal.BUILD.bazel
cc_library(
    name = "cereal-lib",
    srcs = glob(["**/*.hpp"]),
    includes = ["include"],
    visibility = ["//visibility:public"],
)
Bazel will pretend this was the BUILD file that "came with" the remote repository, even though it's actually local to your repo. When Bazel fetches this remote repository's code it copies it, and the BUILD file you provide, into its external area for caching, building, etc.
To make //third_party:cereal.BUILD.bazel a valid target in your directory, add a BUILD.bazel file to that directory:
# contents of //third_party/BUILD.bazel
exports_files(["cereal.BUILD.bazel"])
Without exporting it, you won't be able to refer to the build file from your repository rule.
Local disk repositories aren't very portable, since people may have different versions installed, it's not very hermetic (making it hard to share build caches with others), and it requires everyone to put the library in the same place; that kind of setup can be problematic. It will also fail when you mix operating systems, e.g. if you refer to it as "C:...".
Downloading a tarball of the library from github, for example
A better way is to download a fixed version from GitHub, for example, and let Bazel manage it for you in its external area:
http_archive(
    name = "cereal",
    sha256 = "329ea3e3130b026c03a4acc50e168e7daff4e6e661bc6a7dfec0d77b570851d5",
    urls = ["https://github.com/USCiLab/cereal/archive/refs/tags/v1.3.0.tar.gz"],
    build_file = "//third_party:cereal.BUILD.bazel",
)
The sha256 is important: Bazel downloads the archive, computes its hash, compares it to what you specified, and caches the result. In the future, it won't re-download the archive if the local file's hash matches.
Notice it again says build_file = "//third_party:cereal.BUILD.bazel"; all the same things from new_local_repository above apply here. Make sure you provide the build file for it to use, and export it from where you put it.
To test that the remote repository is set up OK, issue this on the command line:
bazel fetch @cereal//:cereal-lib
I sometimes have to clear it out to make it try again, if my rule isn't quite right, but the "bad" version sticks around.
bazel clean --expunge
will remove it, but might be overkill.
Finally
We have:
defined a remote repository called @cereal
defined a target in it called cereal-lib
the target is thus @cereal//:cereal-lib
To use it
Go to the package where you would like to include cereal, and add a dependency on this repository to the rule that builds the C++ code that wants to use cereal. That is, in your case, in the BUILD rule that causes tensor_to_landmarks_calculator.cc to get built, add:
deps = [
    "@cereal//:cereal-lib",
    ...
]
And then in your c++ code:
#include "cereal/cereal.hpp"
That should do it.
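For illustration only, a minimal sketch of how the consuming rule might end up looking (the target name, rule kind, and other attributes here are assumptions about the Mediapipe BUILD file, not its actual contents):
# hypothetical excerpt from the BUILD file next to tensor_to_landmarks_calculator.cc
cc_library(
    name = "tensor_to_landmarks_calculator",
    srcs = ["tensor_to_landmarks_calculator.cc"],
    deps = [
        "@cereal//:cereal-lib",
        # ... the calculator's existing deps stay here ...
    ],
)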
I have the following directory structure:
my_dir
|
--> src
| |
| --> foo.cc
| --> BUILD
|
--> WORKSPACE
|
--> bazel-out/ (symlink)
|
| ...
src/BUILD contains the following code:
cc_binary(
    name = "foo",
    srcs = ["foo.cc"],
)
The file foo.cc creates a file named bar.txt in the usual way, using <fstream> utilities.
However, when I invoke Bazel with bazel run //src:foo the file bar.txt is created and placed in bazel-out/darwin-fastbuild/bin/src/foo.runfiles/foo/bar.txt instead of my_dir/src/bar.txt, where the original source is.
I tried adding an outs field to the foo rule, but Bazel complained that outs is not a recognized attribute for cc_binary.
I also thought of creating a filegroup rule, but there is no deps field where I can declare foo as a dependency for those files.
How can I make sure that the files generated by running the cc_binary rule are placed in my_dir/src/bar.txt instead of bazel-out/...?
Bazel doesn't allow you to modify the state of your workspace, by design.
The short answer is that you don't want the results of the past builds to modify the state of your workspace, hence potentially modifying the results of the future builds. It'll violate reproducibility if running Bazel multiple times on the same workspace results in different outputs.
Given your example: imagine calling bazel run //src:foo which inserts
#define true false
#define false true
at the top of the src/foo.cc. What happens if you call bazel run //src:foo again?
The long answer: https://docs.bazel.build/versions/master/rule-challenges.html#assumption-aim-for-correctness-throughput-ease-of-use-latency
Here's more information on the output directory: https://docs.bazel.build/versions/master/output_directories.html#documentation-of-the-current-bazel-output-directory-layout
There is a possible workaround using genrule. Below is an example where I use a genrule to copy a file into the .git folder.
genrule(
    name = "precommit",
    srcs = glob(["git/**"]),
    outs = ["precommit.txt"],
    # the folder containing this BUILD.bazel file is tools, which will be symlinked;
    # we use cd -P to get to the physical path
    cmd = "echo 'setup pre-commit.sh' > $(OUTS) && cd -P tools && ./path/to/your-script.sh",
    local = 1,  # required
)
If you're passing the name of the output file in when running, you can simply use absolute paths. To make this easier, you can use the realpath utility if you're on Linux; on a Mac it is included in brew install coreutils. Then running it looks something like:
bazel run my_app_dir:binary_target -- --output_file=`realpath relative/path/to.output`
This has been discussed and explained in a Bazel issue. The recommendation is to use a tool external to Bazel:
As I understand the use-case, this is out-of-scope for building and in the scope of, perhaps, workspace configuration. What I'm sure of is that an external tool would be both easier and safer to write for this purpose, than to introduce such a deep design change to Bazel.
The tool would copy the files from the output tree into the source tree, and update a manifest file (also in the source tree) that lists the path-digest pairs. The sources and the manifest file would all be versioned. A genrule or a sh_test would depend on the file-generating genrules, as well as on this manifest file, and compare the file-generating genrules' outputs' digests (in the output tree) to those in the manifest file, and would fail if there's a mismatch. In that case the user would need to run the external tool, thus update the source tree and the manifest, then rerun the build, which is the same workflow as you described, except you'd run this tool instead of bazel regenerate-autogenerated-sources.
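As a rough sketch of the comparison half of that workflow (the file and target names are assumptions, and it uses bazel-skylib's diff_test instead of a manifest of digests), you could keep a checked-in copy of each generated file and fail the build when it drifts:
# hypothetical BUILD excerpt; assumes bazel-skylib is already set up in the WORKSPACE
load("@bazel_skylib//rules:diff_test.bzl", "diff_test")

genrule(
    name = "gen_bar",
    outs = ["bar.generated.txt"],
    cmd = "echo 'generated contents' > $@",
)

# fails if the checked-in copy (bar.txt) no longer matches the generated output;
# when it fails, rerun the external copy tool to refresh the source tree, then rebuild
diff_test(
    name = "bar_up_to_date_test",
    file1 = ":bar.generated.txt",
    file2 = "bar.txt",
)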
Hopefully a low-ball conceptual question. I'm having trouble understanding the modules option of r.js build configs. I want to build multiple modules with nested dependencies with one r.js build config.
Say I have the following project structure:
+---build
\---src
    +---moduleOne
    |   +---moduleOne.js
    |   \---dependecyForModuleOne.js
    +---moduleTwo
    |   +---moduleOne.js
    |   \---dependecyForModuleTwo.js
    +---buildConfig.js
    +---devModuleConfig.js
    \---prodModuleConfig.js
devModuleConfig and prodModuleConfig are the dev and prod runtime configs, and buildConfig.js is the r.js build config.
Now, I can build moduleOne no problem using this config:
({
    "baseUrl": "./",
    "name": "moduleOne/moduleOne",
    "out": "../build/moduleOneBundle.js",
    mainConfigFile: 'devModuleConfig.js',
    optimize: 'none'
})
I end up with a bundle in build that I can run after specifying different paths in the build config:
+---build
    \---moduleOneBundle.js
I want to build two modules, so I specify moduleOne using the modules config option:
({
    "baseUrl": "./",
    modules: [
        {
            "name": "moduleOne/moduleOne",
            "out": "../build/moduleOne.js"
        }
    ],
    dir: "../build", // <-- r.js says I need to add this. why?
    mainConfigFile: 'devModuleConfig.js',
    optimize: 'none'
})
Besides having to add the dir option, I get all my configs copied into the build dir even though I did not specify them, I do not get my bundled module, and I get a text file containing the r.js build output. In fact, build ends up looking exactly the same as src:
+---build
    +---moduleOne
    |   +---moduleOne.js
    |   \---dependecyForModuleOne.js
    +---moduleTwo
    |   +---moduleOne.js
    |   \---dependecyForModuleTwo.js
    +---build.txt
    +---buildConfig.js
    +---devModuleConfig.js
    \---prodModuleConfig.js
How do I configure multiple modules to build using one r.js config? I've read the docs a few times and can't get my head around it.
You can see my project that contains all this here: https://github.com/sennett/r.js-multiple-modules
When you specify modules, r.js copies everything under baseUrl into dir and then optimizes the modules listed in modules according to their rules; that is why dir is required and why build ends up mirroring src. Setting removeCombined: true deletes from the output any files that were combined into another module, and any remaining unwanted build artefacts can then be cleaned up by deleting them manually. A sketch of such a config follows below.
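A minimal sketch of what that multi-module build config could look like, based on the layout above (the moduleTwo/moduleTwo entry point is an assumption, since the tree lists moduleOne.js inside moduleTwo):
// hypothetical buildConfig.js for a multi-module build
({
    baseUrl: "./",
    dir: "../build",              // whole-project output directory, required with modules
    mainConfigFile: "devModuleConfig.js",
    optimize: "none",
    removeCombined: true,         // drop files that were folded into a module's bundle
    modules: [
        { name: "moduleOne/moduleOne" },
        { name: "moduleTwo/moduleTwo" }
    ]
})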
I've recently picked up SCons to implement a multi-platform build framework for a medium sized C++ project. The build generates a bunch of unit tests which should be invoked at the end of it all. How does one achieve that sort of thing?
For example in my top level sconstruct, I have
subdirs = ['list', 'of', 'my', 'subprojects']
for subdir in subdirs:
    SConscript(dirs=subdir, exports='env', name='sconscript',
               variant_dir=subdir+os.sep+'build'+os.sep+mode, duplicate=0)
Each of the subdirs has its unit tests; however, since there are dependencies between the DLLs and executables built inside them, I want to hold off running the tests until all the subdirs have been built and installed (I mean, using env.Install).
Where should I write the loop that iterates through the built tests and executes them? I tried putting it just after this loop, but since SCons doesn't let you control the order of execution, it gets executed well before I want it to.
Please help a scons newbie. :)
thanks,
SCons, like Make, uses a declarative approach to solving the build problem. You don't want to tell SCons how to do its job. You want to document all the dependencies and then let SCons solve how it builds everything.
If something is being executed before something else, you need to create and hook up the dependencies.
If you want to create dummy touch files, you can create a custom builder like this:
import os
import time

def action(target, source, env):
    os.system('echo here I am running other build')
    dmy_fh = open('dmy_file', 'w')
    dmy_fh.write('Dummy dependency file created at %4d.%02d.%02d %02dh%02dm%02ds\n' % time.localtime()[0:6])
    dmy_fh.close()

bldr = Builder(action=action)
env.Append(BUILDERS={'SubBuild': bldr})
env.SubBuild(tgts, srcs)  # builder calls take the target(s) first, then the source(s)
It is very important to put the timestamp into the dummy file, because SCons uses MD5 hashes. If you have an empty file, the MD5 will always be the same and SCons may decide not to run subsequent build steps. If you need to generate different tweaks on a basic command, you can use function factories to modify a template, e.g.:
def gen_a_echo_cmd_func(echo_str):
    def cmd_func(target, source, env):
        cmd = 'echo %s' % echo_str
        print(cmd)
        os.system(cmd)
    return cmd_func

bldr = Builder(action=gen_a_echo_cmd_func('hi'))
env.Append(BUILDERS={'Hi': bldr})
env.Hi(tgts, srcs)

bldr = Builder(action=gen_a_echo_cmd_func('bye'))
env.Append(BUILDERS={'Bye': bldr})
env.Bye(tgts, srcs)
If you have something that you want to automatically inject into the scons build flow ( e.g. something that compresses all your build log files after everything else has run ), see my question here.
The solution should be as simple as this.
Make the result of the Test builders depend on the result of the Install builder.
In pseudocode:
test = Test(dlls)
result = Install(dlls)
Depends(test, result)
The best way would be if the Test builder actually worked out the dll dependencies for you, but there may be all kinds of reasons it doesn't do that.
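As a concrete sketch of that idea (the file names, the '#install/bin' directory, and the dlls variable are assumptions; SCons has no built-in Test builder, so a Command is used to run the tests):
# hypothetical SConscript excerpt
test_prog = env.Program('unit_tests', ['unit_tests.cc'])
installed = env.Install('#install/bin', dlls)

# run the test binary and write a marker file so SCons can track that the tests passed
run_tests = env.Command('unit_tests.passed', test_prog,
                        '$SOURCE && echo passed > $TARGET')

# make sure every DLL is installed before the tests execute
Depends(run_tests, installed)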
In terms of dependencies, what you want is for all the test actions to depend on all the program-build actions. One way of doing this is to create and export a dummy target to all the subdirectories' SConscript files; in each SConscript, make the dummy target depend on the main targets, and have the test targets depend on the dummy target.
I'm having a bit of trouble figuring out how to set up the dummy target, but this basically works:
(in top-level SConstruct)
dummy = env.Command('.all_built', 'SConstruct', 'echo Targets built. > $TARGET')
Export('dummy')
(in each sub-directory's SConscript)
Import('dummy')
for target in target_list:
    Depends(dummy, target)
for test in test_list:
    Depends(test, dummy)
I'm sure further refinements are possible, but maybe this'll get you started.
EDIT: also worth pointing out this page on the subject.
Just have each SConscript return a value on which you will build dependencies.
SConscript file:
test = debug_environment.Program('myTest', src_files)
Return('test')
SConstruct file:
dep1 = SConscript([...])
dep2 = SConscript([...])
Depends(dep1, dep2)
Now dep1 build will complete after dep2 build has completed.