I have a Module A that includes some sort of hardware-specific header file Io.h. Now I have a different Module B that depends on A, but wants to replace the header file Io.h needed by Module A with a stubbed one, located in the package directory of B.
Is there any way to "overwrite" A's dependency on Io.h so that it uses the one provided by Module B?
WORKSPACE
|
|-A
|--A.c
|--Io.h
|-B
|--B.c
|--Io.h
I understand that in this case I should probably generate two targets from A: one depending on the local Io.h and one depending on the Io.h from B. However, if the dependency on Io.h is nested deeply in A's dependencies instead of being a direct include, I would have to change every intermediate dependency as well, which is something I neither want to do nor can do.
I solved this problem by using a select() in the target rule of B:
If a specific config setting is matched, Io.h gets replaced.
In A/BUILD:
cc_library(
    name = "StubLib",
    hdrs = ["Io.h"],
    visibility = ["//visibility:public"],
)
In B/BUILD:
cc_library(
    name = "B",
    srcs = ["B.c"],
    hdrs = ["B.h"],
    deps = select({
        "//conditions:default": [":io_h"],
        "//custom_config": ["//A:StubLib"],
    }),
    visibility = ["//visibility:public"],
)

cc_library(
    name = "io_h",
    hdrs = ["Io.h"],
)
I'm currently messing around with Google's Mediapipe, which uses Bazel as a build tool. The folder has the following structure:
mediapipe
├ mediapipe
| └ examples
| └ desktop
| └ hand_tracking
| └ BUILD
├ calculators
| └ tensor
| └ tensor_to_landmarks_calculator.cc
| └ BUILD
└ WORKSPACE
There are a bunch of other files in there as well, but they are rather irrelevant to this problem. They can be found in the git repo linked above if you need them.
I'm at a stage where I can build and run the hand_tracking example without any problems. Now, I want to add the cereal library to the build, so that I can use #include <cereal/archives/binary.hpp> from within tensors_to_landmarks_calculator.cc. The cereal library is located at C:\cereal, but can be moved elsewhere if that simplifies the process.
Basically, I'm looking for the Bazel equivalent of adding a path to Additional Include Directories in Visual Studio.
How would I need to modify the WORKSPACE and BUILD files in order to include the library in my project, assuming they are in a default state?
Unfortunately, this official doc page only covers one-file libraries, and other approaches kept giving me "File could not be found" errors at build time.
Thanks in advance!
First you have to tell Bazel about the code living "outside" the
workspace area. It needs to know how to find it, how to build it,
what to call it, and so on. These are known as remote repositories.
They can be local to your disk (outside the Bazel workspace area), or
actually remote on another machine or server, like GitHub. The
important thing is that it must be described to Bazel with enough
information for Bazel to use it.
As most third party code does not come with BUILD.bazel files, you may
need to provide one yourself and tell Bazel "use this as if it was a
build file found in that code."
For a local directory outside your bazel project
Add a repository rule like this to your WORKSPACE file:
# This could go in your WORKSPACE file
# (But prefer the http_archive solution below)
new_local_repository(
    name = "cereal",
    build_file = "//third_party:cereal.BUILD.bazel",
    path = "<path-to-directory>",
)
("new_local_repository" is built into Bazel)
Somewhere under your Bazel WORKSPACE area you'll also need to create a
cereal.BUILD.bazel file and export it from its package. I chose a
directory called //third_party, but you can put it anywhere else and
name it anything else, as long as the repository rule provides a
proper Bazel label for it. The contents might look like this:
# contents of //third_party/cereal.BUILD.bazel
cc_library(
    name = "cereal-lib",
    srcs = glob(["**/*.hpp"]),
    includes = ["include"],
    visibility = ["//visibility:public"],
)
Bazel will pretend this was the BUILD file that "came with" the remote
repository, even though it's actually local to your repo. When Bazel fetches this remote repository's code, it copies it, along with the BUILD file you provide, into its external area for caching, building, etc.
To make //third_party:cereal.BUILD.bazel a valid target in your directory, add a BUILD.bazel file to that directory:
# contents of //third_party/BUILD.bazel
exports_files([
    "cereal.BUILD.bazel",
])
Without exporting it, you won't be able to refer to the build file from your repository rule.
Local disk repositories aren't very portable: people may have
different versions installed; it's not very hermetic (making it hard
to share build caches with others); it requires everyone to put the
code in the same place; and it will fail when you mix operating
systems if you refer to it as "C:..."
Downloading a tarball of the library from github, for example
A better way is to download a fixed version from github, for example,
and let Bazel manage it for you in its external area:
# http_archive must be loaded before use
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "cereal",
    sha256 = "329ea3e3130b026c03a4acc50e168e7daff4e6e661bc6a7dfec0d77b570851d5",
    urls = ["https://github.com/USCiLab/cereal/archive/refs/tags/v1.3.0.tar.gz"],
    build_file = "//third_party:cereal.BUILD.bazel",
)
The sha256 is important: Bazel downloads the archive, computes its sha256, compares it to what you specified, and caches the result. In the future, it won't re-download the archive if the local file's sha matches.
Notice that it again says build_file = "//third_party:cereal.BUILD.bazel"; everything
said about new_local_repository above applies here as well. Make sure you provide the build file for it to use, and export it from wherever you put it.
To test that the remote repository is set up correctly, issue this on
the command line:
bazel fetch @cereal//:cereal-lib
If my rule isn't quite right, the "bad" version sticks around, and I sometimes have to clear it out to make Bazel try again.
bazel clean --expunge
will remove it, but might be overkill.
Finally
We have:
defined a remote repository called @cereal
defined a target in it called cereal-lib
the target is thus @cereal//:cereal-lib
To use it
Go to the package where you would like to use cereal, and add a
dependency on this repository to the rule that builds the C++ code. That is, in your case, to the BUILD rule that causes tensor_to_landmarks_calculator.cc to be built, add:
deps = [
    "@cereal//:cereal-lib",
    ...
]
And then in your c++ code:
#include "cereal/cereal.hpp"
That should do it.
I'm writing a large OCaml project. I wrote a file foo.ml, which works perfectly. In a subdirectory of foo.ml's directory, there is a file bar.ml.
bar.ml references code in foo.ml, so its opening line is:
open Foo
This gives me an error at compile time:
Unbound module Foo.
What can I do to fix this without changing the location of foo.ml?
The easy path is to use one of the OCaml build systems, like ocamlbuild or oasis. Another option would be jbuilder, but jbuilder is quite opinionated about file organization and does not allow the kind of subdirectory structure you are asking for.
The more explicit path comes with a warning: the OCaml build process is complicated, with many moving parts that can be hard to deal with.
After this customary warning: when looking for modules, the OCaml compiler first looks for the module in the current compilation environment, then looks for compiled interface ".cmi" files in the directories specified by "-I" option flags (plus the current directory and the standard library directory).
Thus, in order to compile your bar.ml file, you will need to add the parent directory to the list of included directories with the -I .. option.
After all this, you will discover that during the linking phase, all object files (i.e. .cmo or .cmx) need to be listed in a topological order compatible with the dependency graph of your project.
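Concretely, the steps above might look like this; the file names follow the question, and the subdirectory name sub is an assumption:

```shell
# Compile the parent module first; this produces foo.cmi and foo.cmo.
ocamlc -c foo.ml

# From inside the subdirectory, point the compiler at the parent
# directory so it can find foo.cmi:
cd sub && ocamlc -I .. -c bar.ml && cd ..

# Link: object files must be listed in dependency order (Foo before Bar).
ocamlc -o app foo.cmo sub/bar.cmo
```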
Consequently, let me repeat my advice: use a proper build system.
If I want to package a subproject's jar inside the main jar, I can do this sort of thing:
define 'library' do
project.version = '0.1'
define 'subproject' do
package :jar
end
package(:jar).include(project('subproject').package(:jar),
as: 'org/acme/library/subproject.jar')
end
This will lazily build the jar in the subproject, right before it is needed for packaging into the main jar, and everything works.
Problem is, my tests want to use the jar file as well, so the logical place for it is in the resources. So I wonder how I'm supposed to copy something built by another project into this project's resources. The docs are notably lacking any examples of this, and mostly focus on how to copy one directory of files to another.
This is what I tried:
resources.from(project('subproject').package(:jar),
as: 'org/acme/library/subproject.jar')
It fails:
RuntimeError : Source directory $HOME/Documents/library/subproject/target/library-subproject-0.1.jar doesn't exist
$HOME/Documents/library/buildfile:38:in `block in <top (required)>'
To my surprise, this seems to be the one place in buildr which eagerly evaluates the existence of a build product instead of setting it up as a lazy dependency...
I can work around this as follows:
# Crappy workaround to eagerly create the target dir in the subproject.
mkdir_p project('subproject').path_to(:target)
resources.from(project('subproject').path_to(:target)).
  include(project('subproject').package(:jar))
I don't like this because it still eagerly evaluates the directory, which forces me to create that directory long before any of the build is being run. Meaning that even when I run buildr clean, it's creating this directory. Yuck.
So what's the proper way to do this?
The way we typically do this is to not create any packages with the top-level project, and instead use subprojects to define the two packages, i.e.:
define 'myproject' do
  ...
  define 'model' do
    ...
    package(:jar)
  end

  define 'server' do
    ...
    package(:war) do |war|
      war.libs.clear
      war.libs << project('model').package(:jar)
    end
  end
end
This allows much easier management of dependencies and ordering of builds. Hope that helps!
I would like to give two .ml source files the same name in different directories in my source tree, but the OCaml documentation states that a file A.ml is exposed as a top-level module A = struct ... end. If I have two files X/A.ml and Y/A.ml, how can I refer to them both from B.ml?
Modules can contain modules, i.e. you can have a hierarchy of modules.
From B.ml's point of view, you can see two modules named X.A and Y.A.
They can even both have a function named foo; those functions would be seen as X.A.foo and Y.A.foo.
Beware that if you open both modules X and Y, the module A from Y will hide the module A from X.
That was from the namespace point of view. Now, about the source tree.
One way would be to have these files:
x.ml
x/a.ml
y.ml
y/a.ml
The file x.ml is automatically generated and contains just this:
module A = struct
  (* the contents of x/a.ml go here *)
end
Likewise for y.ml
There are several preprocessors able to include a file: cpp, camlp4, camlp5, camlmix...
This set of automatically generated files (regenerated each time the source changes) is not very satisfying, so do look at the other answers.
You can also have a look at ocamlc -pack, but when I tried it a long time ago, ocamldoc was unable to handle x/a.ml and y/a.ml. So check this before you settle on a tool.
You cannot link two modules with the same name into the same program. For example, extensions to the standard library, such as Batteries and Core, are forced to give standard modules a different name. In Batteries, the List module is called BatList. They then provide a wrapper module Batteries, within which the module is renamed by doing module List = BatList. The overall path to this module is Batteries.List, so there is no clash with the Standard Library's top-level List. Finally, the recommended way of using Batteries or Core is to do open Batteries or open Core, thereby giving you access to their additional list functions under the module name List.
Thus the only option is to rename your modules, but you can do this in two ways:
Change the base names of the modules, e.g. call them A and B.
Put the modules under another module, e.g. name them X.A and Y.A. If you want to keep your current directory structure, you can use OCaml's -pack option. Personally, I find this option too restrictive and always end up doing manual packing, i.e. the technique described above used by Batteries and Core.
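The manual-packing technique can be sketched in a single self-contained file; the function names and return strings are invented for illustration:

```ocaml
(* Manual "packing": wrap each submodule under a parent module by hand,
   mirroring what Batteries does with module List = BatList. *)
module X = struct
  module A = struct
    let foo () = "X.A.foo"
  end
end

module Y = struct
  module A = struct
    let foo () = "Y.A.foo"
  end
end

let () =
  (* No clash: the full paths X.A and Y.A disambiguate the two A's. *)
  print_endline (X.A.foo ());
  print_endline (Y.A.foo ())
```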
I've devised a good structure for a new project which I currently successfully use.
NOTE: This structure is beneficial for the project which is developed by multiple programmers.
<Project Directory>
+ include
+ Haroogan
+ Utility
- util.h
+ More
- more.h
+ Whatever
- whatever.h
+ src
+ Haroogan.More <--- Artifact
- more.h
- more.cpp
- CMakeLists.txt
+ Haroogan.More.Whatever <--- Artifact
- whatever.h
- whatever.cpp
- CMakeLists.txt
+ Haroogan.Utility <--- Artifact
- util.h
- util.cpp
- CMakeLists.txt
+ Haroogan.Application <--- Artifact
- app.h
- app.cpp
- CMakeLists.txt
- CMakeLists.txt <--- "Root" CMakeLists.txt
Artifact is a library, executable, etc. - you get the point.
Artifact-Name (as in the subject) is simply the name of the artifact, which CMake deduces from the name of the directory dedicated to that artifact; the terms Artifact and Artifact-Name are thus essentially interchangeable. For example: the artifact residing in directory "Haroogan.More.Whatever" has the name "Haroogan.More.Whatever".
This has several consequences:
library, executable, etc. produced after a build will be named with Artifact-Name;
all source code pertaining to a particular artifact is enclosed in a namespace corresponding to the Artifact-Name. For instance: artifact "Haroogan.More.Whatever" imposes "Haroogan::More::Whatever" namespace on all its sources;
when one artifact wants to use another, it has to include the other's headers and optionally link against it. However, writing #include "../Haroogan.More/more.h" not only looks messy, it also breaks the basic idea that artifacts represent stand-alone components that should be decoupled even in terms of the file system. Moreover, the concept of private headers is broken too, because this way I can access any header inside other artifacts.
What we need here is simply a public header repository - the "include" directory. Therefore, to address the last issue - I decided to do the following:
each artifact decides (in its CMakeLists.txt of course) on its own which headers it wants to export to the outside world;
then it copies (on every build, but only when needed of course) these header files to the corresponding directory inside "include". For instance, if Artifact-Name is "Haroogan.More.Whatever" then headers will be copied to "include/Haroogan/More/Whatever/" directory (as shown above).
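The copy step described above could be sketched as a post-build command; the target name comes from the example layout, while the use of copy_if_different for the "only when needed" part is an assumption about how one might implement it:

```cmake
# Hypothetical sketch: after building the artifact, copy each exported
# header into the public include tree, but only when it has changed.
add_custom_command(TARGET Haroogan.More.Whatever POST_BUILD
    COMMAND ${CMAKE_COMMAND} -E copy_if_different
        ${CMAKE_CURRENT_SOURCE_DIR}/whatever.h
        ${PROJECT_SOURCE_DIR}/include/Haroogan/More/Whatever/whatever.h)
```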
I believe it's a nice and robust approach since now if I want to use "whatever" and "more" classes from "Haroogan.More.Whatever" and "Haroogan.More" artifacts in other components - I simply write:
#include <Haroogan/More/Whatever/whatever.h>
#include <Haroogan/More/more.h>
using Haroogan::More::Whatever::whatever;
using Haroogan::More::more;
The system works like a charm (I can provide CMake scripts if anybody wants). However, I'm not satisfied with the fact that headers are copied. It would be much better if, for example, instead of copying "whatever.h" CMake would create new file "whatever.h" in "Haroogan/More/Whatever/" and inject #include "../../../../src/Haroogan.More.Whatever/whatever.h" in it.
My system right now is fully automated. In other words, the path "Haroogan/More/Whatever" is automatically deduced from the Artifact-Name "Haroogan.More.Whatever". Therefore, it would be great if the injection of #include "../../../../src/Haroogan.More.Whatever/whatever.h", with all those nasty ../../, were automated as well.
Unfortunately, I'm new to CMake and don't know how to achieve this functionality, but I think it is possible and might be already done by someone. Thank you.
EDIT:
A temporary solution for this problem could be the following:
Instead of creating "whatever.h" inside "Haroogan/More/Whatever/" which immediately leads to dealing with ../../ mess, I could simply create "Haroogan.More.Whatever.whatever.h" (by prefixing "whatever.h" with "Haroogan.More.Whatever") right in the "include" directory and use it as:
#include <Haroogan.More.Whatever.whatever.h>
using Haroogan::More::Whatever::whatever;
This solution is acceptable, but I don't like it as much as the one I'm interested in.
How about this:
macro(add_public_headers)
    foreach(header ${ARGN})
        get_filename_component(abspath ${header} ABSOLUTE)
        file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/${header} "#include \"${abspath}\"\n")
    endforeach()
endmacro()
This macro can now be used as follows:
In src/Haroogan.More.Whatever/CMakeLists.txt you do
add_public_headers(whatever.h)
and this will generate a header containing a single #include line in your build directory.
The only caveat is that the path will be absolute, but that shouldn't be a problem for you.
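For other artifacts to consume the generated forwarding headers, the directory they are written to still has to be on the consumers' include path. A hedged sketch, with the target name taken from the example layout above:

```cmake
# Expose the directory holding the generated forwarding headers to
# anything that links against this artifact.
target_include_directories(Haroogan.More.Whatever
    PUBLIC ${CMAKE_CURRENT_BINARY_DIR})
```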