In my PDE build I'm using the pluginPath property to resolve dependencies from local p2 repositories, for example:
-DpluginPath=${basedir}/../../../plugins:/cache/3pp/site/mockito/1.8.2:/cache/3pp/site/spring/3.0.1
I'm trying to find out how to efficiently materialize caches from HTTP p2 repositories into local files.
I know I could use ant-contrib for looping and invoke the p2.mirror task for each dependency, as sketched below. Especially important for me is minimizing network overhead, to keep builds fast.
But is there a better way to declare dependencies and materialize p2 repositories on the local filesystem?
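For reference, the kind of mirroring step I mean looks roughly like this (a sketch only, run via the Eclipse antRunner application; the source URL and IU id are placeholders, not real coordinates):
<!-- Mirror one dependency from an HTTP p2 repository into the local cache -->
<p2.mirror source="https://example.org/p2/mockito/1.8.2">
  <destination location="file:/cache/3pp/site/mockito/1.8.2"/>
  <iu id="org.mockito" version=""/>
</p2.mirror>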
To feed p2 repositories into your build, you should place all of your repos in repoBaseLocation. Then, by default, transformedRepoLocation will be the runnable repo that is consumed by your build, and you shouldn't need to play with pluginPath. See Reusing Metadata.
How you get your repos into repoBaseLocation is then up to you. You could mirror stable repos into a common known location (a target directory outside of your current build directory) if they don't already exist, and have your build copy them into each repoBaseLocation per build.
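If it helps, the corresponding build.properties entries look roughly like this (the paths are only examples, not prescribed values):
# build.properties (sketch)
repoBaseLocation=${buildDirectory}/repoBase
transformedRepoLocation=${buildDirectory}/transformedRepos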
I'm currently messing around with Google's Mediapipe, which uses Bazel as a build tool. The folder has the following structure:
mediapipe
├ mediapipe
|  └ examples
|     └ desktop
|        └ hand_tracking
|           └ BUILD
├ calculators
|  └ tensor
|     ├ tensor_to_landmarks_calculator.cc
|     └ BUILD
└ WORKSPACE
There are a bunch of other files in there as well, but they are rather irrelevant to this problem. They can be found in the git repo linked above if you need them.
I'm at a stage where I can build and run the hand_tracking example without any problems. Now I want to include the cereal library in the build, so that I can use #include <cereal/archives/binary.hpp> from within tensors_to_landmarks_calculator.cc. The cereal library is located at C:\cereal, but it can be moved to other locations if that simplifies the process.
Basically, I'm looking for the Bazel equivalent of adding a path to Additional Include Directories in Visual Studio.
How would I need to modify the WORKSPACE and BUILD files in order to include the library in my project, assuming they are in a default state?
Unfortunately, this official doc page only covers one-file libraries, and other implementations kept giving me File could not be found errors at build time.
Thanks in advance!
First you have to tell Bazel about the code living "outside" the workspace area. It needs to know how to find it, how to build it, what to call it, etc. These are known as remote repositories. They can be local to your disk (outside the Bazel workspace area), or actually remote on another machine or server, like GitHub. The important thing is that it must be described to Bazel with enough information for it to use.
As most third-party code does not come with BUILD.bazel files, you may need to provide one yourself and tell Bazel "use this as if it was a build file found in that code."
For a local directory outside your bazel project
Add a repository rule like this to your WORKSPACE file:
# This could go in your WORKSPACE file
# (But prefer the http_archive solution below)
new_local_repository(
    name = "cereal",
    build_file = "//third_party:cereal.BUILD.bazel",
    path = "<path-to-directory>",
)
("new_local_repository" is built-in to bazel)
Somewhere under your Bazel WORKSPACE area you'll also need to make a cereal.BUILD.bazel file and export it from the package. I chose a directory called //third_party, but you can put it anywhere else and name it anything else, as long as the repository rule provides a proper Bazel label for it. The contents might look like this:
# contents of //third_party/cereal.BUILD.bazel
cc_library(
    name = "cereal-lib",
    srcs = glob(["**/*.hpp"]),
    includes = ["include"],
    visibility = ["//visibility:public"],
)
Bazel will pretend this was the BUILD file that "came with" the remote repository, even though it's actually local to your repo. When Bazel fetches this remote repository's code, it copies it, together with the BUILD file you provide, into its external area for caching, building, etc.
To make //third_party:cereal.BUILD.bazel a valid target in your directory, add a BUILD.bazel file to that directory:
# contents of //third_party/BUILD.bazel
exports_files(["cereal.BUILD.bazel"])
Without exporting it, you won't be able to refer to the build file from your repository rule.
Local disk repositories aren't very portable: people may have different versions installed, it's not very hermetic (making it hard to share build caches with others), and it requires everyone to put the code in the same place, which can be problematic. It will also fail when you mix operating systems, e.g. if you refer to it as "C:..."
Downloading a tarball of the library from github, for example
A better way is to download a fixed version from github, for example,
and let Bazel manage it for you in its external area:
# http_archive must be loaded before it can be used in your WORKSPACE file
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "cereal",
    sha256 = "329ea3e3130b026c03a4acc50e168e7daff4e6e661bc6a7dfec0d77b570851d5",
    urls = ["https://github.com/USCiLab/cereal/archive/refs/tags/v1.3.0.tar.gz"],
    build_file = "//third_party:cereal.BUILD.bazel",
)
The sha256 is important: Bazel downloads the archive, computes its hash, compares it to the one you specified, and caches the result. In the future it won't re-download the archive if the local file's hash matches.
Notice that it again says build_file = "//third_party:cereal.BUILD.bazel"; all the same things from new_local_repository above apply here. Make sure you provide the build file for it to use, and export it from where you put it.
To test that the remote repository is set up OK
On the command line, issue:
bazel fetch @cereal//:cereal-lib
If my rule isn't quite right and the "bad" version sticks around, I sometimes have to clear it out to make Bazel try again.
bazel clean --expunge
will remove it, but might be overkill.
Finally
We have:
defined a remote repository called @cereal
defined a target in it called cereal-lib
the target is thus @cereal//:cereal-lib
To use it
Go to the package where you would like to include cereal, and add a dependency on this repository to the rule that builds the C++ code that wants to use cereal. That is, in your case, in the BUILD rule that causes tensor_to_landmarks_calculator.cc to get built, add:
deps = [
    "@cereal//:cereal-lib",
    ...
]
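For illustration only (a hypothetical sketch: the real Mediapipe target has a different name and more attributes), the rule might end up looking something like this:
cc_library(
    name = "tensors_to_landmarks_calculator",  # hypothetical target name
    srcs = ["tensors_to_landmarks_calculator.cc"],
    deps = [
        "@cereal//:cereal-lib",
        # ... the calculator's existing Mediapipe deps ...
    ],
)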
And then in your c++ code:
#include "cereal/cereal.hpp"
That should do it.
spago docs state:
packages.dhall: this file is meant to contain the totality of the packages available to your project (that is, any package you might want to import).
In practice it pulls in the official package-set as a base, and you are then able to add any package that might not be in the package set, or override existing ones.
spago.dhall: this is your project configuration. It includes the above package set, the list of your dependencies, the source paths that will be used to build, and any other project-wide setting that spago will use. (my emphasis)
Why do both files have the notion/concept of dependencies? Example: packages.dhall and spago.dhall from the ebook.
The dependencies from spago.dhall can be found in the project's .spago folder, but I cannot locate the ones from packages.dhall. Some packages, like aff, appear in both. A different perspective:
[...] what you choose is a "snapshot", which is a collection of certain versions of all available packages that are guaranteed to compile and work together.
The snapshot is defined in your packages.dhall file, and then you specify the specific packages that you want to use in spago.dhall. The version for each package comes from the snapshot.
That sounds like spago.dhall is an excerpt of the packages from packages.dhall. The note about versions is a bit confusing, as there are no version specifiers in either file.
So, why two files? What is the mental model for someone coming from npm ecosystem with package.json (which might be present as well)?
The mental model is that of a Haskell developer, which is what most PureScript developers used to be, and many still are. :-)
But more seriously, the mental model is having multiple "projects" in a "solution", which is the model of Haskell's de-facto standard package manager, Stack. In Haskell this situation is very common, in PureScript - much less so, but still not unheard of.
In a situation like this it's usually beneficial to have all the "projects" to share a common set of packages, which are all guaranteed to be "compatible" with each other, which simply means that they all compile together and their tests pass. In Haskell Stack this common set of packages is defined in stack.yaml. In Spago - it's packages.dhall.
Once you have this common base set of packages established, each individual project may pick and choose the particular packages that it uses. In Haskell Stack this is specified either in package.yaml or in <project-name>.cabal (the latter being phased out). In Spago - it's spago.dhall.
But of course, when you have just the one project, having both packages.dhall to establish the "base set" of packages and then, separately, spago.dhall to pick some particular packages from that set - may seem a bit redundant. And indeed, it's possible to do without the packages.dhall file completely: just specify the URL of the package set directly in spago.dhall, as the value of the packages property:
{ name = "my-project"
, dependencies = [ ... ]
, license = "..."
, packages = https://github.com/purescript/package-sets/releases/download/psc-0.13.8-20201223/packages.dhall
, repository = "..."
, sources = [ "src/**/*.purs" ]
}
This will work, but there is one important caveat: hashing. When the URL of the package set is specified in packages.dhall, running spago install will compute a hash of that package set and put it inside packages.dhall, right next to the URL. Here's what mine looks like:
let upstream =
https://github.com/purescript/package-sets/releases/download/psc-0.13.8-20201222/packages.dhall sha256:620d0e4090cf1216b3bcbe7dd070b981a9f5578c38e810bbd71ece1794bfe13b
Then, if maintainers of the package set become evil and change the contents of that file, Spago will be able to notice that, recompute the hash, and reinstall the packages.
If you put the URL directly in spago.dhall, this doesn't happen, and you're left with the slight possibility of your dependencies getting out of sync.
Now to address this point separately:
Why do both files have the notion/concept of dependencies? Example: packages.dhall and spago.dhall from the ebook.
If you look closer at the examples you linked, you'll see that these are not the same dependencies. The ones in spago.dhall are dependencies of your package - the one where spago.dhall lives.
But dependencies in packages.dhall are dependencies of the test-unit package, which is being added to the package set as an override, presumably because we want to use the special version stackless-default, which isn't present in the official package set. When you override a package like this, you can override any fields specified in that package's own spago.dhall, and in this case we're overriding dependencies, repo, and version.
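For illustration, such an override in packages.dhall looks roughly like this (a sketch only: the repo URL, version and dependency list below are placeholders, not the exact values from the ebook):
let upstream =
      https://github.com/purescript/package-sets/releases/download/psc-0.13.8-20201223/packages.dhall

in  upstream
  with test-unit =
    { dependencies = [ "prelude", "effect", "aff" ]
    , repo = "https://github.com/<some-fork>/purescript-test-unit.git"
    , version = "stackless-default"
    }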
I work with a massive codebase distributed across many repositories and using even more third-party dependencies. The goal is to make the build hermetic, and I contemplate using Bazel to achieve it. On the one hand, Bazel has the git_repository rule to refer to external repos in the WORKSPACE file. On the other hand, WORKSPACE files are not loaded recursively, so to get to indirect dependencies I need to build an all-inclusive WORKSPACE file somehow. I wonder if somebody has already tackled that problem using Bazel or some other existing tools. Is there a way to expand the WORKSPACE as part of the build? Maybe WORKSPACE can #include other (generated) files?
WORKSPACE files can load and then call macros, which gives similar functionality to #include.
A common pattern is each project having a macro which calls macros (for dependencies on other projects) and creates *_archive rules (for dependencies directly on files to download) so it builds. For example, protobuf has protobuf_deps to implement this pattern. If you create a repository with protobuf (using git_repository, or http_archive, or any of the other repository rules), then you can load that macro and call it, and you'll automatically get all the transitive dependencies.
For example (from Chromium):
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
# This com_google_protobuf repository is required for proto_library rule.
# It provides the protocol compiler binary (i.e., protoc).
http_archive(
    name = "com_google_protobuf",
    strip_prefix = "protobuf-master",
    urls = ["https://github.com/protocolbuffers/protobuf/archive/master.zip"],
)
load("#com_google_protobuf//:protobuf_deps.bzl", "protobuf_deps")
protobuf_deps()
I'm showing http_archive because it's easier to work with, but you can easily change it to git_repository if you want.
Another common pattern which makes this all work is the way protobuf_deps checks native.existing_rule before creating each http_archive. That allows you to instantiate a specific version (or from a specific source, etc) of the dependency directly in your WORKSPACE file to override the one protobuf would otherwise bring in.
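A minimal sketch of that pattern (the repository name and URL below are placeholders, not protobuf's actual code) might look like this in a project's deps .bzl file:
# deps.bzl -- a "deps macro" that skips repositories the user already defined
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

def my_project_deps():
    if not native.existing_rule("some_dependency"):
        http_archive(
            name = "some_dependency",
            urls = ["https://example.com/some_dependency-1.0.tar.gz"],
            strip_prefix = "some_dependency-1.0",
            # sha256 = "...",  # pin the hash in real use
        )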
How can I build local sources and dependencies with flatpak-builder?
I can build local sources:
flatpak build ../dictionary ./configure --prefix=/app
I can extract and build an application with its dependencies from a .json manifest:
flatpak-builder --repo=repo dictionary2 org.gnome.Dictionary.json
But is there no way to build dependencies and local sources together? I can't find a source type like dir or similar, only archive, git (no hg?), ...
flatpak-builder is meant to automate the whole build process, with a single entry-point: the JSON manifest.
Everything else it obtains from Git, Bazaar or tarballs. Note that for these the "url" property may be a local URL starting with file://.
(There is indeed no support for Hg. If that's important for you, feel free to request it.)
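For example, a module whose source is a local Git checkout might look like this in the manifest (the module name, path and branch here are just placeholders):
{
  "name": "dictionary",
  "buildsystem": "autotools",
  "sources": [
    {
      "type": "git",
      "url": "file:///home/user/src/gnome-dictionary",
      "branch": "master"
    }
  ]
}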
In addition to that, there are a few more source types (see the flatpak-manifest(5) manpage), which can be used to modify the extracted sources:
file, which points to a local file to copy somewhere in the extracted sources;
patch, which points to a local patch file to apply to the extracted sources;
script, which creates a script in the extracted sources from an array of commands;
shell, which modifies the extracted sources by running an array of commands.
Adding a dir source type might be useful.
However (and I only flatpaked a few apps, and contributed 2 or 3 patches to the code, so I might be completely wrong) care must be taken as this would easily make builds completely unreproducible, which is one thing flatpak-builder tries very hard to enable.
For example, when using a local file source, flatpak-builder will base64-encode the content of that file and use it as a data:text/plain;charset=utf8;base64,<content> URL for the file, which it stores in the manifest included inside the final build.
Something similar might be needed for a dir source (tar the folder then base64-encode the content of the tar?), otherwise it would be impossible to reproduce the build. I've just been told (after submitting this answer) that this changed in Git master, in favour of a new flatpak-builder --bundle-sources option. This would probably make it easier to support reproducible builds with a dir source type.
In any case, feel free to start the conversation around a new dir source type in the upstream bug tracker. :)
There's an experimental CLI tool if you want to use it: https://gitlab.com/csoriano/flatpak-dev-cli
You can read the docs
http://docs.flatpak.org/en/latest/building-simple-apps.html
http://docs.flatpak.org/en/latest/flatpak-builder.html
In a nutshell, this is what you need to use Flatpak as a development workbench:
https://github.com/albfan/gnome-builder/wiki/flatpak
When creating a vNext build on TFS 2015 you can define variables, which are then used in build steps, and can also be used as environment variables in scripts the build runs.
The build I am working on runs scripts that pull files from mapped locations, so it would be great if I could define a variable and use it in a mapping. Then, for example, if I update a reference in the project being built, I could simply update the variable with the new location and have the repository mappings and scripts all pull correctly from the new location, without having to make the change in multiple places.
I have tried doing this by setting up the variable and mapping as follows,
But this generates an error when you try to save the build complaining that there are two '$' characters in the mapping. Is there way to do this or is this not currently possible?
This has been causing me havoc for quite a while as well.
For starters, there is a uservoice request for this feature. You can add your votes and input here to get Microsoft to allow this feature: https://visualstudio.uservoice.com/forums/330519-team-services/suggestions/14131002-allow-variables-in-repository-variables-and-trigg
Second, we've developed a workaround that gets us most of the way there. It's not perfect, but it might be useful to you if you're comfortable with the tradeoffs or can work around the deficiencies.
Start by turning off the "Label Sources" option of the build and mapping the Server Path field to your base build. You'll want to add a custom variable to the Build Definition to tell the build instance which TFS location to pull from. For example, we have a base project and then multiple branches of the project, so our source is structured like this:
$/Team Project/Project1
$/Team Project/Project1Branch1
$/Team Project/Project1Branch2
$/Team Project/Project1Branch3
and we create a variable named "Branch" that we can set to "Branch1", "Branch2", and so forth.
When we want to build the base project, we leave the Branch variable blank when launching the build. For branch builds, we set it to the name of the branch we want to build.
Then our build steps look like this
Remap Workspace Folder to Branch Folder
Get Files for Specified Branch - we have to do this manually after remapping our workspace
Compile the Source in the Specified Branch
Publish Build Artifacts from the Specified Branch
Label the Code of the Specified Branch Manually
The Remap task runs the command
tf workfold "$/Team Project/Project1$(Branch)" "$(build.sourcesDirectory)\$(Build.DefinitionName)$(Branch)"
The Manual Get task runs the following command
get /recursive /noprompt "$/Team Project/Project1$(Branch)"
The build uses the Branch variable to point to the correct location of the solution file for the specified branch
$(build.sourcesDirectory)\$(Build.DefinitionName)$(Branch)\SolutionFile.sln
The Publish Artifacts task uses the Branch variable in both the Contents field and the Path field
Example in Contents
**\$(Build.DefinitionName)$(Branch)\bin
The Label Code task uses the following command
tf label "$(build.buildNumber)" "$/Team Project/Project1$(Branch)" /recursive
The downside of this setup is that you don't capture Associated Changes and Work Items for your subsidiary branches, as the Server Path field is always set to the main location. This may not be an issue if you always merge from your branches into the main location before launching a build meant to go to production. What you can do to compensate for this really depends on your use case.
With some tweaking, you could use this same format to specify full paths as well if you needed to.
It's impossible, just as the error message mentions: there are two '$' characters in the mapping. This means your application's path shouldn't vary from build to build.
Mappings on the Repository page are used to specify source control
folder which contains projects that need to be built in the build
definition. You can set it via clicking the Ellipsis (...) button,
however, you can't include variables in the mapping path.
There is a similar question: Variables in TFS Mappings on Visual Studio Online Team Builds