spago docs state:
packages.dhall: this file is meant to contain the totality of the packages available to your project (that is, any package you might want to import).
In practice it pulls in the official package-set as a base, and you are then able to add any package that might not be in the package set, or override existing ones.
spago.dhall: this is your project configuration. It includes the above package set, the list of your dependencies, the source paths that will be used to build, and any other project-wide setting that spago will use. (my emphasis)
Why do both files have the notion/concept of dependencies? Example: packages.dhall and spago.dhall from the ebook.
The dependencies from spago.dhall can be found in the project's .spago folder, but I cannot locate the ones from packages.dhall. Some packages are common to both, like aff. A different perspective:
[...] what you choose is a "snapshot", which is a collection of certain versions of all available packages that are guaranteed to compile and work together.
The snapshot is defined in your packages.dhall file, and then you specify the specific packages that you want to use in spago.dhall. The version for each package comes from the snapshot.
That sounds like spago.dhall is an excerpt of the packages from packages.dhall. The note about versions is a bit confusing, as there are no version specifiers in either file.
So, why two files? What is the mental model for someone coming from npm ecosystem with package.json (which might be present as well)?
The mental model is that of a Haskell developer, which is what most PureScript developers used to be, and many still are. :-)
But more seriously, the mental model is having multiple "projects" in a "solution", which is the model of Haskell's de-facto standard package manager, Stack. In Haskell this situation is very common, in PureScript - much less so, but still not unheard of.
In a situation like this it's usually beneficial to have all the "projects" to share a common set of packages, which are all guaranteed to be "compatible" with each other, which simply means that they all compile together and their tests pass. In Haskell Stack this common set of packages is defined in stack.yaml. In Spago - it's packages.dhall.
Once you have this common base set of packages established, each individual project may pick and choose the particular packages that it uses. In Haskell Stack this is specified either in package.yaml or in <project-name>.cabal (the latter being phased out). In Spago - it's spago.dhall.
But of course, when you have just the one project, having both packages.dhall to establish the "base set" of packages and then, separately, spago.dhall to pick some particular packages from that set - may seem a bit redundant. And indeed, it's possible to do without the packages.dhall file completely: just specify the URL of the package set directly in spago.dhall, as the value of the packages property:
{ name = "my-project"
, dependencies = [ ... ]
, license = "..."
, packages = https://github.com/purescript/package-sets/releases/download/psc-0.13.8-20201223/packages.dhall
, repository = "..."
, sources = [ "src/**/*.purs" ]
}
This will work, but there is one important caveat: hashing. When the URL of the package set is specified in packages.dhall, running spago install will compute a hash of that package set and put it inside packages.dhall, right next to the URL. Here's what mine looks like:
let upstream =
https://github.com/purescript/package-sets/releases/download/psc-0.13.8-20201222/packages.dhall sha256:620d0e4090cf1216b3bcbe7dd070b981a9f5578c38e810bbd71ece1794bfe13b
Then, if maintainers of the package set become evil and change the contents of that file, Spago will be able to notice that, recompute the hash, and reinstall the packages.
If you put the URL directly in spago.dhall, this doesn't happen, and you're left with the slight possibility of your dependencies getting out of sync.
Now to address this point separately:
Why do both files have the notion/concept of dependencies? Example: packages.dhall and spago.dhall from the ebook.
If you look closer at the examples you linked, you'll see that these are not the same dependencies. The ones in spago.dhall are dependencies of your package - the one where spago.dhall lives.
But dependencies in packages.dhall are dependencies of the test-unit package, which is being added to the package set as an override, presumably because we want to use the special version stackless-default, which isn't present in the official package set. When you override a package like this, you can override any fields specified in that package's own spago.dhall, and in this case we're overriding dependencies, repo, and version.
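For illustration, an override of that shape in packages.dhall might look roughly like this (a sketch only; the upstream URL, the dependency list, and the repo URL are placeholders, not the ebook's actual values):
let upstream =
      https://github.com/purescript/package-sets/releases/download/psc-0.13.8-20201223/packages.dhall

in  upstream
  with test-unit =
      { dependencies = [ "aff", "effect", "prelude" ] -- placeholder dependency list
      , repo = "https://github.com/<fork>/purescript-test-unit.git" -- placeholder repo
      , version = "stackless-default"
      }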
I'm currently messing around with Google's Mediapipe, which uses Bazel as a build tool. The folder has the following structure:
mediapipe
├ mediapipe
|   └ examples
|       └ desktop
|           └ hand_tracking
|               └ BUILD
├ calculators
|   └ tensor
|       ├ tensor_to_landmarks_calculator.cc
|       └ BUILD
└ WORKSPACE
There are a bunch of other files in there as well, but they are rather irrelevant to this problem. They can be found in the git repo linked above if you need them.
I'm at a stage where I can build and run the hand_tracking example without any problems. Now, I want to include the cereal library in the build, so that I can use #include <cereal/archives/binary.hpp> from within tensors_to_landmarks_calculator.cc. The cereal library is located at C:\cereal, but can be moved to other locations if that simplifies the process.
Basically, I'm looking for the Bazel equivalent of adding a path to Additional Include Directories in Visual Studio.
How would I need to modify the WORKSPACE and BUILD files in order to include the library in my project, assuming they are in a default state?
Unfortunately, this official doc page only covers one-file libraries, and other implementations kept giving me File could not be found errors at build time.
Thanks in advance!
First you have to tell Bazel about the code living "outside" the workspace area. It needs to know how to find it, how to build it, what to call it, and so on. These are known as remote repositories. They can be local to your disk (outside the Bazel workspace area), or actually remote on another machine or server, like GitHub. The important thing is that it must be described to Bazel with enough information for Bazel to use it.
As most third-party code does not come with BUILD.bazel files, you may need to provide one yourself and tell Bazel "use this as if it was a build file found in that code."
For a local directory outside your bazel project
Add a repository rule like this to your WORKSPACE file:
# This could go in your WORKSPACE file
# (But prefer the http_archive solution below)
new_local_repository(
    name = "cereal",
    build_file = "//third_party:cereal.BUILD.bazel",
    path = "<path-to-directory>",
)
("new_local_repository" is built into Bazel.)
Somewhere under your Bazel WORKSPACE area you'll also need to make a cereal.BUILD.bazel file and export it from the package. I chose a directory called //third_party, but you can put it anywhere else and name it anything else, as long as the repository rule provides a proper Bazel label for it. The contents might look like this:
# contents of //third_party/cereal.BUILD.bazel
cc_library(
    name = "cereal-lib",
    srcs = glob(["**/*.hpp"]),
    includes = ["include"],
    visibility = ["//visibility:public"],
)
Bazel will pretend this was the BUILD file that "came with" the remote repository, even though it's actually local to your repo. When Bazel fetches this remote repository code it copies it, and the BUILD file you provide, into its external area for caching, building, etc.
To make //third_party:cereal.BUILD.bazel a valid target in your directory, add a BUILD.bazel file to that directory:
# contents of //third_party/BUILD.bazel
exports_files(["cereal.BUILD.bazel"])
Without exporting it, you won't be able to refer to the build file from your repository rule.
Local disk repositories aren't very portable, since people may have different versions installed, it's not very hermetic (making it hard to share caches of builds with others), and it requires that they put them in the same place; that kind of setup can be problematic. It will also fail when you mix operating systems, etc., if you refer to it as "C:...".
Downloading a tarball of the library from github, for example
A better way is to download a fixed version from github, for example, and let Bazel manage it for you in its external area:
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")  # needed once per WORKSPACE

http_archive(
    name = "cereal",
    sha256 = "329ea3e3130b026c03a4acc50e168e7daff4e6e661bc6a7dfec0d77b570851d5",
    strip_prefix = "cereal-1.3.0",  # strips the archive's top-level directory
    urls = ["https://github.com/USCiLab/cereal/archive/refs/tags/v1.3.0.tar.gz"],
    build_file = "//third_party:cereal.BUILD.bazel",
)
The sha256 is important: Bazel downloads the archive, computes its hash, compares it to what you specified, and can then cache it. In the future, it won't re-download it if the local file's sha matches.
Notice that it again says build_file = "//third_party:cereal.BUILD.bazel"; all the same things from new_local_repository above apply here. Make sure you provide the build file for it to use, and export it from where you put it.
To test that the remote repository is set up correctly, issue this on the command line:
bazel fetch @cereal//:cereal-lib
I sometimes have to clear it out to make it try again, if my rule isn't quite right, but the "bad" version sticks around.
bazel clean --expunge
will remove it, but might be overkill.
Finally
We have:
defined a remote repository called @cereal
defined a target in it called cereal-lib
the target is thus @cereal//:cereal-lib
To use it
Go to the package where you would like to include cereal, and add a dependency on this repository to the rule that builds the C++ code that wants to use cereal. That is, in your case, to the BUILD rule that causes tensor_to_landmarks_calculator.cc to get built, add:
deps = [
    "@cereal//:cereal-lib",
    ...
]
And then in your c++ code:
#include "cereal/cereal.hpp"
That should do it.
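For instance, the relevant rule in calculators/tensor/BUILD (per the tree above) might end up looking roughly like this (a sketch only; the rule name and the other attributes are placeholders, the point is just the added @cereal entry):
# hypothetical excerpt of calculators/tensor/BUILD
cc_library(
    name = "tensor_to_landmarks_calculator",     # placeholder rule name
    srcs = ["tensor_to_landmarks_calculator.cc"],
    deps = [
        "@cereal//:cereal-lib",                   # the newly added dependency
        # ... whatever mediapipe deps the rule already had ...
    ],
)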
I work with a massive codebase distributed across many repositories and using even more third-party dependencies. The goal is to make the build hermetic, and I contemplate using Bazel to achieve it. On the one hand, Bazel has the git_repository rule to refer to external repos in the WORKSPACE file. On the other hand, WORKSPACE files are not loaded recursively, so to get to indirect dependencies I need to build an all-inclusive WORKSPACE file somehow. I wonder if somebody has already tackled that problem using Bazel or some other existing tools. Is there a way to expand the WORKSPACE as part of the build? Maybe WORKSPACE can #include other (generated) files?
WORKSPACE files can load and then call macros, which gives similar functionality to #include.
A common pattern is each project having a macro which calls macros (for dependencies on other projects) and creates *_archive rules (for dependencies directly on files to download) so it builds. For example, protobuf has protobuf_deps to implement this pattern. If you create a repository with protobuf (using git_repository, or http_archive, or any of the other repository rules), then you can load that macro and call it, and you'll automatically get all the transitive dependencies.
For example (from Chromium):
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

# This com_google_protobuf repository is required for proto_library rule.
# It provides the protocol compiler binary (i.e., protoc).
http_archive(
    name = "com_google_protobuf",
    strip_prefix = "protobuf-master",
    urls = ["https://github.com/protocolbuffers/protobuf/archive/master.zip"],
)

load("@com_google_protobuf//:protobuf_deps.bzl", "protobuf_deps")
protobuf_deps()
I'm showing http_archive because it's easier to work with, but you can easily change it to git_repository if you want.
Another common pattern which makes this all work is the way protobuf_deps checks native.existing_rule before creating each http_archive. That allows you to instantiate a specific version (or from a specific source, etc) of the dependency directly in your WORKSPACE file to override the one protobuf would otherwise bring in.
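That check looks roughly like this inside a project's deps macro (a sketch of the pattern, not protobuf's actual code; the file and macro names are made up):
# in some my_project_deps.bzl file (hypothetical name)
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

def my_project_deps():
    # only create the repository if the top-level WORKSPACE hasn't already defined it
    if not native.existing_rule("com_google_protobuf"):
        http_archive(
            name = "com_google_protobuf",
            strip_prefix = "protobuf-master",
            urls = ["https://github.com/protocolbuffers/protobuf/archive/master.zip"],
        )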
How can I build local sources and dependencies with flatpak-builder?
I can build local sources
flatpak build ../dictionary ./configure --prefix=/app
I can extract and build an application with dependencies from a .json
flatpak-builder --repo=repo dictionary2 org.gnome.Dictionary.json
But is there no way to build dependencies and local sources together? I can't find a source type like dir or similar, only archive, git (no hg?) ...
flatpak-builder is meant to automate the whole build process, with a single entry-point: the JSON manifest.
Everything else it obtains from Git, Bazaar or tarballs. Note that for these the "url" property may be a local URL starting with file://.
(There is indeed no support for Hg. If that's important for you, feel free to request it.)
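For example, a module could point at a local checkout through such a file:// URL (a sketch only; the module name, build system, path and branch are placeholders):
{
    "name": "dictionary",
    "buildsystem": "autotools",
    "sources": [
        {
            "type": "git",
            "url": "file:///home/user/src/dictionary",
            "branch": "master"
        }
    ]
}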
In addition to that, there are a few more source types (see the flatpak-manifest(5) manpage), which can be used to modify the extracted sources:
file, which points to a local file to copy somewhere into the extracted sources;
patch, which points to a local patch file to apply to the extracted sources;
script, which creates a script in the extracted sources, from an array of commands;
shell, which modifies the extracted sources by running an array of commands.
Adding a dir source type might be useful.
However (and I have only flatpaked a few apps, and contributed 2 or 3 patches to the code, so I might be completely wrong) care must be taken, as this would easily make builds completely unreproducible, while reproducibility is one thing flatpak-builder tries very hard to enable.
For example, when using a local file source, flatpak-builder will base64-encode the content of that file and use it as a data:text/plain;charset=utf8;base64,<content> URL for the file, which it stores in the manifest included inside the final build.
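Roughly, the stored entry might then look like this (a sketch; the filename and encoded content are made up):
{
    "type": "file",
    "dest-filename": "hello.txt",
    "url": "data:text/plain;charset=utf8;base64,SGVsbG8gd29ybGQK"
}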
Something similar might be needed for a dir source (tar the folder then base64-encode the content of the tar?), otherwise it would be impossible to reproduce the build. I've just been told (after submitting this answer) that this changed in Git master, in favour of a new flatpak-builder --bundle-sources option. This would probably make it easier to support reproducible builds with a dir source type.
In any case, feel free to start the conversation around a new dir source type in the upstream bug tracker. :)
There's an experimental CLI tool if you want to use it: https://gitlab.com/csoriano/flatpak-dev-cli
You can read the docs
http://docs.flatpak.org/en/latest/building-simple-apps.html
http://docs.flatpak.org/en/latest/flatpak-builder.html
In a nutshell, this is what you need to use flatpak as a development workbench:
https://github.com/albfan/gnome-builder/wiki/flatpak
I am converting a project from autotools to waf with the hope that it can be easily compiled on Windows as well.
I am using a super project with two children folders that are 2 projects.
One of them is a library, the other, a program, like this:
superproject/wscript
superproject/libraryproject/wscript
superproject/programproject/wscript
It seems that waf has terrible support for subprojects. I have a wscript in each of these directories.
I recurse from superproject into the 2 other projects, but the _cache.py file is shared for both projects. This has the following side effects (issues):
When using the boost tool, I had to use it like this to avoid name collisions:
# In library project
cfg.check_boost('boost_program_options', uselib_store='BOOST_LIBRARYPROJECT')
# In program project
cfg.check_boost('boost_program_options', uselib_store='BOOST_PROGRAMPROJECT')
boost-libs and boost-includes command line options are also lost by default, so I have to set them manually, like this:
cfg.env.LIBPATH_BOOST_PROGRAMPROJECT = cfg.options.boost_libs
...
The _cache.py file is overwritten by programproject/wscript, losing all the configuration for the flags.
Questions:
Is there any good way to nest projects and avoid at least issue 2?
Is there any reasonable way to avoid both issues that doesn't require a script and building the projects separately?
It turns out the configuration file is not actually written twice.
My mistake was to do this:
cfg.env = ConfigSet()
I wanted a new and clean ConfigSet, but doing that in both projects caused the first set of flags to be lost.
Since the environment seems to be shared among all project configurations, is it good style to name the variables with custom names? For example, instead of using:
cfg.check_boost('program_options')
Should I use:
cfg.check_boost('program_options', uselib_store='BOOST_MYPROGRAMPROJECT')
Is this good style or it's usually done in another way?
Can it be done in a cleaner way by deriving ConfigSets?
I want to build a library with waf, but install it under a different name than the target name. It seems you can do
bld.shlib(..., install_path='${PREFIX}/lib')
but I need to be able to do something like:
bld.shlib(..., install_as='${PREFIX}/lib/xyz')
Also, bld.install_as() won't work, as it doesn't seem to accept a task as a target, and I can't figure out how to turn a task into a node representing the target, so the following doesn't work either:
tgt = bld.shlib(...)
bld.install_as('foo', tgt)
Or alternatively, I need to be able to disable the "lib" prefix that is automatically added to library names, but only for this one library - not for all them during the build, e.g. something like:
bld.shlib(..., libprefix='', install_path="${PREFIX}/lib/")
I know you can set shlib_PATTERN as well, but that seems to affect all libraries under the current environment. We have a pretty complicated build that uses a lot of different environments for building debug/release concurrently, so just cloning the current environment and changing the flag doesn't work either, because it clones the default environment, not the one the target will eventually be built under (because we clone the targets for each environment during build time).
Any thoughts? Thanks!
You can do this:
hello_lib = bld.shlib(
    includes='/usr/include/python',
    source='a.cpp',
    target='hello',
    uselib='BOOST_PYTHON',
    vnum='0.0.1')
hello_lib.env.cxxshlib_PATTERN = '%s.so'
This code changes the naming pattern for only one task.
There are two keyword arguments you can use: "name" and "target". "target" is the name of the file created, while "name" is the name of the target when using the "--target" argument. Confusing, but here is an example:
bld(features=['cxx', 'cxxshlib'],
    source=src,
    includes=inc,
    target='OutputName',
    name='NameOfTarget',
    use=libs,
    install_path='${PREFIX}/lib/MyLibs',
)
waf configure build install --target=NameOfTarget --prefix=/home/Brian
This creates a shared library "libOutputName.so" and installs it to /home/Brian/lib/MyLibs.