How to run bash commands like "npm install" on compile - Yesod

I need to run npm install && gulp build inside my static/semantic-ui folder, so that it creates the needed CSS file.
I saw this example with Setup.hs; however, my scaffolded project doesn't have one, so my question is: where is the right place to put the code that runs those bash commands?

If you're using the default Yesod scaffolding (generated by the stack tool), then it indeed doesn't contain Setup.hs (which is a bit weird, as their own guide - https://github.com/commercialhaskell/stack/blob/master/doc/GUIDE.md - recommends having it as good practice).
Setup.hs should be located in the main project directory (the same one where stack.yaml and yourproject.cabal are located), and its content should be roughly the same as in your included example (defaultMainWithHooks is the key part).
Details of hook usage are specified in https://www.haskell.org/cabal/users-guide/developing-packages.html and in the Cabal API docs: https://hackage.haskell.org/package/Cabal-1.24.0.0/docs/Distribution-Simple.html
BTW, for now stack doesn't support pre-build hooks on its own (for details see: https://github.com/commercialhaskell/stack/issues/503), so you have to stick to the ones provided by Cabal - that's where Setup.hs comes in.
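For reference, a minimal Setup.hs along those lines might look like the sketch below (untested; it assumes npm and gulp are on PATH and that your .cabal file sets build-type: Custom, with Cabal and process listed under setup-depends):
import Distribution.Simple
import System.Process (callCommand)

main :: IO ()
main = defaultMainWithHooks simpleUserHooks
  { preBuild = \args flags -> do
      -- assumption: the frontend lives in static/semantic-ui, as described in the question
      callCommand "cd static/semantic-ui && npm install && gulp build"
      -- then fall back to the default preBuild behaviour
      preBuild simpleUserHooks args flags
  }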

Related

mypy pre-commit check on a Django project with Pipenv

I would like to run mypy static type checks with pre-commit, and thus have the config below in .pre-commit-config.yaml:
repos:
-   repo: https://github.com/pre-commit/mirrors-mypy
    rev: 'v1.0.0'
    hooks:
    -   id: mypy
        args: ['--ignore-missing-imports', '--cache-dir', '/dev/null', '--show-error-codes']
However, I get errors like No module named 'mypy_django_plugin', and basically the same for all project dependencies. I am aware that additional_dependencies should be used to list dependencies. The thing is, the project has 30+ dependencies, and manually listing them here is a bit impractical (the same goes for keeping them in sync with the Pipfile).
I have also read elsewhere (SO, the pre-commit GitHub) that installing dependencies dynamically from a requirements file is not supported.
So is it somehow possible to populate additional_dependencies from a Pipfile.lock? Is there a solution for Pipenv similar to the one that exists for Poetry (see the link below)? Or is there another workaround to make mypy work?
PS:
How to have a single source of truth for poetry and pre-commit package version? deals with the same problem, but the post and solution are Poetry specific.
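One workaround, not from the original post (a hedged sketch; the hook id/name here are arbitrary): skip mirrors-mypy and declare a local hook with language: system, so pre-commit runs mypy from the Pipenv environment where all project dependencies, including mypy_django_plugin, are already installed:
repos:
-   repo: local
    hooks:
    -   id: mypy
        name: mypy (pipenv)
        # runs the mypy installed in the project's Pipenv environment
        entry: pipenv run mypy
        language: system
        types: [python]
        require_serial: true
        args: ['--ignore-missing-imports', '--show-error-codes']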

How to use docker to test multiple compiler versions

What is the idiomatic way to write a docker file for building against many different versions of the same compiler?
I have a project which tests against a wide range of versions of different compilers like gcc and clang as part of a CI job. At some point, the agents for the CI tasks were updated/changed, resulting in newer jobs failing -- and so I've started looking into dockerizing these builds to try to guarantee better reliability and stability.
However, I'm having some difficulty understanding the proper, idiomatic approach to producing build images like this without a large amount of duplicated layers.
For example, let's say I want to build using the following toolset:
gcc 4.8, 4.9, 5.1, ... (various versions)
cmake (latest)
ninja-build
I could write something like:
# syntax=docker/dockerfile:1.3-labs
# Parameterizing here possible, but would cause bloat from duplicated
# layers defined after this
FROM gcc:4.8
ENV DEBIAN_FRONTEND noninteractive
# Set the work directory
WORKDIR /home/dev
COPY . /home/dev/
# Install tools (cmake, ninja, etc)
# this will cause bloat if the FROM layer changes
RUN <<EOF
apt update
apt install -y cmake ninja-build
rm -rf /var/lib/apt/lists/*
EOF
# Default command is to use CMake
CMD ["cmake"]
However, the installation of tools like ninja-build and cmake occurs after the base image, which changes per compiler version. Since these layers are built off of a different parent layer, this would (as far as I'm aware) result in layer duplication for each different compiler version that is used.
One alternative to avoid this duplication could hypothetically be using a smaller base image like alpine with separate installations of the compiler instead. The tools could be installed first so the layers remain shared, and only the compiler changes as the last layer -- however this presents its own difficulties, since it's often the case that certain compiler versions may require custom steps, such as installing certain keyrings.
What is the idiomatic way of accomplishing this? Would this typically be done through multiple docker files, or a single docker file with parameters? Any examples would be greatly appreciated.
I would separate preparing the compiler from doing the actual build, so the source doesn't become part of the docker container.
Prepare Compiler
For preparing the compiler I would take the ARG approach, but without copying the data into the container. If you want fast retries and have enough resources, you could spin up multiple instances at the same time.
# syntax=docker/dockerfile:1.3-labs
ARG COMPILER=gcc:4.8
FROM ${COMPILER}
ENV DEBIAN_FRONTEND noninteractive
# Install tools (cmake, ninja, etc)
# this will cause bloat if the FROM layer changes
RUN <<EOF
apt update
apt install -y cmake ninja-build
rm -rf /var/lib/apt/lists/*
EOF
# Set the work directory
VOLUME /src
WORKDIR /src
CMD ["cmake"]
Build it
Here you have a few options. You could either prepare a volume with the sources or use bind mounts when running the container, like this:
#bash style
for compiler in gcc:4.9 gcc:4.8 gcc:5.1
do
docker build -t mytag-${compiler} --build-arg COMPILER=${compiler} .
# place to clean the target folder
docker run -v $(pwd)/src:/src mytag-${compiler}
done
And because the source is not part of the docker image, you don't have bloat. You can also have two mounts: one for a read-only source tree and one for the output files.
Note: If you remove the CMake command (the CMD) you could also spin up the docker containers in parallel and use docker exec to start the builds, as sketched below. The downside of this is that you have to take care of out-of-source builds to avoid clashes on the output folder.
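A rough, untested sketch of that parallel variant (sleep infinity is just a stand-in long-running command so the container stays up; each container gets its own output mount):
#bash style
for compiler in gcc:4.8 gcc:4.9 gcc:5.1
do
  name="build-${compiler//[:.]/-}"
  # keep the container alive so we can exec into it later
  docker run -d --name "$name" \
    -v "$(pwd)/src:/src:ro" -v "$(pwd)/out-$name:/out" \
    "mytag-${compiler}" sleep infinity
  # out-of-source build into the per-container /out mount
  docker exec -d "$name" sh -c 'cd /out && cmake -G Ninja /src && ninja'
done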
Put an ARG before the FROM and then use that ARG as the FROM, so:
ARG COMPILER=gcc:4.8
FROM ${COMPILER}
# rest goes here
then you run
docker build . -t test/clang-8 --build-arg COMPILER=clang-8
or similar.
If you want to automate it, just make a file listing the compilers and a bash script that loops over its lines, passing each line to the tag and the COMPILER build arg - see the sketch below.
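Something like this (untested; compilers.txt is a hypothetical file with one base image per line, e.g. gcc:4.8):
while read -r compiler; do
  # turn e.g. gcc:4.8 into the tag test/gcc-4.8
  docker build . -t "test/${compiler/:/-}" --build-arg COMPILER="$compiler"
done < compilers.txt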
As for CMake, I'd just do:
RUN wget -qO- "https://cmake.org/files/v3.23/cmake-3.23.1-linux-"$(uname -m)".tar.gz" | tar --strip-components=1 -xz -C /usr/local
When copying, I find it cleaner to do
WORKDIR /app/build
COPY . .
As far as I know, there is no way to do that easily and safely. You could use a RUN --mount=type=cache, but the documentation clearly says that:
Contents of the cache directories persist between builder invocations without invalidating the instruction cache. Cache mounts should only be used for better performance. Your build should work with any contents of the cache directory as another build may overwrite the files or GC may clean it if more storage space is needed.
I have not tried it, but I guess the layers are duplicated anyway; you just save time, assuming the cache is not emptied.
The other possible solution is similar to the one you mention in the question: start with the tools installation and then customize it with the gcc image. Instead of starting with an alpine image, you could start FROM scratch. scratch is basically the empty image; you could COPY the files generated by
RUN <<EOF
apt update
apt install -y cmake ninja-build
rm -rf /var/lib/apt/lists/*
EOF
Then you COPY the entire gcc filesystem on top. However, I am not sure it will work, because the order of the initial layers is now reversed. This means that some files that were in the upper layer (coming from the tools) are now in the lower layer and could be overwritten. In the comments, I asked you for a working Dockerfile because I wanted to try this out before answering. If you want, you can try this method and let us know. Anyway, the first step is extracting the files created by the tools layer.
How to extract changes from a layer?
Let's consider this Dockerfile and build it with docker build -t test .:
FROM debian:10
RUN apt update && apt install -y cmake && ( echo "test" > test.txt )
RUN echo "new test" > test.txt
Now that we have built the test image, we should find 3 new layers. You mainly have 2 ways to extract the changes from each layer:
the first is to docker inspect the image and then find the IDs of the layers in the /var/lib/docker folder, assuming you are on Linux. Each layer has a diff subfolder containing the changes. Actually, I think it is more complex than this, which is why I would opt for...
skopeo: you can install it with apt install skopeo, and it is a very useful tool for operating on docker images. The command you are interested in is copy, which extracts the layers of an image and exports them as .tar files:
skopeo copy docker-daemon:{image_name}:latest "dir:/home/test_img"
where image_name is test in this case.
Extracting layer content with Skopeo
In the specified folder, you should find some tar files and a configuration file (look at the skopeo copy command output and you will know which one it is). Then extract each {layer}.tar into a different folder and you are done.
Note: to find the layer containing your tools, just open the configuration file (maybe using jq, because it is JSON) and take the diff_id that corresponds to the RUN instruction you find in the history property. You should understand it once you open the JSON configuration. This is unnecessary if you have a small image that has, for example, debian as the parent image and a single RUN instruction containing the tools you want to install.
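For example, something along these lines (untested; <config> stands for whichever configuration file the skopeo copy output pointed you to):
# list the layer digests and the build history side by side
jq '.rootfs.diff_ids' /home/test_img/<config>
jq '.history' /home/test_img/<config>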
Get GCC image content
Now that we have the tools layer content, we need to extract the gcc filesystem. We don't need skopeo for this one; docker export is enough:
create a container from gcc (with the tag you need):
docker create --name gcc4.8 gcc:4.8
export it as tar:
docker export -o gcc4.8.tar gcc4.8
finally extract the tar file.
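For example (untested; the gcc_4.x folder name just matches the Dockerfile below):
mkdir gcc_4.x
tar -xf gcc4.8.tar -C gcc_4.x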
Putting it all together
The final Dockerfile could be something like:
FROM scratch
COPY ./tools_layer/ /
COPY ./gcc_4.x/ /
In this way, the tools layer is always reused (unless you change the content of that folder, of course), and you can parameterize the gcc_4.x folder with an ARG instruction, for example - see the sketch below.
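A hedged sketch of that parameterization (untested; GCC_DIR is a hypothetical build arg defaulting to the folder extracted above):
FROM scratch
COPY ./tools_layer/ /
ARG GCC_DIR=./gcc_4.x
COPY ${GCC_DIR}/ /
You would then build each variant with something like docker build -t test-gcc48 --build-arg GCC_DIR=./gcc_4.8 ., and the tools COPY layer stays shared across variants because it comes before the ARG is used.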
Read carefully: none of this is tested, but you might encounter 2 issues:
the gcc image overwrites some files you have changed in the tools layer. You could check whether this happens by computing the diff between the gcc layer folder and the tools layer folder. If it happens, you can only keep track of those files and add them in the Dockerfile after the COPY ./gcc ... with another COPY.
When a file is removed in the upper layer, docker marks it with a .wh. prefix, a whiteout file (not sure if it is different with skopeo). If in the tools layer you delete a file that exists in the gcc layer, then that file will not be deleted using the above Dockerfile (the COPY ./gcc ... instruction would overwrite the whiteout). In this case too, you would need to add an additional RUN rm ... instruction.
This is probably not the correct approach if you have a more complex image than the one you are showing us. In my opinion, you could give this a try and just see if it works out with a single Dockerfile. Obviously, if you have many compilers, each with its own tool set, the maintainability of this approach could be a real burden. Instead, if the Dockerfile is more or less linear for all the compilers, this might be good (after all, you do not do this every day).
Now the question is: is avoiding layer replication so important that you are willing to complicate the image-building process this much?

Nix: Building `waf` produces a file, but I seem to need a folder

I've cloned the nixpkgs repo. From the top of that repo, I can run nix-build -A waf to build waf, and nix-env -f . -iA waf to make waf part of my user environment. Neither complains -- but afterward I am still unable to call waf:
[jeff@jbb-dell:~/nix/nixpkgs]$ waf
waf: command not found
[jeff@jbb-dell:~/nix/nixpkgs]$
Most packages, when I build them using nix-build -A, produce a symlink called result that goes to a folder containing the executable in question. Strangely, though, in waf's case the symlink is to a file, not a folder.
I'm running NixOS. If I add waf to environment.systemPackages in my configuration, upon building I get an error that seems to be a result of the strangeness described in the previous paragraph:
[jeff@jbb-dell:~/nix/jbb-config]$ sudo nixos-rebuild switch
building Nix...
building the system configuration...
these derivations will be built:
/nix/store/s618gllra3g2vn62c92advg9ks2swkz1-system-path.drv
/nix/store/gpph3adrgn949mikfvkwld86flshdbvq-unit-polkit.service.drv
/nix/store/i7xql7889ank54fnhd16zk4z79l1ix88-unit-systemd-fsck-.service.drv
/nix/store/dv9p4fsrqn1fwdvy9scyc7g9422wvm7c-dbus-1.drv
/nix/store/y730jf9s9nrzmkf55i01nlwinw5gxpsp-unit-dbus.service.drv
/nix/store/4wjan71p2di7lscnscdfhp55j49dcymx-system-units.drv
/nix/store/qrzwrpsz0hh5gzaxic6ww8mnwl03zwil-unit-dbus.service.drv
/nix/store/lhq0s9s5v3sqvjx6mqlyqj6hf4kv38sf-user-units.drv
/nix/store/hk5wbmf4dpna3dn0h0q1balj3482l6xd-etc.drv
/nix/store/yj3lfyv5sbp751xzy9jdw1d06n9gdiin-nixos-system-jbb-dell-19.09.1889.692a8cabbcc.drv
building '/nix/store/s618gllra3g2vn62c92advg9ks2swkz1-system-path.drv'...
The store path /nix/store/f1ylicjswpfx1wbvxapsnwy987qnlxl6-waf-2.0.18 is a file and can't be merged into an environment using pkgs.buildEnv! at /nix/store/kncarzyhspzsplkcmmyiqg2cavrwr373-builder.pl line 96.
builder for '/nix/store/s618gllra3g2vn62c92advg9ks2swkz1-system-path.drv' failed with exit code 2
cannot build derivation '/nix/store/yj3lfyv5sbp751xzy9jdw1d06n9gdiin-nixos-system-jbb-dell-19.09.1889.692a8cabbcc.drv': 1 dependencies couldn't be built
error: build of '/nix/store/yj3lfyv5sbp751xzy9jdw1d06n9gdiin-nixos-system-jbb-dell-19.09.1889.692a8cabbcc.drv' failed
[jeff@jbb-dell:~/nix/jbb-config]$
This looks like an implementation error to me. waf, as a top-level Nixpkgs package, should put its binary in $out/bin.
I've checked for usages of waf in nixpkgs and it seems to be used inside derivations only via wafHook.
If you only need waf inside a derivation, I recommend going with wafHook, following the example of other packages. If you need to install it in your user profile, ideally you can send a PR to make waf a proper package or you can work around it with a custom derivation.
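As an untested sketch of that last workaround (names are made up; it simply links the single-file waf output into a conventional bin/ layout):
nix-build -E 'with import <nixpkgs> {}; runCommand "waf-wrapped" {} "mkdir -p $out/bin; ln -s ${waf} $out/bin/waf"'
The resulting result/bin can be put on PATH (or the same expression used from your NixOS configuration); whether the linked script runs as-is depends on how the waf file is built, so treat this only as a starting point.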

Radish Test Framework

I am new to the Radish Test Framework and am confused about how to execute tests using feature files. Please clarify the following:
I have already installed the radish module (pip install radish-bdd). Is there any other step to follow?
The example shows executing tests like "radish calculator.feature".
How does radish behave as a command? Instead, after pip install all I got is a directory, "/usr/lib/python2.7/site-packages/radish".
I'm not entirely sure what the question is. If I understand you correctly, you're having trouble executing tests. With radish installed, you'll want to run radish <path_to_feature_file>. So, for example, let's say we have a feature file called test.feature located in the same directory we're in; we have a few options...
Run only that feature file
radish test.feature
Run every feature file in the current directory
radish .

How can NPM scripts use my current working directory (when in nested subfolder)

It's good that I can run NPM scripts not only from the project root but also from subfolders. However, there is a constraint: the script can't tell my current working path ($PWD).
Let's say there's a command like this:
"scripts": {
...
"pwd": "echo $PWD"
}
If I run npm run pwd within a subfolder of the project root (e.g., $PROJECT_ROOT/src/nested/dir), instead of printing out my current path $PROJECT_ROOT/src/nested/dir, it always gives $PROJECT_ROOT back. Is there any way to tell NPM scripts to use my current working directory instead of resolving to where package.json resides?
Basically, I want to pull a Yeoman generator into an existing project and use it through NPM scripts so that everyone can use the shared knowledge (e.g., npm run generator) instead of learning anything Yeoman-specific (e.g., npm i yo -g; yo generator). As the generator generates files based on the current working path, while NPM scripts always resolve to the project root, I can't use the generator where it's intended to be used.
If you want your script to use different behavior based on what subdirectory you’re in, you can use the INIT_CWD environment variable, which holds the full path you were in when you ran npm run.
Source: https://docs.npmjs.com/cli/run-script
Use it like so:
"scripts": {
"start": "live-server $INIT_CWD/somedir --port=8080 --no-browser"
}
Update 2019-11-19
$INIT_CWD only works on *nix-like platforms. Windows would need %INIT_CWD%. Kind of disappointing that Node.js doesn't abstract this for us. Solution: use cross-env-shell live-server $INIT_CWD/somedir.... -> https://www.npmjs.com/package/cross-env
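Applied to the earlier example, that presumably ends up as something like this (untested; assumes cross-env is installed as a dev dependency):
"scripts": {
  "start": "cross-env-shell live-server $INIT_CWD/somedir --port=8080 --no-browser"
}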
One known solution is through ENV variable injection.
For example:
Define scripts in package.json:
"pwd": "cd $VAR && echo $PWD"
Call it from any subdirectory:
VAR=$(pwd) npm run pwd
However, this looks really ugly; are there any cleaner/better solutions?
With node 8+ you can automate the ENV variable injection.
1.- In $HOME/.node_modules/ (a default node search path) create a file mystart.js with
process.env.ORIGPWD = process.env.PWD
2.- Then in your $HOME/.bashrc tell node to load mystart every time
export NODE_OPTIONS="-r mystart"
3.- Use $ORIGPWD in your scripts. That works for npm, yarn and others.
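For instance, revisiting the pwd script from the question (a sketch; it assumes the NODE_OPTIONS export above is active in the shell you run npm from):
"scripts": {
  "pwd": "echo $ORIGPWD"
}
Running npm run pwd from $PROJECT_ROOT/src/nested/dir should then print that nested path instead of the project root.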