I'm trying to run a bunch of unit tests with Cypress. Here's the npm script that runs them:
cypress run --project tests/unit/ --headless
When I run them, it generates the typical plugin/support/videos folders, but I don't need them. Is there any flag that disables the generation of these 3 folders when running the tests?
Thanks!
Just add these generated files and folders to a .gitignore file in the project's root, like so:
# Cypress generated files #
###########################
cypress.env.json
cypress.meta.json
cypress/logs/
cypress/videos/*
cypress/screenshots/*
cypress/integration/_generated/*
cypress/data/migration/generated/*.csv
cypress/fixtures/example.json
cypress/build/*
Now, these files will never be version-controlled.
You can also disable video recording with the proper configuration in your cypress.json file, like so: "video": false.
You can also do it from the CLI by overriding your cypress.json.
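For example, combining it with the npm script from the question (the --config override is the standard way to change any option from the CLI; the exact combination here is just an illustration):
cypress run --project tests/unit/ --headless --config video=false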
Currently, there's no way to disable the generation of those files. However, you could remove them when launching Cypress with an npm script like so:
"clean:launch:test": "rm -rf /cypress/movies && rm -rf /cypress/screenshots && cypress run --project tests/unit/ --headless"
Then you can run it like so: npm run clean:launch:test. It should remove those folders & launch Cypress's unit tests.
I suggest just adding them to .gitignore or configuring Cypress to trash them before each run. You can read about it in the Cypress configuration docs (trashAssetsBeforeRuns).
cypress.json file:
{
  "trashAssetsBeforeRuns": true
}
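The same kind of --config override should also work for this option from the CLI; I haven't verified this exact combination, so treat it as a sketch:
cypress run --project tests/unit/ --headless --config trashAssetsBeforeRuns=true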
To disable the creation of the videos and screenshots folders, you can use the following command.
cypress run --config video=false,screenshotOnRunFailure=false
As for the plugins/support folders, I don't think they are generated by current Cypress versions, so you can simply delete them and add them to .gitignore.
Video recording can be turned off entirely by setting video to false from within your configuration.
"video": false
https://docs.cypress.io/guides/guides/screenshots-and-videos#Videos
Related
When I deploy the app, it runs fine on the first install, but any subsequent eb deploy fails with the error: go.mod was found, but not expected.
Is there a specific configuration I have to set for deploying with Go modules?
I switched to Dockerizing the app and deploying that way, which works fine. But it seems a bit cumbersome to me, since AWS Elastic Beanstalk provides dedicated Go environments.
You can make it work with Go modules.
build.sh
#!/usr/bin/env bash
set -xe
# get all of the dependencies needed
go get
# create the application binary that EB uses
go build -o bin/application application.go
and override GOPATH (which defaults to /var/app/current) to point to $HOME, either in the EB configuration management dashboard or with an .ebextensions config:
.ebextensions/go.config
option_settings:
  aws:elasticbeanstalk:application:environment:
    GOPATH: /home/ec2-user
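For completeness: the EB Go platform can also be told explicitly how to build via a Buildfile at the root of the source bundle that points at build.sh. This file isn't part of the original answer, so treat it as an assumption about the setup:
make: ./build.sh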
I had the same problem; I was finally able to fix it by adding this line to my build.sh script file:
sudo rm /var/app/current/go.*
So in my case it looks like this:
#!/usr/bin/env bash
# Echo each command and stop the process if something fails
set -xe
sudo rm /var/app/current/go.*
# get all of the dependencies needed
go get "github.com/gin-gonic/gin"
go get "github.com/jinzhu/gorm"
go get "github.com/jinzhu/gorm/dialects/postgres"
go get "github.com/appleboy/gin-jwt"
# create the application binary that eb uses
GOOS=linux GOARCH=amd64 go build -o bin/application -ldflags="-s -w"
GitHub's Google Cloud Build integration does not detect a cloudbuild.yaml or Dockerfile if it is not in the root of the repository.
When using a monorepo that contains multiple cloudbuild.yamls, how can GitHub's Google Cloud Build integration be configured to detect the correct cloudbuild.yaml?
File paths:
services/api/cloudbuild.yaml
services/nginx/cloudbuild.yaml
services/websocket/cloudbuild.yaml
Cloud Build integration output (screenshot not reproduced here).
You can do this by adding a cloudbuild.yaml in the root of your repository with a single gcr.io/cloud-builders/gcloud step. This step should:
Traverse each subdirectory or use find to locate additional cloudbuild.yaml files.
For each found cloudbuild.yaml, fork and submit a build by running gcloud builds submit.
Wait for all the forked gcloud commands to complete.
There's a good example of one way to do this in the root cloudbuild.yaml within the GoogleCloudPlatform/cloud-builders-community repo.
If we strip out the non-essential parts, basically you have something like this:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    for d in */; do
      config="${d}cloudbuild.yaml"
      if [[ ! -f "${config}" ]]; then
        continue
      fi
      echo "Building $d ... "
      (
        gcloud builds submit $d --config=${config}
      ) &
    done
    wait
We are migrating to a mono-repo right now, and I haven't found any CI/CD solution that handles this well.
The key is to not only detect changes, but also any services that depend on that change. Here is what we are doing:
Requiring every service to have a Makefile with a build command.
Putting a cloudbuild.yaml at the root of the mono repo.
We then run a custom build step with this little tool (old, but it still seems to work), https://github.com/jharlap/affected, which lists all packages that have changed plus all packages that depend on those packages, and so on (see the sketch after this list).
The shell script then runs make build on any service that is affected by the change.
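A rough sketch of what that custom step might look like; the affected invocation and the package-to-service mapping are assumptions, not the tool's documented syntax:
#!/usr/bin/env bash
set -e
# Hypothetical wiring: `affected` is assumed to print one changed/dependent package path per line,
# and each package is assumed to map to a service directory with its own Makefile.
for pkg in $(affected); do
  svc="services/$(basename "$pkg")"
  if [[ -f "$svc/Makefile" ]]; then
    echo "Rebuilding $svc"
    make -C "$svc" build
  fi
done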
So far it is working well, but I totally understand if this doesn't fit your workflow.
Another option many people use is Bazel. Not the most simple tool, but especially great if you have many different languages or build processes across your mono repo.
You can create a build trigger for your repository. When setting up a trigger with cloudbuild.yaml for build configuration, you need to provide the path to the cloudbuild.yaml within the repository.
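With the gcloud CLI, that looks something like the command below. The owner, repo name and branch pattern are placeholders, and flag names may differ slightly between SDK versions, so check gcloud builds triggers create github --help:
gcloud builds triggers create github \
  --repo-owner=my-org \
  --repo-name=my-monorepo \
  --branch-pattern='^master$' \
  --build-config=services/api/cloudbuild.yaml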
I am building a .NET Core 2.0 web API and I am creating a Docker image. I am quite new to Docker so apologies if the question has been answered before.
I have the following Docker file for creating the image. In particular, I run the unit tests during the build process and the results are output to ./test/test_results.xml (in a temporary container created during the build, I guess). My question is, how do I access these test results after the build has finished?
FROM microsoft/aspnetcore-build:2.0 AS build-env
WORKDIR /app
# Copy main csproj file for DataService
COPY src/DataService.csproj ./src/
RUN dotnet restore ./src/DataService.csproj
# Copy test csproj file for DataService
COPY test/DataService.Tests.csproj ./test/
RUN dotnet restore ./test/DataService.Tests.csproj
# Copy everything else (excluding elements in dockerignore)
COPY . ./
# Run the unit tests
RUN dotnet test --results-directory ./ --logger "trx;LogFileName=test_results.xml" ./test/DataService.Tests.csproj
# Publish the app to the out directory
RUN dotnet publish ./src/DataService.csproj -c Release -o out
# Build the runtime image
FROM microsoft/aspnetcore:2.0
WORKDIR /app
EXPOSE 5001
COPY --from=build-env /app/src/out .
# Copy test results to the final image as well??
# COPY --from=build-env /app/test/test_results.xml .
ENTRYPOINT ["dotnet", "DataService.dll"]
One approach that I have taken is to un-comment the COPY --from=build-env /app/test/test_results.xml . line. This puts test_results.xml in my final image. I can then extract these results and remove test_results.xml from the final image using the following PowerShell script.
$id=$(docker create dataservice)
docker cp ${id}:app/test_results.xml ./test/test_results.xml
docker start $id
docker exec $id rm -rf /app/test_results.xml
docker commit $id dataservice
docker rm -vf $id
This however seems ugly and I am wondering is there a cleaner way to do it.
I was hoping that there was a way to mount a volume during docker build but it does not appear that this is going to be supported in the official Docker.
I am looking now at creating a separate image, solely for the unit tests.
Not sure if there is a recommended way of achieving what I want.
Thanks for your question - I needed to solve the same thing.
I added a separate container stage based on the results of the build. The tests and their output are all handled in there, so they never reach the final container. So build-env is used to build, an intermediate test container is then based on that build-env image, and the final image is based on the runtime container with the build output from build-env copied in.
# ---- Test ----
# run tests and capture results for later use. This use the results of the build stage
FROM build AS test
#Use label so we can later obtain this container from the multi-stage build
LABEL test=true
WORKDIR /
#Store test results in a file that we will later extract
RUN dotnet test --results-directory ../../TestResults/ --logger "trx;LogFileName=test_results.xml" "./src/ProjectNameTests/ProjectNameTests.csproj"
I added a shell script as a next step that then tags the image as projectname-test.
#!/bin/bash
id=`docker images --filter "label=test=true" -q`
docker tag $id projectname-test:latest
After that, I basically do what you do which is use docker cp and get the file out. The difference is my test results were never in the final image so I don't touch the final image.
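In case it helps, the extraction step on my side looks roughly like this; the image tag and result paths follow the test stage above, so adjust them to your layout:
id=$(docker create projectname-test:latest)
mkdir -p ./TestResults
docker cp "$id":/TestResults/test_results.xml ./TestResults/test_results.xml
docker rm -v "$id"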
Overall I think the correct way to handle tests is probably to create a test image (based on the build image), run it with a mounted volume for the test results, and have it run the unit tests when that container starts. Having a proper image/container would also allow you to run integration tests, etc. This article is older but details a similar approach: https://blogs.infosupport.com/build-deploy-test-aspnetcore-docker-linux-tfs2015/
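A minimal sketch of that idea, reusing the image tagged above (the project path mirrors the test stage; everything else is illustrative and untested):
docker run --rm -v "$(pwd)/TestResults:/TestResults" projectname-test:latest \
  dotnet test ./src/ProjectNameTests/ProjectNameTests.csproj \
  --results-directory /TestResults --logger "trx;LogFileName=test_results.xml"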
This is an old question, but I ended up here looking for the same thing so will drop my tuppence worth in here for posterity!
Nowadays you can use DOCKER_BUILDKIT=1 to make Docker use BuildKit to build your images, which is quicker, caches better, and has an --output option which solves this problem for you. I've got a golang-based example below, but this should work equally well for pretty much anything.
# syntax=docker/dockerfile:1.2
FROM golang:1.17 as deps
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY command ./command
COPY internal ./internal
# Tests
FROM deps as test
RUN --mount=type=cache,target=/root/.cache go test -v ./... 2>&1 | tee unit.out
FROM scratch as test-output
COPY --from=test /src/unit.out /unit.out
# Build
FROM deps as build
RUN your build steps
This dockerfile has a bunch of stages:
copy across requirements spec and install dependencies (for caching)
run your tests (re-use deps stage to avoid repeating ourselves)
copy your test results to a scratch image
build your image / binary / whatever you'd normally do from deps
Now if you set DOCKER_BUILDKIT=1 somewhere and run:
docker build .
then Docker builds your code / image but DOES NOT RUN THE TESTS! Because your build stage isn't linked to the test stages at all, it bypasses both test stages entirely. However, you can now use the --target option to select the test-output build stage and the --output option to tell it where on your local disk to copy the result:
docker build --target test-output --output results .
Docker will then build the dependencies (or re-use the cache from the image build if you did that first), run your tests and copy the contents of your scratch image (i.e. your test report) into the results directory. Job done. :)
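Putting the two invocations together, it looks something like this (the myapp tag is just a placeholder):
DOCKER_BUILDKIT=1 docker build -t myapp .                               # normal build; tests are skipped
DOCKER_BUILDKIT=1 docker build --target test-output --output results .  # runs the tests and exports the report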
Edit:
An article using this approach with a .NET app and explaining the whole thing a bit better! https://kevsoft.net/2021/08/09/exporting-unit-test-results-from-a-multi-stage-docker-build.html
I'm trying to figure out how to get code coverage working with @angular/cli, but so far I'm not having much luck.
I started a new project using the Angular CLI. Basically, all I did was ng new test-coverage, and once everything was installed in my new project folder, I ran ng test --code-coverage. The tests ran successfully, but nothing resembling code coverage was displayed in the browser.
Am I missing some dependencies or something else? Any help will be appreciated.
EDIT:
R. Richards and Rachid Oussanaa were right: the file does get generated, and I can access it by opening the index.html.
Now I'm wondering, is there a way I could integrate that into a node command so that the file opens right after the tests are run?
Here's what you can do:
Install opn-cli, which is a CLI for the popular opn package, a cross-platform tool used to open files in their default apps.
npm install -D opn-cli to install it as a dev dependency.
In package.json, add a script under scripts as follows:
"scripts": {
...
"test-coverage": "ng test --code-coverage --single-run && opn ./coverage/index.html"
}
Now run npm run test-coverage.
This will run the script we defined. Here is an explanation of that script:
ng test --code-coverage --single-run will run tests, with coverage, only ONCE, hence --single-run
&& basically executes the second command if the first succeeds
opn ./coverage/index.html will open the file regardless of platform.
In my use case I am setting up a single go test run which executes all _test.go files in all packages in the project folder. I tried to achieve this using $ go test ./... from the src folder of the project:
/project-name
  /src
    /mypack
    /dao
    /util
When I try to run the tests, it asks me to install the packages that are used by the imported packages. For example, if I import github.com/go-sql-driver/mysql, it might itself use another package, github.com/golang/protobuf/proto. I did not manually import the proto package, and the application runs fine without doing so, but when I run the tests they fail, even though the individual package tests succeed. Do I have to manually install all the packages listed in the go test ./... error?
Could anyone help me on this?
You need to run go get -t ./... first to fetch all test dependencies.
From go help get:
The -t flag instructs get to also download the packages required to
build the tests for the specified packages.
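So the full sequence would be something like this, run from the src folder as in the question (the project-name path just mirrors the layout above):
cd project-name/src
go get -t ./...   # download the test dependencies of every package
go test ./...     # the tests should now build and run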