In my Travis CI script on GitHub, I have the following condition: the default profile is run if commits are pushed to a remote branch, and the test profile is run if it is a pull request.
script:
  - 'if [ "$TRAVIS_EVENT_TYPE" == "push" ]; then
       mvn clean install;
     else
       mvn clean install -P test;
     fi'
The problem I am facing is that the if branch also runs when there is a merge from a feature branch to the develop branch, which I do not want. I want the if branch to run only when there is a push from a local feature or bugfix branch to the remote. To handle this I added a regular expression to match the branches, like below.
script:
  - 'if [ "$TRAVIS_EVENT_TYPE" == "push" && "$TRAVIS_BRANCH" =~ ^(feature|bugfix)]; then
       mvn clean install;
     else
       mvn clean install -P test;
     fi'
But it gives the following error:
The command "if [ "$TRAVIS_EVENT_TYPE" == "push" && "$TRAVIS_BRANCH" =~ ^(feature|bugfix)]; then mvn clean install; else mvn clean install -P test; fi" exited with 1.
Edit: I think matching the branch won't help in achieving what I am trying to do here, since the merge will be from a feature-* or bugfix-* branch to the develop branch. So I think this additional branch check is redundant.
So the build rules I am trying to implement are:
If it is a commit push from local branches to remote branches, then run the default profile with mvn clean install.
If it is a new pull request or pull request merge, then run the test profile with mvn clean install -P test
What will be the right checks to achieve this in Travis CI script?
After I upgraded GitLab yesterday, my .gitlab-ci.yml file reported an error.
The error message is: jobs:gitkeep:only config should be an array of strings or regexps
My config file content is:
gitkeep:
  stage: eslint
  tags:
    - chore
  only:
    - develop
    - /^feature\/((?!snapshot$|latest$|release\/).)+?$/
    - /^release\/((?!snapshot$|latest$).)+?$/
    - /^dev\/((?!snapshot$|latest$).)+?$/
  script:
    - docker build -t mdf-modify-all .
    - docker run --cpus="2.5" --rm mdf-modify-all sh -c "sh release-all.sh && git add . && git commit -m 'release all' && git push origin HEAD:$CI_COMMIT_REF_NAME"
  when: manual
In some online validation tools these configurations are valid, but the GitLab CI lint reports that the regular expressions are invalid. How can I modify these regular expressions?
After I delete the regular expressions, validation passes, but I want to keep them. How can I modify them?
I have specified multiple deploy script providers. The one that is expected to run, but is being skipped, is:
- provider: script
  skip_cleanup: true
  script: curl -sL https://git.io/goreleaser | bash -s -- --rm-dist --skip-publish
  verbose: true
  on:
    condition: "$TRAVIS_OS_NAME = linux"
    branches:
      only:
        - /^release\/.*$/
    go: 1.11.x
The last three deploys are only on the master branch, so it is right to skip them.
The first deploy, which is on all branches matching the regexp /^release\/.*$/, should run for the branch release/2.1.5. However, it is being skipped too.
Can someone point out why the release branch deploy is skipped in this case? I want the first deploy to run only on linux and only on release branches like release/2.1.5.
Travis build: https://travis-ci.org/leopardslab/dunner/jobs/560593148
Travis Config file: https://github.com/leopardslab/dunner/blob/172a4c5792b0a8389556cc8ee4f690dc73fafb6e/.travis.yml
To run the deploy script conditionally, say only on branches matching a regular expression such as release branches (^release\/.*$), use the condition field and concatenate multiple conditions using &&. The branch and branches fields do not support regexps.
If the branch or branches field is not specified, Travis assumes the deploy should happen only on the master branch (or default branch), so be sure to include the all_branches: true line in the Travis config.
You can access the current branch name via the Travis environment variable $TRAVIS_BRANCH.
deploy:
  - provider: script
    script: bash myscript.sh
    verbose: true
    on:
      all_branches: true
      condition: $TRAVIS_BRANCH =~ ^release\/.*$ && $TRAVIS_OS_NAME = linux
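Applied to the deploy provider from the question, it would look roughly like this (a sketch; the goreleaser script, skip_cleanup and go fields are carried over unchanged from the question):
deploy:
  - provider: script
    skip_cleanup: true
    script: curl -sL https://git.io/goreleaser | bash -s -- --rm-dist --skip-publish
    verbose: true
    on:
      all_branches: true
      condition: $TRAVIS_BRANCH =~ ^release\/.*$ && $TRAVIS_OS_NAME = linux
      go: 1.11.x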
I am building a .NET Core 2.0 web API and I am creating a Docker image. I am quite new to Docker so apologies if the question has been answered before.
I have the following Docker file for creating the image. In particular, I run the unit tests during the build process and the results are output to ./test/test_results.xml (in a temporary container created during the build, I guess). My question is, how do I access these test results after the build has finished?
FROM microsoft/aspnetcore-build:2.0 AS build-env
WORKDIR /app
# Copy main csproj file for DataService
COPY src/DataService.csproj ./src/
RUN dotnet restore ./src/DataService.csproj
# Copy test csproj file for DataService
COPY test/DataService.Tests.csproj ./test/
RUN dotnet restore ./test/DataService.Tests.csproj
# Copy everything else (excluding elements in dockerignore)
COPY . ./
# Run the unit tests
RUN dotnet test --results-directory ./ --logger "trx;LogFileName=test_results.xml" ./test/DataService.Tests.csproj
# Publish the app to the out directory
RUN dotnet publish ./src/DataService.csproj -c Release -o out
# Build the runtime image
FROM microsoft/aspnetcore:2.0
WORKDIR /app
EXPOSE 5001
COPY --from=build-env /app/src/out .
# Copy test results to the final image as well??
# COPY --from=build-env /app/test/test_results.xml .
ENTRYPOINT ["dotnet", "DataService.dll"]
One approach that I have taken is to uncomment the line COPY --from=build-env /app/test/test_results.xml . near the end of the Dockerfile. This puts test_results.xml in my final image. I can then extract the results and remove test_results.xml from the final image using the following PowerShell script.
$id=$(docker create dataservice)                              # create (but do not start) a container from the image
docker cp ${id}:app/test_results.xml ./test/test_results.xml # copy the test results out to the host
docker start $id
docker exec $id rm -rf /app/test_results.xml                  # delete the results file inside the container
docker commit $id dataservice                                 # re-create the image without the results file
docker rm -vf $id                                             # clean up the temporary container
This however seems ugly and I am wondering is there a cleaner way to do it.
I was hoping that there was a way to mount a volume during docker build but it does not appear that this is going to be supported in the official Docker.
I am looking now at creating a separate image, solely for the unit tests.
Not sure if there is a recommended way of achieving what I want.
Thanks for your question - I needed to solve the same thing.
I added a separate container stage based on the results of the build. The tests and their output are all handled in there, so they never reach the final container. In other words, build-env is used to build, an intermediate test container is based on that build-env image, and the final image is based on the runtime container with the build output from build-env copied in.
# ---- Test ----
# run tests and capture results for later use. This use the results of the build stage
FROM build AS test
# Use a label so we can later obtain this container from the multi-stage build
LABEL test=true
WORKDIR /
# Store test results in a file that we will later extract
RUN dotnet test --results-directory ../../TestResults/ --logger "trx;LogFileName=test_results.xml" "./src/ProjectNameTests/ProjectNameTests.csproj"
I added a shell script as a next step that then tags the image as project-test.
#!/bin/bash
id=`docker images --filter "label=test=true" -q`
docker tag $id projectname-test:latest
After that, I basically do what you do, which is to use docker cp and get the file out. The difference is that my test results were never in the final image, so I don't touch the final image.
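Concretely, that extraction step looks roughly like this (a sketch; the /TestResults path follows from the --results-directory used above and projectname-test from the tagging script):
mkdir -p TestResults                                                        # make sure the host directory exists
id=$(docker create projectname-test:latest)                                 # create a stopped container from the test image
docker cp $id:/TestResults/test_results.xml ./TestResults/test_results.xml  # copy the results to the host
docker rm -v $id                                                            # remove the temporary container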
Overall I think the correct way to handle tests is probably to create a test image (based on the build image), run it with a mounted volume for the test results, and have it run the unit tests when the container starts. Having a proper image/container would also allow you to run integration tests etc. This article is older but details a similar approach: https://blogs.infosupport.com/build-deploy-test-aspnetcore-docker-linux-tfs2015/
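A rough sketch of that idea, staying with the layout above (the test-runner stage name and the /TestResults mount point are placeholders):
# Extra stage that only runs the unit tests when a container is started from it
FROM build AS test-runner
WORKDIR /
ENTRYPOINT ["dotnet", "test", "--results-directory", "/TestResults", "--logger", "trx;LogFileName=test_results.xml", "./src/ProjectNameTests/ProjectNameTests.csproj"]
It could then be built and run with something like:
docker build --target test-runner -t projectname-test-runner .
docker run --rm -v "$(pwd)/TestResults:/TestResults" projectname-test-runner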
This is an old question, but I ended up here looking for the same thing so will drop my tuppence worth in here for posterity!
Nowadays you can set DOCKER_BUILDKIT=1 to make Docker use BuildKit to build your images, which is quicker, caches better, and has an --output option which solves this problem for you. I've got a golang-based example below, but this should work equally well for pretty much anything.
# syntax=docker/dockerfile:1.2
FROM golang:1.17 as deps
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY command ./command
COPY internal ./internal
# Tests
FROM deps as test
RUN --mount=type=cache,target=/root/.cache go test -v ./... 2>&1 | tee unit.out
FROM scratch as test-output
COPY --from=test /src/unit.out /unit.out
# Build
FROM deps as build
RUN your build steps
This dockerfile has a bunch of stages:
copy across requirements spec and install dependencies (for caching)
run your tests (re-use deps stage to avoid repeating ourselves)
copy your test results to a scratch image
build your image / binary / whatever you'd normally do from deps
Now if you set DOCKER_BUILDKIT=1 somewhere and run:
docker build .
then docker builds your code / image but DOES NOT RUN THE TESTS! Because your build stage isn't linked to the test stage at all, it bypasses both test stages entirely. However, you can now use the --target option to select the test-output build stage and the --output option to tell it where on your local disk to copy the result:
docker build --target test-output --output results .
Docker will then build the dependencies (or re-use the cache from the image build if you did that first), run your tests and copy the contents of your scratch image (i.e. your test report) into the results directory. Job done. :)
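For a one-off run you can also set the variable inline rather than exporting it, e.g. (assuming a POSIX-compatible shell):
DOCKER_BUILDKIT=1 docker build --target test-output --output results .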
Edit:
An article using this approach with a .NET app and explaining the whole thing a bit better! https://kevsoft.net/2021/08/09/exporting-unit-test-results-from-a-multi-stage-docker-build.html
I am creating a build with AppVeyor on GitHub, using devtool, for https://github.com/atom/atom-keymap. Although the Travis build succeeds, the AppVeyor build still fails with an error!
I do not know the real root cause, but I think I can help with a way to troubleshoot this. Basically you can connect to the AppVeyor VM via RDP and debug it. Here are the steps:
Insert - ps: $blockRdp = $true; iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/appveyor/ci/master/scripts/enable-rdp.ps1')) before - npm run ci in your appveyor.yml file (see the appveyor.yml sketch at the end of this answer).
In RDP run the following:
cd c:\projects\atom-keymap
npm run compile
npm run lint
This will bring you to the state to get a repro and debug (because npm run ci is npm run compile && npm run lint && npm run test).
To get a repro, run npm run test.
To debug the problem, do something like this:
devtool --console node_modules/mocha/bin/_mocha --colors spec/helpers/setup.js spec/* --break
(this will let you debug step-by-step)
or
devtool --console node_modules/mocha/bin/_mocha --colors spec/helpers/setup.js spec/* --watch
(this will let you see a lot of error details)
This is the same as what npm run test does, but without the switch that quits on error and with debug options added.
I went down this route myself up to this point, but my limited knowledge of this npm module did not let me dig down to the root cause.
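For the first step above, the relevant part of appveyor.yml would end up looking roughly like this (a sketch; test_script here stands in for whichever section currently runs npm run ci):
test_script:
  - ps: $blockRdp = $true; iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/appveyor/ci/master/scripts/enable-rdp.ps1'))
  - npm run ci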
I have a build script that is triggered by Jenkins.
First Jenkins will get the latest version from the repo (Bitbucket) and then it will initiate the build script.
Now, if the build script is started in 'release' mode, the script will make changes to some files (to keep track of version numbers and build dates, and to create a tag on the repo).
These changes need to be pushed back up to the remote repo.
How do I implement this?
The build takes a couple of minutes, so if someone pushes to the remote repo during the build, then the push will fail because a merge is needed first. And if nobody pushed, a merge would fail because there is nothing to merge...
Consider having Jenkins do its commits in a named branch all its own. This has a lot of advantages, the biggest being that Jenkins never has to worry about someone else pushing a change to the release branch, since only Jenkins will be pushing to it. Your Jenkins build script could look something like this:
hg clone --updaterev release http://path/to/repo
hg merge default || true # merge the latest from default
...build here...
hg commit -m "Auto commit from Jenkins for build $BUILDNUMBER" || true
hg tag build_$BUILDNUMBER
hg push
With a setup like that you're getting some advantages:
failed builds aren't creating new commits
Jenkins's push will always succeed
Jenkins's tag commits are in the 'release' branch, but still accessible from the default branch
Notice that the || true tells Jenkins not to fail the build on a non-zero exit code from merge (when there is nothing to merge) or from commit (when there is nothing to commit).
Instead of cloning fresh each time you could just do hg pull; hg update -C release, but for repos of reasonable size I like to start with a guaranteed clean slate.
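That pull-based variant would look roughly like this (a sketch; it assumes the Jenkins workspace already holds a clone and reuses the same commit, tag and push steps as above):
hg pull                  # fetch new changesets without touching the working copy
hg update -C release     # discard local changes and switch to the release branch
hg merge default || true # merge the latest from default
...build here...
hg commit -m "Auto commit from Jenkins for build $BUILDNUMBER" || true
hg tag build_$BUILDNUMBER
hg push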