Is there any way to only run a matrix build in Travis on deploy? Right now we use the same .travis.yml file for test and deploy, and a matrix build (and thus two workers) is triggered in both cases. I can't find a way to run the build as a matrix only when we are deploying and not when we are running tests (or to only use a matrix during the deploy process). The main reason I'd like to do this is so that I don't trigger extra builds when PRs are created and I just need the test build to run.
I also couldn't find a simple way we could run a single build for npm install/npm test and then spin off two separate workers/a matrix for the "deploy" process, which would also solve the problem.
Here's a snip of my current .travis.yml file:
language: node_js
node_js: 4.2.1
env:
  global:
    - APP_NAME=example
  matrix:
    - CF_DOMAIN=example1.net CF_TARGET=https://target1.com APP_NAME=${APP_NAME}-1
    - CF_DOMAIN=example2.net CF_TARGET=https://target2.com APP_NAME=${APP_NAME}-2
branches:
  only:
    - master
deploy:
  - provider: script
    skip_cleanup: true
    script: node_modules/.bin/deploy.sh
    on:
      branch: master
It might also work for us to only run a matrix build on a push hook, but not on a PR.
A similar issue was posted on GitHub for Travis, where it was suggested to use two separate .travis.yml files:
https://github.com/travis-ci/travis-ci/issues/2778
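For the push-vs-PR case, one direction that might work is Travis build stages combined with conditional jobs: a single non-matrix test job that runs everywhere, plus one deploy job per target that only runs on pushes to master, instead of the global env matrix. This is only a rough, untested sketch (the stage layout, the if conditions, and script: skip are my assumptions, not something from the question):

jobs:
  include:
    # one plain test job, runs for PRs and pushes
    - stage: test
      script: npm test
    # one deploy job per target, only for pushes to master
    - stage: deploy
      if: branch = master AND type = push
      env: CF_DOMAIN=example1.net CF_TARGET=https://target1.com APP_NAME=example-1
      script: skip
      deploy:
        provider: script
        skip_cleanup: true
        script: node_modules/.bin/deploy.sh
        on:
          branch: master
    - stage: deploy
      if: branch = master AND type = push
      env: CF_DOMAIN=example2.net CF_TARGET=https://target2.com APP_NAME=example-2
      script: skip
      deploy:
        provider: script
        skip_cleanup: true
        script: node_modules/.bin/deploy.sh
        on:
          branch: master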
Related
I have been following this official documentation on how to get parallel builds running in AWS CodeBuild using a batch matrix. Right now my buildspec.yml is structured like this:
## buildspec.yml
version: 0.2
batch:
  fast-fail: false
  build-matrix:
    dynamic:
      env:
        variables:
          INSTANCES:
            - A
          WORKERS:
            - 1
            - 2
phases:
  install:
    commands:
      - npm ci
  build:
    commands:
      - npx cypress run <params>
In this example we run two parallel workers, though IRL we run 11.
This works well for one use case, where we check out the code and run the Cypress tests against the pre-defined URL of one of our test environments. However, we have another use-case where we need to build the application within the CodeBuild container, start a server on localhost, and then run the Cypress tests against that.
One option, of course, is just to build the app 11 times. However, since CodeBuild pricing is by the machine minute, I'd rather build once instead of 11 times. I also don't like the idea of technically testing 11 different builds (albeit all built off the same commit).
What I'm looking for is behavior similar to Docker's multi-stage build functionality, where you can build the app once in one environment, and then copy that artifact to 11 separate envs, where the parallel tests will then run. Is functionality like this going to be possible within CodeBuild itself, or will I have to do something like have two CodeBuild builds and upload the artifact to S3? Any and all ideas welcome.
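One build-once/test-many shape that might work inside CodeBuild itself is a batch build-graph instead of a build-matrix: a single build job produces the artifact and pushes it to S3, and the worker jobs depend on it and pull that same artifact before running their test slice. The sketch below is untested and full of assumptions (the bucket name my-artifact-bucket, the build/ output directory, the port, and the use of npx serve are placeholders, and the explicit S3 copy simply stands in for whatever artifact hand-off you prefer):

version: 0.2
batch:
  fast-fail: false
  build-graph:
    - identifier: build_app
      env:
        variables:
          STAGE: build
    - identifier: worker_1
      depend-on:
        - build_app
      env:
        variables:
          STAGE: test
          WORKER: "1"
    - identifier: worker_2
      depend-on:
        - build_app
      env:
        variables:
          STAGE: test
          WORKER: "2"
phases:
  install:
    commands:
      - npm ci
  build:
    commands:
      - |
        if [ "$STAGE" = "build" ]; then
          # build once and stash the result, keyed by commit
          npm run build
          tar czf app.tar.gz build/
          aws s3 cp app.tar.gz "s3://my-artifact-bucket/${CODEBUILD_RESOLVED_SOURCE_VERSION}/app.tar.gz"
        else
          # every worker pulls the same artifact, serves it locally, and runs its tests
          aws s3 cp "s3://my-artifact-bucket/${CODEBUILD_RESOLVED_SOURCE_VERSION}/app.tar.gz" .
          tar xzf app.tar.gz
          npx serve -l 3000 build/ &
          npx cypress run <params>
        fi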
Cloud Build fails with a timeout error (I'm trying to deploy to Cloud Run with Prophet). Therefore I'm trying to split the Dockerfile into two (saving the image in between in case it fails). I'd split the Dockerfile like this:
Dockerfile_one: python + prophet's dependencies
Dockerfile_two: image_from_Dockerfile_one + prophet + other dependencies
What should cloudbuild.yaml look like to:
If there is a previously built image available, skip the step; otherwise run the step with Dockerfile_one and save the image
Use the image from step (1), add more dependencies to it, and save the image for deployment
Here is what cloudbuild.yaml looks like right now:
steps:
  # create gcr source directory
  - name: 'bash'
    args:
      - '-c'
      - |
        echo 'Creating gcr_source directory for ${_GCR_NAME}'
        mkdir _gcr_source
        cp -r cloudruns/${_GCR_NAME}/. _gcr_source
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/${_GCR_NAME}', '.']
    dir: '_gcr_source'
  # Push the container image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/${_GCR_NAME}']
  # Deploy container image to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: gcloud
    args:
      - run
      - deploy
      - ${_GCR_NAME}
      - --image=gcr.io/$PROJECT_ID/${_GCR_NAME}
Thanks a lot!
You need to have 2 pipelines.
The first one creates the base image. That way, you can trigger it every time you need to rebuild this base image, possibly with a different lifecycle than your application's lifecycle. Something similar to this:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/<PROJECT_ID>/base-image', '-f', 'DOCKERFILE_ONE', '.']
images: ['gcr.io/<PROJECT_ID>/base-image']
Then, in your second Dockerfile, start from the base image and use a second Cloud Build pipeline to build, push, and deploy it (as you do in the last 3 steps of your question):
FROM gcr.io/<PROJECT_ID>/base-image
COPY .....
....
...
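That second pipeline could look roughly like the following sketch, which just mirrors the last three steps from the question (DOCKERFILE_TWO is an assumed file name for the second Dockerfile, and Cloud Run may additionally need flags such as --region depending on your setup):

steps:
  # Build the application image on top of the prebuilt base image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/${_GCR_NAME}', '-f', 'DOCKERFILE_TWO', '.']
  # Push it to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/${_GCR_NAME}']
  # Deploy the new image to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: gcloud
    args:
      - run
      - deploy
      - ${_GCR_NAME}
      - --image=gcr.io/$PROJECT_ID/${_GCR_NAME}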
Not the answer, but a workaround: if anybody has the same issue, using Python 3.8 instead of 3.9 worked for Cloud Build.
This is what the Dockerfile looks like:
RUN pip install --upgrade pip wheel setuptools
# Install pystan
RUN pip install "Cython>=0.22"
RUN pip install "numpy>=1.7"
RUN pip install pystan==2.19.1.1
# Install other prophet dependencies
RUN pip install -r requirements.txt
RUN pip install prophet
Though figuring out how to iteratively build images for Cloud Run would still be really great.
Why did your Cloud Build fail with a timeout error?
When building images in Docker, it is important to keep the image size down. Often, multiple Dockerfiles are created to work within the image size constraint. In your case, you were not able to reduce the image size and include only what is needed.
What can be done to rectify it?
As per this documentation, multi-stage builds (introduced in Docker 17.05) allow you to build your app in a first "build" container and use the result in another container, while using the same Dockerfile.
You use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don't want in the final image. To show how this works, follow this link.
You only need a single Dockerfile.
The result is the same tiny production image as before, with a significant reduction in complexity. You don't need to create any intermediate images and you don't need to extract any artifacts to your local system at all.
How does it work?
You can name your build stages. By default, the stages are not named, and you refer to them by their integer number, starting with 0 for the first FROM instruction. However, you can name your stages by adding an AS to the FROM instruction.
When you build your image, you don't necessarily need to build the entire Dockerfile including every stage. You can specify a target build stage.
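For example (an illustrative command of my own, not part of the quoted documentation, assuming a stage named builder as in the Dockerfile shown further below):

docker build --target builder -t hello:build-stage .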
When using multi-stage builds, you are not limited to copying from stages you created earlier in your Dockerfile. You can use the COPY --from instruction to copy from a separate image, either using the local image name, a tag available locally or on a Docker registry, or a tag ID.
You can pick up where a previous stage left off by referring to it when using the FROM directive.
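Illustrative lines of my own (not part of the quoted documentation):

# copy a file straight from an external image instead of an earlier stage
COPY --from=nginx:latest /etc/nginx/nginx.conf /nginx.conf
# start a new stage from where the "builder" stage left off
FROM builder AS tester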
In the Google documentation, there is an example of a Dockerfile which uses multi-stage builds. The hello binary is built in a first container and injected into a second one. Because the second container is based on scratch, the resulting image contains only the hello binary and not the source file and object files needed during the build.
FROM golang:1.10 as builder
WORKDIR /tmp/go
COPY hello.go ./
RUN CGO_ENABLED=0 go build -a -ldflags '-s' -o hello
FROM scratch
CMD [ "/hello" ]
COPY --from=builder /tmp/go/hello /hello
Here is a tutorial to understand how multi-stage builds work.
So far I have been using my own PowerShell build script to build my code. Why? Because I like to have a binary log when a diag build is requested and have it attached to the build as an artifact.
But now that I use YAML, I would like to use more of the standard tasks available with Azure DevOps, namely the DotNet Build task. But I do not see how I can make it generate the binary log and attach it to the build without writing a custom script anyway. I want it to work transparently: triggering a diag build (i.e. System.Debug is true) should do two things:
Pass -bl:TheBinaryLogFilePath to the dotnet build
Attach TheBinaryLogFilePath to the build as an artifact.
Is it possible in a straightforward manner without writing a custom script (otherwise not worth using the standard task anyway)?
You don't have control over what changes when you do a debug build, and this is probably something that won't ever happen automatically, because I don't see a reason why/how Microsoft would implement something that alters how my apps are being built.
As for the standard task, you can pass additional arguments to it using the arguments: property.
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/build/dotnet-core-cli?view=azure-devops#yaml-snippet
Then you'd have to instruct the publish artifacts task to pick up that binary log path as well. That's it.
If you want to have conditions, fine, use conditions:
- ${{ if eq(variables['System.Debug'], 'true') }}:
  - task: DotNetCoreCLI@2
    displayName: build
    inputs:
      command: build
      publishWebProjects: false
      zipAfterPublish: false
      modifyOutputPath: false
      projects: xxx
      arguments: |
        -bl:TheBinaryLogFilePath
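For the second part, a publish step gated by the same condition could look something like this sketch (TheBinaryLogFilePath and the artifact name binlog are placeholders of mine, not from the answer):

- ${{ if eq(variables['System.Debug'], 'true') }}:
  - task: PublishBuildArtifacts@1
    displayName: publish msbuild binary log
    inputs:
      PathtoPublish: TheBinaryLogFilePath
      ArtifactName: binlog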
Currently our company is creating individual software for B2B customers.
Some applications can be used for multiple customers.
Usually we can host the application in the cloud and deploy everything with Docker.
Running a GitLab pipeline and deploying etc. is fine for that.
Now we got some customers who rely on an external installation.
Since some of them still use Windows Server (2008, even), I cannot install a proper Docker environment there, and we need to install an Apache Tomcat and run the application inside Tomcat.
Question: How to deal with that? I would need a pipeline to create a docker image and a war file.
Simply create two completely independent pipelines?
Handle everything in a single pipeline?
Our current .gitlab-ci.yml file for the .war:
image: maven:latest

variables:
  MAVEN_CLI_OPTS: "-s settings.xml -q -B"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"

cache:
  paths:
    - .m2/repository/
    - target/

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - mvn $MAVEN_CLI_OPTS compile

test:
  stage: test
  script:
    - mvn $MAVEN_CLI_OPTS test

install:
  stage: deploy
  script:
    - mvn $MAVEN_CLI_OPTS install
  artifacts:
    name: "datahub-$CI_COMMIT_REF_SLUG"
    paths:
      - target/*.war
Using two separate delivery pipelines is preferable: you are dealing with two very different installation processes, and you need to be sure which one is running for a given client.
Having two separate GitLab pipelines allows said client to choose the right one.
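One way to get two separate delivery pipelines from the same repository would be to gate two deploy jobs on a pipeline variable, so each pipeline run only performs one delivery flavor. A rough sketch (the DELIVERY variable, the image tag, and the docker-in-docker setup are my assumptions, not from the answer):

docker-image:
  stage: deploy
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  rules:
    - if: '$DELIVERY == "docker"'

war-package:
  stage: deploy
  script:
    - mvn $MAVEN_CLI_OPTS install
  artifacts:
    name: "datahub-$CI_COMMIT_REF_SLUG"
    paths:
      - target/*.war
  rules:
    - if: '$DELIVERY == "war"'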
GitHub's Google Cloud Build integration does not detect a cloudbuild.yaml or Dockerfile if it is not in the root of the repository.
When using a monorepo that contains multiple cloudbuild.yamls, how can GitHub's Google Cloud Build integration be configured to detect the correct cloudbuild.yaml?
File paths:
services/api/cloudbuild.yaml
services/nginx/cloudbuild.yaml
services/websocket/cloudbuild.yaml
Cloud Build integration output:
You can do this by adding a cloudbuild.yaml in the root of your repository with a single gcr.io/cloud-builders/gcloud step. This step should:
Traverse each subdirectory or use find to locate additional cloudbuild.yaml files.
For each found cloudbuild.yaml, fork and submit a build by running gcloud builds submit.
Wait for all the forked gcloud commands to complete.
There's a good example of one way to do this in the root cloudbuild.yaml within the GoogleCloudPlatform/cloud-builders-community repo.
If we strip out the non-essential parts, basically you have something like this:
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        for d in */; do
          config="${d}cloudbuild.yaml"
          if [[ ! -f "${config}" ]]; then
            continue
          fi
          echo "Building $d ... "
          (
            gcloud builds submit $d --config=${config}
          ) &
        done
        wait
We are migrating to a mono-repo right now, and I haven't found any CI/CD solution that handles this well.
The key is to detect not only changes, but also any services that depend on those changes. Here is what we are doing:
Requiring every service to have a MAKEFILE with a build command.
Putting a cloudbuild.yaml at the root of the mono repo
We then run a custom build step with this little tool (old but still seems to work), https://github.com/jharlap/affected, which lists out all packages that have changed and all packages that depend on those packages, etc.
Then the shell script runs make build on any service that is affected by the change, roughly as sketched below.
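A rough illustration of that glue script (hypothetical; the real output format of the affected tool and the directory layout will differ):

# affected_services.txt is assumed to hold one affected service directory per line
while read -r svc; do
  echo "Rebuilding ${svc}"
  make -C "${svc}" build
done < affected_services.txt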
So far it is working well, but I totally understand if this doesn't fit your workflow.
Another option many people use is Bazel. Not the most simple tool, but especially great if you have many different languages or build processes across your mono repo.
You can create a build trigger for your repository. When setting up a trigger with cloudbuild.yaml for build configuration, you need to provide the path to the cloudbuild.yaml within the repository.
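For example, a per-service trigger could be created along these lines (a sketch only; the repo owner/name, branch pattern, and the beta command group are assumptions that may vary with your gcloud version):

gcloud beta builds triggers create github \
  --repo-owner=my-org \
  --repo-name=my-monorepo \
  --branch-pattern='^master$' \
  --build-config=services/api/cloudbuild.yaml \
  --included-files='services/api/**'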