Can I build app in CodeBuild only once, and then run parallel Cypress tests on it using a build-matrix? - amazon-web-services

I have been following this official documentation on how to get parallel builds running in AWS CodeBuild using a batch matrix. Right now my buildspec.yml is structured like this:
## buildspec.yml
version: 0.2
batch:
  fast-fail: false
  build-matrix:
    dynamic:
      env:
        variables:
          INSTANCES:
            - A
          WORKERS:
            - 1
            - 2
phases:
  install:
    commands:
      - npm ci
  build:
    commands:
      - npx cypress run <params>
In this example we run two parallel workers, though IRL we run 11.
This works well for one use case, where we check out the code and run the Cypress tests against the pre-defined URL of one of our test environments. However, we have another use-case where we need to build the application within the CodeBuild container, start a server on localhost, and then run the Cypress tests against that.
One option, of course, is just to build the app 11 times. However, since CodeBuild pricing is by the machine minute, I'd rather build once instead of 11 times. I also don't like the idea of technically testing 11 different builds (albeit all built off the same commit).
What I'm looking for is behavior similar to Docker's multi-stage build functionality, where you can build the app once in one environment, and then copy that artifact to 11 separate envs, where the parallel tests will then run. Is functionality like this going to be possible within CodeBuild itself, or will I have to do something like have two CodeBuild builds and upload the artifact to S3? Any and all ideas welcome.
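One direction worth exploring: besides build-matrix, CodeBuild batch also supports a build-graph mode, where individual builds declare dependencies with depend-on, so a single build job can run first and the test workers can fan out afterwards, fetching the built app from S3. The sketch below is untested; the identifiers, the ROLE/WORKER variables, and the bucket name are all made up:

```yaml
## buildspec.yml - hedged sketch, build-once then fan out via build-graph
version: 0.2
batch:
  fast-fail: false
  build-graph:
    - identifier: build_app
      env:
        variables:
          ROLE: build
    - identifier: test_worker_1
      depend-on:
        - build_app
      env:
        variables:
          ROLE: test
          WORKER: "1"
    - identifier: test_worker_2
      depend-on:
        - build_app
      env:
        variables:
          ROLE: test
          WORKER: "2"
phases:
  build:
    commands:
      # One buildspec serves both roles; branch on the ROLE variable.
      # "my-artifact-bucket" is a placeholder.
      - |
        if [ "$ROLE" = "build" ]; then
          npm ci && npm run build
          aws s3 cp --recursive ./dist "s3://my-artifact-bucket/$CODEBUILD_RESOLVED_SOURCE_VERSION/"
        else
          aws s3 cp --recursive "s3://my-artifact-bucket/$CODEBUILD_RESOLVED_SOURCE_VERSION/" ./dist
          npm ci && npx cypress run <params>
        fi
```

Keying the S3 prefix on `CODEBUILD_RESOLVED_SOURCE_VERSION` ensures all 11 workers test the exact artifact built from that commit.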

Related

How to acquire binary log with the standard Azure Devops DotNet build task and have it attached to the build as an artifact with YAML build?

So far I have been using my own PowerShell build script to build my code. Why? Because I like to have a binary log when a diag build is requested, attached to the build as an artifact.
But now that I use YAML I would like to use more of the standard tasks available in Azure DevOps, namely the DotNet Build task. But I do not see how I can make it generate the binary log and attach it to the build without writing a custom script anyway. I want it to work transparently - triggering a diag build (i.e. System.Debug is true) should do two things:
Pass -bl:TheBinaryLogFilePath to the dotnet build
Attach TheBinaryLogFilePath to the build as an artifact.
Is it possible in a straightforward manner without writing a custom script (otherwise not worth using the standard task anyway)?
You don't have control over what changes when you do a debug build, and this is probably something that won't ever happen automatically, because I don't see a reason why/how Microsoft would implement something that alters how my apps are being built.
As for the standard task, you can pass additional arguments to it using the arguments: property.
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/build/dotnet-core-cli?view=azure-devops#yaml-snippet
Then you'd have to instruct the publish-artifacts task to pick up that binary log path as well. That's it.
If you want to have conditions - fine, use conditions:
- ${{ if eq(variables['System.Debug'], 'true') }}:
  - task: DotNetCoreCLI@2
    displayName: build
    inputs:
      command: build
      publishWebProjects: false
      zipAfterPublish: false
      modifyOutputPath: false
      projects: xxx
      arguments: |
        -bl:TheBinaryLogFilePath
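To actually attach the log, the pipeline would also need a publish step gated by the same condition. A hedged sketch using the standard PublishBuildArtifacts task - the path keeps the answer's `TheBinaryLogFilePath` placeholder, and the artifact name is made up:

```yaml
# Hedged sketch: publish the binary log produced by the -bl argument.
- ${{ if eq(variables['System.Debug'], 'true') }}:
  - task: PublishBuildArtifacts@1
    displayName: publish binlog
    inputs:
      PathtoPublish: TheBinaryLogFilePath
      ArtifactName: binlog
```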

How can I improve jenkins performance with aws codebuild to build big java artifacts and docker images?

Our Jenkins is set up in AWS and we have not managed to use build agents (slaves). Since the platform is big and some artifacts contain many others, our Jenkins reaches its limits when multiple developers commit to different repositories and it is forced to run multiple jobs at the same time.
The aim is to:
- Stay with Jenkins, since our processes are documented based on it and we use many plugins, e.g. test result summary and GitHub integration
- Run jobs in CodeBuild and get feedback in Jenkins to improve performance
Are there best practices for this?
We did the following steps to build big artifacts outside of Jenkins:
- Install the Jenkins CodeBuild plugin
- Create a Jenkins pipeline
- Store the settings.xml for the Maven build in S3
- Store access credentials in Systems Manager parameters for use in CodeBuild and Maven
- Create a CodeBuild project with the necessary permissions and the following functionality:
  - Get settings.xml from S3
  - Run Maven with the necessary access data
  - Store test results in S3
- Create a Jenkinsfile with the following functionality:
  - Get the commit ID and run CodeBuild with it
  - Get the generated test result files from S3 and pass them to Jenkins
  - Delete the generated files from S3
  - Pass the files to Jenkins to show the test results
With this approach we managed to reduce the runtime to 5 mins.
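The CodeBuild side of the steps above can be sketched as a buildspec roughly like this. This is an illustration, not the actual project: the bucket names and report paths are placeholders:

```yaml
# Hedged sketch of the CodeBuild project's buildspec.
# "my-build-config" and "my-test-results" are hypothetical bucket names.
version: 0.2
phases:
  pre_build:
    commands:
      # Fetch the Maven settings stored in S3
      - aws s3 cp s3://my-build-config/settings.xml ./settings.xml
  build:
    commands:
      - mvn -s settings.xml clean verify
  post_build:
    commands:
      # Push test reports to S3 so the Jenkinsfile can pull them back
      - aws s3 cp --recursive target/surefire-reports "s3://my-test-results/$CODEBUILD_BUILD_ID/"
```

Keying the results prefix on `CODEBUILD_BUILD_ID` lets the Jenkinsfile fetch exactly the reports for the build it triggered.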
The next challenge was to build an Angular application on top of a Java microservice, create a Docker image and push it to different environments. This job ran around 25 minutes in Jenkins.
We did the following steps to build the Docker images outside of Jenkins:
- Install the Jenkins CodeBuild plugin
- Create a Jenkins pipeline
- Store the settings.xml for the Maven build in S3
- Store access credentials in Systems Manager parameters for use in CodeBuild and Maven
- Create a CodeBuild project with the necessary permissions and the following functionality:
  - Get settings.xml from S3
  - Log in to ECR in all environments
  - Build the Angular app
  - Build the Java app
  - Copy the files needed for the Docker build
  - Build the Docker image
  - Push it to all environments
- Create a Jenkinsfile with the following functionality:
  - Get the branch names of both repositories to build the Docker image from
  - Get each branch's latest commit ID
  - Call the CodeBuild project with both commit IDs (note that the main repository needs the buildspec)
With this approach we managed to reduce the runtime to 5 mins.
Sample code in: https://github.com/felipeloha/samples/tree/master/jenkins-codebuild

How to serve a Java application as Docker container and .war file?

Currently our company is creating individual software for B2B customers.
Some applications can be used for multiple customers.
Usually we can host the application in the cloud and deploy everything with Docker.
Running a GitLab pipeline and deploying etc. is fine for that.
Now we got some customers who rely on an external installation.
Since some of them still use Windows Server (2008, though), I cannot install a proper Docker environment there, so we need to install an Apache Tomcat and run the application inside it.
Question: How to deal with that? I would need a pipeline to create a docker image and a war file.
Simply create two completely independent pipelines?
Handle everything in a single pipeline?
Our current gitlab-ci.yml file for the .war:
image: maven:latest

variables:
  MAVEN_CLI_OPTS: "-s settings.xml -q -B"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"

cache:
  paths:
    - .m2/repository/
    - target/

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - mvn $MAVEN_CLI_OPTS compile

test:
  stage: test
  script:
    - mvn $MAVEN_CLI_OPTS test

install:
  stage: deploy
  script:
    - mvn $MAVEN_CLI_OPTS install
  artifacts:
    name: "datahub-$CI_COMMIT_REF_SLUG"
    paths:
      - target/*.war
Using two separate delivery pipelines is preferable: you are dealing with two very different installation processes, and you need to be sure which one is running for a given client.
Having two separate GitLab pipelines allows said client to choose the right one.
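Whichever way the split is organized, the Docker delivery path only needs one extra packaging job next to the existing .war jobs. A hedged sketch (the `DEPLOY_TARGET` variable and job name are made up; `CI_REGISTRY_*` and `CI_COMMIT_SHORT_SHA` are standard GitLab CI variables, and `rules:` requires a reasonably recent GitLab):

```yaml
# Hedged sketch: a separate packaging job for the Docker delivery path,
# gated on a made-up DEPLOY_TARGET variable set per pipeline.
package:docker:
  stage: deploy
  image: docker:latest
  services:
    - docker:dind
  rules:
    - if: '$DEPLOY_TARGET == "docker"'
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```

Clients that need the .war simply run pipelines where `DEPLOY_TARGET` is unset, so this job never appears.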

GitHub Cloud Build Integration with multiple cloudbuild.yamls in monorepo

GitHub's Google Cloud Build integration does not detect a cloudbuild.yaml or Dockerfile if it is not in the root of the repository.
When using a monorepo that contains multiple cloudbuild.yamls, how can GitHub's Google Cloud Build integration be configured to detect the correct cloudbuild.yaml?
File paths:
services/api/cloudbuild.yaml
services/nginx/cloudbuild.yaml
services/websocket/cloudbuild.yaml
Cloud Build integration output:
You can do this by adding a cloudbuild.yaml in the root of your repository with a single gcr.io/cloud-builders/gcloud step. This step should:
Traverse each subdirectory or use find to locate additional cloudbuild.yaml files.
For each found cloudbuild.yaml, fork and submit a build by running gcloud builds submit.
Wait for all the forked gcloud commands to complete.
There's a good example of one way to do this in the root cloudbuild.yaml within the GoogleCloudPlatform/cloud-builders-community repo.
If we strip out the non-essential parts, basically you have something like this:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
entrypoint: 'bash'
args:
- '-c'
- |
for d in */; do
config="${d}cloudbuild.yaml"
if [[ ! -f "${config}" ]]; then
continue
fi
echo "Building $d ... "
(
gcloud builds submit $d --config=${config}
) &
done
wait
We are migrating to a mono-repo right now, and I haven't found any CI/CD solution that handles this well.
The key is to not only detect changes, but also any services that depend on that change. Here is what we are doing:
- Requiring every service to have a Makefile with a build command.
- Putting a cloudbuild.yaml at the root of the mono repo.
- Running a custom build step with this little tool (old but still seems to work): https://github.com/jharlap/affected - it lists all packages that have changed and all packages that depend on those packages, etc.
- The shell script then runs make build on any service that is affected by the change.
So far it is working well, but I totally understand if this doesn't fit your workflow.
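The last two steps above can be sketched as a small shell loop. This is a self-contained illustration, not the actual script: the `affected` function below is a stub standing in for the real tool, which prints one affected package directory per line:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stub for the `affected` tool: the real one inspects the git diff and
# prints one changed/affected package directory per line. Stubbed here
# so the sketch is self-contained.
affected() {
  printf '%s\n' services/api services/worker
}

# Run the per-service build for every affected service directory.
build_affected() {
  affected | while read -r svc; do
    echo "building ${svc}"
    # make -C "${svc}" build   # the real per-service build command
  done
}

build_affected
```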
Another option many people use is Bazel. Not the most simple tool, but especially great if you have many different languages or build processes across your mono repo.
You can create a build trigger for your repository. When setting up a trigger with cloudbuild.yaml for build configuration, you need to provide the path to the cloudbuild.yaml within the repository.

Use matrix build in Travis only on deploy

Is there any way to only run a matrix build in travis on deploy? Right now we use the same .travis.yml file for test and deploy, and a matrix build (and thus two workers) is triggered in both cases. I can't find a way to only run the build as a matrix in the case in which we are deploying and not when we are running tests (or perhaps to only use a matrix during the deploy process). The main reason I'd like to do this is so that I don't trigger extra builds when PRs are created and I just need the test build to run.
I also couldn't find a simple way to run a single build for npm install/npm test and then spin off two separate workers (a matrix) for the "deploy" process, which would also solve the problem.
Here's a snip of my current .travis.yml file:
language: node_js
node_js: 4.2.1
env:
  global:
    - APP_NAME=example
  matrix:
    - CF_DOMAIN=example1.net CF_TARGET=https://target1.com APP_NAME=${APP_NAME}-1
    - CF_DOMAIN=example2.net CF_TARGET=https://target2.com APP_NAME=${APP_NAME}-2
branches:
  only:
    - master
deploy:
  - provider: script
    skip_cleanup: true
    script: node_modules/.bin/deploy.sh
    on:
      branch: master
It might also work for us to only run a matrix build on a push hook, but not on a PR.
There was a similar issue posted in GitHub for Travis. It was suggested to use two separate .travis.yml files.
https://github.com/travis-ci/travis-ci/issues/2778
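Travis later added build stages and conditional jobs, which can express this in a single .travis.yml: one plain test job runs for every build (including PRs), and the two deploy jobs only materialize on master pushes. A hedged, untested sketch reusing the env values from the question:

```yaml
# Hedged sketch: single test job, matrix only in the deploy stage.
language: node_js
node_js: 4.2.1
jobs:
  include:
    - stage: test
      script: npm test
    - stage: deploy
      if: branch = master AND type = push
      env: CF_DOMAIN=example1.net CF_TARGET=https://target1.com APP_NAME=example-1
      script: node_modules/.bin/deploy.sh
    - stage: deploy
      if: branch = master AND type = push
      env: CF_DOMAIN=example2.net CF_TARGET=https://target2.com APP_NAME=example-2
      script: node_modules/.bin/deploy.sh
```

Because the `if:` conditions exclude pull requests, a PR only triggers the single test job, which was the original goal.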