Can I define a combination of steps in a cloudbuild.yaml? - google-cloud-platform

In a GitHub Workflow I can define a strategy key and loop through all combinations of a matrix. Here is an example for a CI pipeline of a Node.js app.
name: CI
on:
  pull_request:
jobs:
  test:
    strategy:
      matrix:
        node: [16, 14]
        os: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    name: Test node@${{ matrix.node }} on ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node }}
      - run: npm ci
      - run: npm test
Can I achieve the same thing in a cloudbuild.yaml file? I haven't found any mention of this looping functionality in the documentation regarding the Build configuration file schema.
I guess I could achieve what I want using user-defined substitutions and calling the same Cloud Build config file multiple times, passing different substitutions each time... but I was wondering if this is the only possible approach. I would rather have all configuration defined in that single cloudbuild.yaml.
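For reference, a minimal sketch of that substitution-based workaround (the substitution name _NODE_VERSION, the step contents and the shell loop are illustrative assumptions, not from the question). The same cloudbuild.yaml is submitted once per combination, with the value passed on the command line:
# cloudbuild.yaml
steps:
  - name: node:${_NODE_VERSION}
    entrypoint: npm
    args: ['ci']
  - name: node:${_NODE_VERSION}
    entrypoint: npm
    args: ['test']
substitutions:
  _NODE_VERSION: '16'   # default; overridden per invocation
and a small shell loop drives the combinations:
# submit the same config once per Node version
for v in 16 14; do
  gcloud builds submit --config cloudbuild.yaml --substitutions _NODE_VERSION="$v" .
done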

Related

How to update CI/CD to use one ROS package (repository) instead of independent repos?

Currently my ROS package doesn't have the functionality to install our own_ros_driver, own_ros_common and node_registry packages. So I have to update the CI/CD to use one ROS package (repository), which I call "system_integration", instead of those independent repos (own_ros_driver, own_ros_common and node_registry). For example, this shows all the places where own_ros_driver is used in the repo; the same search can be used to look for system_integration. This is the cd.yml file:
- name: Clone ROS Driver
  uses: actions/checkout@v2
  with:
    repository: /own_ros_driver
    path: own_ros_driver
    ssh-key: ${{ secrets.DETRABOT_SECRET }}
    clean: true
    ref: 'v1.0.1'
and the ci.yml
    repository: own_ros_driver
    path: deepx_ros_driver
    ssh-key: ${{ secrets.DETRABOT_SECRET }}
    ssh_key: ${{ secrets.DETRABOT_SECRET }}
- name: Downloading with VCS in deepx_ros_driver
  uses: ./actions/vcstool-checkout
Any help on how I can update the CI/CD to use one ROS package (repository) instead of the independent repos would be appreciated.
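For what it's worth, a minimal sketch of what the updated checkout step might look like, assuming the consolidated system_integration repo lives in the same (redacted) organization and is tagged the same way as the driver repo (both are assumptions):
- name: Clone system_integration
  uses: actions/checkout@v2
  with:
    repository: /system_integration   # org prefix redacted, as in the question
    path: system_integration
    ssh-key: ${{ secrets.DETRABOT_SECRET }}
    clean: true
    ref: 'v1.0.1'                      # pin to whichever tag system_integration uses
The remaining work would then be replacing each reference to own_ros_driver, own_ros_common and node_registry in ci.yml and cd.yml with the single system_integration path.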

Can't push Docker manifest file to AWS ECR via GitHub actions, but works via CLI

I have what I would consider a relatively simple GitHub workflow file:
create_manifest_docker_files:
  # needs: [build_amd64_dockerfile, build_arm64_dockerfile]
  env:
    IMAGE_REPO: ${{ secrets.aws_ecr_image_repo }}
    AWS_ACCOUNT_ID: ${{ secrets.aws_account_id }}
    AWS_REGION: ${{ secrets.aws_region }}
    AWS_ACCESS_KEY_ID: ${{ secrets.aws_access_key_id }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.aws_secret_access_key }}
  runs-on: self-hosted
  steps:
    - uses: actions/checkout@v2
    - name: Login to Amazon ECR
      id: login-ecr
      uses: aws-actions/amazon-ecr-login@v1
    - name: Create docker manifest
      run: docker manifest create $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO:latest --amend $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO:latest-amd64 --amend $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO:latest-arm64
    - name: Push the new manifest file to Amazon ECR
      run: docker manifest push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO
Whenever this workflow runs via GitHub Actions, I see the following error:
Run docker manifest push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO
docker manifest push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO
shell: /usr/bin/bash -e {0}
env:
IMAGE_REPO: ***
AWS_ACCOUNT_ID: ***
AWS_REGION: ***
AWS_ACCESS_KEY_ID: ***
AWS_SECRET_ACCESS_KEY: ***
failed to put manifest ***.dkr.ecr.***.amazonaws.com/***:latest: manifest blob unknown: Images with digests '[sha256:a1a4efe0c3d0e7e26398e522e14037acb659be47059cb92be119d176750d3b56, sha256:5d1b00451c1cbf910910d951896f45b69a9186c16e5a92aab74dcc5dc6944c60]' required for pushing image into repository with name '***' in the registry with id '***' do not exist
Error: Process completed with exit code 1.
I'm not quite sure I actually understand the problem here. The previous step, "Create docker manifest", completes successfully, but the "Push the new manifest file to Amazon ECR" step fails with the error above.
When looking in AWS ECR, I only have two images, latest-amd64 and latest-arm64, and neither of their digests matches the values in the error message above.
When exporting those same environment variables to my CLI session and running those commands manually, everything works fine:
root@github-runner:/home/ubuntu/docker-runner# docker manifest create $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO:latest --amend $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO:latest-amd64 --amend $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO:latest-arm64
Created manifest list [obfuscated-from-stackoverflow].dkr.ecr.us-east-1.amazonaws.com/[obfuscated-from-stackoverflow]:latest
root@github-runner:/home/ubuntu/docker-runner# docker manifest push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO
sha256:e4b5cc4cfafca560724fa6c6a5f41a2720a4ccfd3a9d18f90c3091866061a88d
My question is: why would this work from the CLI but not from the GitHub Actions workflow? I have previous runs showing this working fine with the workflow contents above, but now it's failing for some reason. I'm not sure whether the issue is within my ECR repository or something locally messed up on the GitHub runner.
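One way to narrow this down (a debugging sketch, not a confirmed fix) is to compare what the runner's locally-stored manifest list references against what ECR actually holds. docker manifest create keeps the list in a local cache (under ~/.docker/manifests on the runner), so a stale entry left over from an earlier run can reference digests that no longer exist in the repository:
# On the self-hosted runner, inspect the manifest list that was just created
docker manifest inspect $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO:latest

# Compare the referenced digests with what ECR actually holds
aws ecr describe-images --repository-name "$IMAGE_REPO" --region "$AWS_REGION"

# If the digests disagree, remove the cached manifest list and recreate it
docker manifest rm $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$IMAGE_REPO:latest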

Is it possible to allow Dependabot on GitHub to automatically "bump" software to a new version?

Please help this learner out: I get frequent Dependabot alerts on GitHub for "bumping" software versions to a more current one. My issue is that I have to go into each (in my case, Django) app to pull or merge files. It's tedious and time-consuming, even with my limited number of apps. How do professionals manage the process?
Is there a way to let GitHub just bump whatever needs to be bumped (assuming one doesn't mind apps occasionally breaking)?
Yes. You can use GitHub Actions to do this. See the following blog post: Setting up Dependabot with GitHub actions to approve and merge.
As written, the code will only automatically merge minor and patch version changes. It will not merge major version changes, which are potentially breaking. You could remove that check, but doing so is not normally recommended.
You also need to change the following settings on your repo:
Settings -> Actions -> General -> check "Allow GitHub Actions to create and approve pull requests".
Settings -> General -> Pull Requests -> check "Allow auto-merge".
The contents of the GitHub workflow file, "dependabot-approve-and-auto-merge.yml", are:
name: Dependabot Pull Request Approve and Merge
on: pull_request_target
permissions:
  pull-requests: write
  contents: write
jobs:
  dependabot:
    runs-on: ubuntu-latest
    # Checking the actor will prevent your Action run failing on non-Dependabot
    # PRs but also ensures that it only does work for Dependabot PRs.
    if: ${{ github.actor == 'dependabot[bot]' }}
    steps:
      # This first step will fail if there's no metadata and so the approval
      # will not occur.
      - name: Dependabot metadata
        id: dependabot-metadata
        uses: dependabot/fetch-metadata@v1.1.1
        with:
          github-token: "${{ secrets.GITHUB_TOKEN }}"
      # Here the PR gets approved.
      - name: Approve a PR
        run: gh pr review --approve "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      # Finally, this sets the PR to allow auto-merging for patch and minor
      # updates if all checks pass.
      - name: Enable auto-merge for Dependabot PRs
        if: ${{ steps.dependabot-metadata.outputs.update-type != 'version-update:semver-major' }}
        run: gh pr merge --auto --squash "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
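Note that this workflow only acts on pull requests Dependabot has already opened; the version-update PRs themselves are configured separately in .github/dependabot.yml. A minimal sketch, assuming a pip-based Django project (the ecosystem, directory and schedule are assumptions):
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "pip"   # assumes a requirements.txt / Pipfile-based Django project
    directory: "/"             # location of the dependency manifests
    schedule:
      interval: "weekly"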

Pass Variable From GitHub Action to Docker image build

I've been working on setting up a GitHub Actions workflow to build a Docker image. I need to pass environment variables into the image so that my Django project will run correctly. Unfortunately, when I build the image it doesn't receive the values of the variables.
The relevant part of my workflow file:
- name: Build, tag, and push image to AWS ECR
  id: build-image
  env:
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    IMAGE_TAG: ${{ github.sha }}
    aws_ses_access_key_id: ${{ secrets.AWS_SES_ACCESS_KEY_ID }}
    aws_ses_secret_access_key: ${{ secrets.AWS_SES_SECRET_ACCESS_KEY }}
    DATABASE_ENGINE: ${{ secrets.DATABASE_ENGINE }}
    db_host: ${{ secrets.DB_HOST }}
    db_password: ${{ secrets.DB_PASSWORD }}
    db_port: ${{ secrets.DB_PORT }}
    db_username: ${{ secrets.DB_USERNAME }}
    django_secret_key: ${{ secrets.DJANGO_SECRET_KEY }}
    fcm_server_key: ${{ secrets.FCM_SERVER_KEY }}
  run: |
    docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
    docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
    echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_ENV
In my Dockerfile, I've put the following:
ENV aws_ses_access_key_id=$aws_ses_access_key_id aws_ses_secret_access_key=$aws_ses_secret_access_key DATABASE_ENGINE=$DATABASE_ENGINE db_host=$db_host db_password=$db_password db_port=$db_port db_username=$db_username django_secret_key=$django_secret_key fcm_server_key=$fcm_server_key
None of the variables are coming through. I've tried both $variable_name and ${variable_name} with no luck. What am I doing wrong?
Dollar substitution in the value of an ENV instruction in a Dockerfile does not expand environment variables of the host on which docker build is run. Instead, it is replaced with the value of a Docker ARG, which you declare in the Dockerfile and pass via the --build-arg ARG_NAME=ARG_VALUE command-line option of docker build; $ARG_NAME in the ENV instruction is then replaced with ARG_VALUE.
See: https://docs.docker.com/engine/reference/commandline/build/#set-build-time-variables---build-arg
You can use ENV instructions in a Dockerfile to define variable values. These values persist in the built image. However, often persistence is not what you want. Users want to specify variables differently depending on which host they build an image on.
A good example is http_proxy or source versions for pulling intermediate files. The ARG instruction lets Dockerfile authors define values that users can set at build-time using the --build-arg flag.
This flag allows you to pass the build-time variables that are accessed like regular environment variables in the RUN instruction of the Dockerfile. Also, these values don’t persist in the intermediate or final images like ENV values do. You must add --build-arg for each build argument.
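To illustrate (a minimal sketch using two of the variables from the question; the other variables follow the same pattern, and the python:3.10-slim base image is an assumption), the Dockerfile declares an ARG for each value and copies it into an ENV:
# Dockerfile (sketch)
FROM python:3.10-slim

# Build-time arguments supplied with --build-arg ...
ARG DATABASE_ENGINE
ARG django_secret_key

# ... persisted as environment variables in the image.
ENV DATABASE_ENGINE=$DATABASE_ENGINE \
    django_secret_key=$django_secret_key
and the workflow's build step passes each one explicitly:
run: |
  docker build \
    --build-arg DATABASE_ENGINE="$DATABASE_ENGINE" \
    --build-arg django_secret_key="$django_secret_key" \
    -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
  docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
Keep in mind that values baked in with ENV remain readable in the final image, so for real secrets it is usually preferable to inject them at run time instead.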

github pages issue when using github actions and github-pages-deploy-action?

I have a simple GitHub repo where I host the content of my CV. I use hackmyresume to generate the index.html. I'm using GitHub Actions to run the npm build, and it should publish the generated content to the gh-pages branch.
My workflow file has
on:
  push:
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Deploy with github-pages
        uses: JamesIves/github-pages-deploy-action@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          BASE_BRANCH: master # The branch the action should deploy from.
          BRANCH: gh-pages # The branch the action should deploy to.
          FOLDER: target # The folder the action should deploy.
          BUILD_SCRIPT: npm install && npm run-script build
And the build command is
"build": "hackmyresume BUILD ./src/main/resources/json/fresh/resume.json target/index.html -t compact",
I can see the generated HTML file getting committed to the gh-pages branch:
https://github.com/emeraldjava/emeraldjava/blob/gh-pages/index.html
but GitHub Pages doesn't pick it up; I get a 404 error when I hit
https://emeraldjava.github.io/emeraldjava/
I believe my repo settings and secrets are correct, but I must be missing something small. Any help would be appreciated.
This is happening because of your use of the GITHUB_TOKEN variable. There's an open issue with GitHub because the built-in token doesn't trigger the GitHub Pages deploy job. This means you'll see the files get committed correctly, but they won't be visible.
To get around this you can use a GitHub access token. You can learn how to generate one here. It needs to be scoped so that it has permission to push to a public repository. Store this token in your repository's Settings > Secrets menu (call it something like ACCESS_TOKEN), and then reference it in your configuration like so:
on:
  push:
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Deploy with github-pages
        uses: JamesIves/github-pages-deploy-action@master
        env:
          ACCESS_TOKEN: ${{ secrets.ACCESS_TOKEN }}
          BASE_BRANCH: master # The branch the action should deploy from.
          BRANCH: gh-pages # The branch the action should deploy to.
          FOLDER: target # The folder the action should deploy.
          BUILD_SCRIPT: npm install && npm run-script build
You can find an outline of these variables here. Using an access token will allow the GitHub Pages job to trigger when a new deployment is made. I hope that helps!