Google Cloud build conditional step - google-cloud-platform

I have the following cloudbuild.yaml file:
substitutions:
  _CLOUDSDK_COMPUTE_ZONE: us-central1-a
  _CLOUDSDK_CONTAINER_CLUSTER: $_CLOUDSDK_CONTAINER_CLUSTER
steps:
- name: gcr.io/$PROJECT_ID/sonar-scanner:latest
  args:
  - '-Dsonar.host.url=https://sonar.test.io'
  - '-Dsonar.login=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
  - '-Dsonar.projectKey=test-service'
  - '-Dsonar.sources=.'
- id: 'build test-service image'
  name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA', '.']
- id: 'push test-service image'
  name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA']
- id: 'set test-service image in yamls'
  name: 'ubuntu'
  args: ['bash','-c','sed -i "s,TEST_SERVICE,gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA," k8s/*.yaml']
- id: kubectl-apply
  name: 'gcr.io/cloud-builders/kubectl'
  args: ['apply', '-f', 'k8s/']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_CLOUDSDK_COMPUTE_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLOUDSDK_CONTAINER_CLUSTER}'
images: ['gcr.io/$PROJECT_ID/$REPO_NAME/$BRANCH_NAME:$SHORT_SHA']
I would like to make the sonar-scanner step conditional (if we are on the production branch, I want to skip the sonar step; other branches should run that step). I would also like to use the same cloudbuild.yaml across all branches.
Is it possible to do this?

You have 2 solutions:
1. Make 2 triggers, each with its own configuration: 1 on Prod, 1 on UAT/DEV.
2. Script your execution. It's dirty, but you keep only 1 CI/CD config file:
steps:
- name: gcr.io/$PROJECT_ID/sonar-scanner:latest
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    if [ "$BRANCH_NAME" != "prod" ]; then
      sonar-scanner -Dsonar.host.url=https://sonar.test.io -Dsonar.login=XXXX -Dsonar.projectKey=test-service -Dsonar.sources=.
    fi
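That branch gate can be exercised outside Cloud Build with plain shell; a minimal sketch in which an echo stands in for the sonar-scanner invocation, and BRANCH_NAME (normally substituted by the trigger) is passed by hand:

```shell
# Simulate the branch gate from the step above. In Cloud Build, BRANCH_NAME
# is injected by the trigger; here it is supplied as an argument to test
# both paths. An echo stands in for the real sonar-scanner call.
run_step() {
  BRANCH_NAME="$1"
  if [ "$BRANCH_NAME" != "prod" ]; then
    echo "running sonar-scanner on $BRANCH_NAME"
  else
    echo "skipping sonar-scanner on prod"
  fi
}

run_step prod      # -> skipping sonar-scanner on prod
run_step develop   # -> running sonar-scanner on develop
```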

It is not (yet) possible to create conditional steps in Cloud Build, as is possible with gitlab-ci for example. What we did is create multiple projects within GCP. You could create a project for development, staging and production. They are all sourced from the same git repository to keep environments identical to each other. This means they have the same cloudbuild.yaml file.
If you would somehow need to run a particular script only in the development environment, for example an end-to-end test, you would specify a condition on $BRANCH_NAME or $PROJECT_ID within the build step itself. However, adding too many of these conditionals will harm maintainability, and your environments won't be an exact mirror of each other. Nevertheless, here is a simple example:
---
timeout: 300s
steps:
  # Branch name conditional
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: bash
    args:
      - -c
      - |
        if [[ "$BRANCH_NAME" == "develop" ]]
        then
          echo "Development stuff only"
        elif [[ "$BRANCH_NAME" == "release" ]]
        then
          echo "Acceptance stuff only"
        elif [[ "$BRANCH_NAME" == "main" ]]
        then
          echo "Production stuff only"
        fi
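The same dispatch can be sanity-checked locally; in this sketch it is restructured into a function using a case statement (the function name and the default branch are my additions, not part of the original step):

```shell
# Mirror the if/elif chain from the step above as a reusable function.
# A case statement is equivalent here and slightly more compact.
env_for_branch() {
  case "$1" in
    develop) echo "Development stuff only" ;;
    release) echo "Acceptance stuff only" ;;
    main)    echo "Production stuff only" ;;
    *)       echo "No environment-specific stuff" ;;  # added default
  esac
}

env_for_branch develop   # -> Development stuff only
env_for_branch main      # -> Production stuff only
```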
Besides building different projects per environment, I would also recommend building a project per domain or application. This means you have a logical separation between the data stored in the projects. You can then group all the development projects under a folder called development etc. Those folders are part of an organization or even another folder.
This logical grouping is one of the real benefits of using GCP, I find it very convenient. Azure has a somewhat similar structure with resource groups and subscriptions. AWS also has a resource group structure.

Related

Access Environment Variables through vite-react and GCP Cloud Build

I have a React application that is Dockerized and hosted on Google Cloud Build. I have set environment variables on the Cloud Build, but I am unable to access them within my React application. What am I doing wrong and how can I access these environment variables in my React application?
This is the yaml file:
steps:
  - name: gcr.io/cloud-builders/docker
    env:
      - "VITE_PUBLIC_KEY={$_VITE_PUBLIC_KEY}"
      - "VITE_SERVICE_ID={$_VITE_SERVICE_ID}"
      - "VITE_TEMPLATE_ID={$_VITE_TEMPLATE_ID}"
    args:
      - build
      - '--no-cache'
      - '-t'
      - 'image_name'
      - .
      - '-f'
      - Dockerfile.prod
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - 'image_name'
  - name: gcr.io/cloud-builders/gcloud
    args:
      - run
      - deploy
      - bob
      - '--image'
      - 'image_name'
      - '--region'
      - $_DEPLOY_REGION
      - '--allow-unauthenticated'
      - '--platform'
      - $_PLATFORM
timeout: 600s
I don't have a backend solution; I just want to be able to access 3 environment variables within my application on the client side, without declaring a .env file.
I tried declaring the environment variables in Cloud Run as well as in the cloudbuild.yaml file. It works on AWS, but a different problem arises there.
One solution could be to hardcode the environment variables directly into your React code. This is not recommended, as it could lead to security vulnerabilities and make it difficult to change the values in the future without re-deploying the entire application. Additionally, it would not truly solve the problem of accessing environment variables, as they would not be dynamically set at runtime.
There are different options; the best one (in my opinion) is to store your env variables in Secret Manager and then access them through your code (check this GitHub Repo).
. . .
The other option is to access those same secrets while your pipeline is running; the downside is that you always have to redeploy your pipeline to update the env variables.
This is an example:
steps:
  # STEP 1 - BUILD CONTAINER 1
  - id: Build-container-image-container-one
    name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        docker build -t gcr.io/$PROJECT_ID/container_one -f 'build/container_one.Dockerfile' .
  # STEP 2 - PUSH CONTAINER 1
  - id: Push-to-Container-Registry-container-one
    name: 'gcr.io/cloud-builders/docker'
    args:
      - push
      - gcr.io/$PROJECT_ID/container_one
    waitFor: ["Build-container-image-container-one"]
  # STEP 3 - DEPLOY CONTAINER 1
  - id: Deploy-Cloud-Run-container-one
    name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args:
      - run
      - deploy
      - container_one
      - --image=gcr.io/$PROJECT_ID/container_one
      - --region={YOUR REGION}
      - --port={YOUR PORT}
      - --memory=3Gi
      - --cpu=1
    waitFor: ["Build-container-image-container-one", "Push-to-Container-Registry-container-one"]
  # STEP 4 - ENV VARIABLES
  - id: Accessing-secrets-for-env-variables
    name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        gcloud secrets versions access latest --secret=ENV_VARIABLE_ONE > key1.txt
        gcloud secrets versions access latest --secret=ENV_VARIABLE_TWO > key2.txt
        gcloud secrets versions access latest --secret=ENV_VARIABLE_THREE > key3.txt
    waitFor: ["Push-to-Container-Registry-container-one", "Build-container-image-container-one"]
  # STEP 5 - SETTING KEYS
  - id: Setting-keys
    name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: 'bash'
    args: ['-c', 'gcloud run services update container_one --region={YOUR REGION} --set-env-vars="ENV_VARIABLE_ONE=$(cat key1.txt),ENV_VARIABLE_TWO=$(cat key2.txt),ENV_VARIABLE_THREE=$(cat key3.txt)"']
images:
  - gcr.io/$PROJECT_ID/container_one
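One detail worth noting in the last step: gcloud's --set-env-vars flag expects comma-separated KEY=VALUE pairs with no stray spaces around the commas. A sketch of assembling that string from the secret files fetched in STEP 4, with dummy values standing in for the real secrets:

```shell
# Dummy stand-ins for the files that `gcloud secrets versions access`
# writes in STEP 4 (printf avoids trailing newlines in the values).
printf 'one-value'   > key1.txt
printf 'two-value'   > key2.txt
printf 'three-value' > key3.txt

# Comma-separated KEY=VALUE list, no spaces around the commas.
vars="ENV_VARIABLE_ONE=$(cat key1.txt),ENV_VARIABLE_TWO=$(cat key2.txt),ENV_VARIABLE_THREE=$(cat key3.txt)"
echo "$vars"   # -> ENV_VARIABLE_ONE=one-value,ENV_VARIABLE_TWO=two-value,ENV_VARIABLE_THREE=three-value
```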

Filter build steps by branch - Google Cloud Platform

I have the below steps
steps:
  # This step shows the version of Gradle
  - id: Gradle Install
    name: gradle:7.4.2-jdk17-alpine
    entrypoint: gradle
    args: ["--version"]
  # This step builds the gradle application
  - id: Build
    name: gradle:7.4.2-jdk17-alpine
    entrypoint: gradle
    args: ["build"]
  # This step publishes the artifacts
  - id: Publish
    name: gradle:7.4.2-jdk17-alpine
    entrypoint: gradle
    args: ["publish"]
I want to run the last step only on the MASTER branch.
I found one link related to this: https://github.com/GoogleCloudPlatform/cloud-builders/issues/138
It's using a bash command; how can I put the gradle command inside the bash?
Update
After the suggested answer, I updated the step as follows:
- id: Publish
  name: gradle:7.4.2-jdk17-alpine
  entrypoint: "bash"
  args:
    - "-c"
    - |
      [[ "$BRANCH_NAME" == "develop" ]] && gradle publish
The build pipeline failed with the below exception:
Starting Step #2 - "Publish"
Step #2 - "Publish": Already have image: gradle:7.4.2-jdk17-alpine
Finished Step #2 - "Publish"
ERROR
ERROR: build step 2 "gradle:7.4.2-jdk17-alpine" failed: starting step container failed: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "bash": executable file not found in $PATH: unknown
The suggested solution didn't work for me; I had to write the code below:
# This step runs the publish
- id: Publish
  name: gradle:7.4.2-jdk17-alpine
  entrypoint: "sh"
  args:
    - -c
    - |
      if [ "$BRANCH_NAME" = "master" ]
      then
        echo "Branch is = $BRANCH_NAME"
        gradle publish
      fi
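The error above comes from the alpine Gradle image shipping only a POSIX sh (busybox), not bash, so the working form has to avoid bashisms: single brackets, a quoted variable, and `=` rather than `==`. A sketch of the same check as a function, with BRANCH_NAME simulated by an argument:

```shell
# POSIX-portable branch check: [ ] instead of [[ ]], quoted variable,
# '=' instead of '=='. BRANCH_NAME is normally injected by Cloud Build;
# here it is passed in to simulate both branches. An echo stands in
# for `gradle publish`.
check_branch() {
  BRANCH_NAME="$1"
  if [ "$BRANCH_NAME" = "master" ]; then
    echo "Branch is = $BRANCH_NAME"
  fi
}

check_branch master    # -> Branch is = master
check_branch develop   # prints nothing
```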
Current workarounds are as follows:
1. Using different cloudbuild.yaml files for each branch.
2. Overriding the entrypoint and injecting bash, as mentioned in the link:
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    echo "Here's a convenient pattern to use for embedding shell scripts in cloudbuild.yaml."
    echo "This step only pushes an image if this build was triggered by a push to master."
    [[ "$BRANCH_NAME" == "master" ]] && docker push gcr.io/$PROJECT_ID/image
This tutorial outlines an alternative where you check in different cloudbuild.yaml files on different development branches.
You can try the following for the Gradle command, as mentioned by bhito (with POSIX test syntax, since sh on the alpine image is not bash):
- id: Publish
  name: gradle:7.4.2-jdk17-alpine
  entrypoint: sh
  args:
    - -c
    - |
      [ "$BRANCH_NAME" = "master" ] && gradle publish
Cloud Build lets you configure triggers by branch, tag, and PR. This lets you define different build configs to use for different repo events, e.g. one for PRs, another for deploying to prod, etc. You can refer to the documentation on how to create and manage triggers.
You can check this blog for updates on additional features, and go through the release notes for more Cloud Build updates.
To gain some more insights on Gradle, you can refer to the link.

GCP CloudBuild Logs

How can I add a custom message to Cloud Build logs?
I've tried using the bash entrypoint with the Docker builder (for example) and echoing some strings, but they don't appear in the build logs. Is there a way to achieve this?
Make sure that the builder image you're using has bash in it. I tested this code with the gcloud builder replaced by docker, and it works fine. Here's an example:
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
  - '-eEuo'
  - 'pipefail'
  - '-c'
  - |-
    if (( $(date '+%-e') % 2 )); then
      echo "today is an odd day"
    else
      echo "today is an odd day, with an even number"
    fi
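The echoed message above is driven by `date '+%-e'`, a GNU date format that yields the unpadded day of the month; the arithmetic test then branches on its parity. Traced in isolation:

```shell
# Same parity logic as the step above, outside Cloud Build.
day=$(date '+%-e')   # day of month without padding, e.g. 7 or 24 (GNU date)
if (( day % 2 )); then
  msg="today is an odd day"
else
  msg="today is an odd day, with an even number"
fi
echo "$msg"
```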
And here are the logs (screenshot not included).

What represents a STEP in Cloud Build

Actually I am working on a pipeline; all good so far, because it has worked. But when I want to explain it, it is not clear to me what each step represents physically. For example, could a step be a node within a cluster? Please, if someone has a clear explanation of it, share it with us.
Example 1: a single step
Cloud Build config file:
steps:
  - name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
    args: ["bin/deploy-dags-composer.sh"]
    env:
      - 'COMPOSER_BUCKET=${_COMPOSER_BUCKET}'
      - 'ARTIFACTS_BUCKET=${_ARTIFACTS_BUCKET}'
    id: 'DEPLOY-DAGS-PLUGINS-DEPENDS-STEP'
Bash file:
#!/bin/bash
gsutil cp bin/plugins/* gs://$COMPOSER_BUCKET/plugins/
gsutil cp bin/dependencies/* gs://$ARTIFACTS_BUCKET/dags/dependencies/
gsutil cp bin/dags/* gs://$COMPOSER_BUCKET/dags/
gsutil cp bin/sql-scripts/* gs://$ARTIFACTS_BUCKET/path/bin/sql-scripts/composer/
Example 2: several steps
Cloud Build config file:
steps:
  - name: gcr.io/cloud-builders/gsutil
    args: ['cp', '*', 'gs://${_COMPOSER_BUCKET}/plugins/']
    dir: 'bin/plugins/'
    id: 'deploy-plugins-step'
  - name: gcr.io/cloud-builders/gsutil
    args: ['cp', '*', 'gs://${_ARTIFACTS_BUCKET}/dags/dependencies/']
    dir: 'bin/dependencies/'
    id: 'deploy-dependencies-step'
  - name: gcr.io/cloud-builders/gsutil
    args: ['cp', '*', 'gs://${_COMPOSER_BUCKET}/dags/']
    dir: 'bin/dags/'
    id: 'deploy-dags-step'
  - name: gcr.io/cloud-builders/gsutil
    args: ['cp', '*', 'gs://${_ARTIFACTS_BUCKET}/projects/path/bin/sql-scripts/composer/']
    dir: 'bin/sql-scripts/'
    id: 'deploy-scripts-step'
In Cloud Build, a step is a stage of the processing. This stage is described by a container to load, containing the binaries required for the processing performed in that stage.
For this container, you can define an entrypoint, the binary to run inside the container, and args to pass to it.
There are also several other options that you can see here.
An important concept to understand is that ONLY the /workspace directory is kept from one step to the next. At the end of each step, the container is offloaded and you lose all the data in memory (like environment variables) and the data stored outside of the /workspace directory (such as system dependency installations). Keep this in mind; many issues come from there.
EDIT 1:
In a step, you can, out of the box, run 1 command on one container: gsutil, gcloud, mvn, node, .... It all depends on your container and your entrypoint.
But there is a useful advanced capability for when you need to run many commands on the same container, which can happen for many reasons. You can do it like this:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - -c
    - |
      MY_ENV_VAR=$(curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance)
      echo $${MY_ENV_VAR} > env-var-save.file
      # Continue the commands that you need/want ....
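The /workspace rule described above can be simulated locally: in this sketch, subshells stand in for fresh containers and a temp directory stands in for the shared volume, showing why the step saves the value to a file rather than relying on the variable.

```shell
# Emulate two Cloud Build steps: each subshell stands in for a fresh
# container, and a temp dir stands in for the shared /workspace volume.
workspace=$(mktemp -d)

# "Step 1": sets a variable (lost when the container exits) and
# saves its value to a file in the workspace (kept).
( cd "$workspace" && MY_ENV_VAR="hello" && echo "$MY_ENV_VAR" > env-var-save.file )

# "Step 2": the variable is gone, but the file persists.
( cd "$workspace" && cat env-var-save.file )    # -> hello
echo "MY_ENV_VAR here: '${MY_ENV_VAR:-unset}'"  # -> MY_ENV_VAR here: 'unset'
```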

env step parameter in cloudbuild.yaml file not settings environment variable

My cloudbuild.yaml file looks like
steps:
  # build the container image
  - name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t", "gcr.io/$PROJECT_ID/backend:$COMMIT_SHA", "."]
    env:
      - "APP_ENV=production"
  # push the container image to Container Registry
  - name: "gcr.io/cloud-builders/docker"
    args: ["push", "gcr.io/$PROJECT_ID/backend:$COMMIT_SHA"]
  # Deploy container image to Cloud Run
  - name: "gcr.io/cloud-builders/gcloud"
    args:
      - "run"
      - "deploy"
      - "backend"
      - "--image"
      - "gcr.io/$PROJECT_ID/backend:$COMMIT_SHA"
      - "--region"
      - "us-central1"
      - "--platform"
      - "managed"
images:
  - "gcr.io/$PROJECT_ID/backend:$COMMIT_SHA"
and it builds and deploys a new container to Cloud Run, however it doesn't set the APP_ENV environment variable to "production". Why is that and how do I get it to?
I am following this guide.
The
steps:
  - env: [...]
approach sets environment variables only for the Cloud Build container that runs the docker build -t command, so in this case only the docker build it executes gets the APP_ENV variable (and probably doesn't do anything with it).
You should not expect this to set an environment variable for Cloud Run. For that to work, you need to pass --set-env-vars or --update-env-vars to Cloud Run in the gcloud run deploy step by specifying additional args, like:
- name: "gcr.io/cloud-builders/gcloud"
  args:
    - "run"
    - "deploy"
    ...
    - "--set-env-vars=KEY1=VALUE1"
    - "--set-env-vars=KEY2=VALUE2"
    ...
See https://cloud.google.com/run/docs/configuring/environment-variables#command-line to learn more or read this article about alternative ways of specifying environment variables for Cloud Run applications.