Set cloud build default substitution variables through a shell script - google-cloud-platform

I have a shell script that I use to create my resources on Google Cloud Platform.
It looks something like this:
REGION=us-east1
# Create buckets
FILES_SOURCE=${DEVSHELL_PROJECT_ID}-source-$(date +%s)
gsutil mb -c regional -l ${REGION} gs://${FILES_SOURCE}
FUNCTIONS_BUCKET=${DEVSHELL_PROJECT_ID}-functions-$(date +%s)
gsutil mb -c regional -l ${REGION} gs://${FUNCTIONS_BUCKET}
I also have a Cloud Build enabled for my project with a trigger defined inside of it. Some of the values for my substitution variables should be equal to FILES_SOURCE and FUNCTIONS_BUCKET from the script above. If I have my Cloud Build enabled prior to the execution of my shell script, is it possible to somehow assign those values (and their keys) from the shell script?
I can see that we have the gcloud builds interface, but it doesn't seem to have such options.

You must be referring to user-defined substitution variables, because default substitutions are defined for you automatically by Cloud Build. With regards to the gcloud builds interface, you can set the --substitutions flag to specify your user-defined variables, but looking at your example, those values aren't fixed.
Unfortunately, you won't be able to specify user-defined substitution variables whose values come from a shell script. However, there's a workaround: persist your shell variables across the build steps by saving the values to a file, then reading them back as you require.
You've not specified how you intend to use the variables, but here's an example:
build.sh
REGION=us-east1
DEVSHELL_PROJECT_ID=sample-proj
FUNCTIONS_BUCKET=${DEVSHELL_PROJECT_ID}-functions-$(date +%s)
FILES_SOURCE=${DEVSHELL_PROJECT_ID}-source-$(date +%s)
# Store variables on a file
echo $FUNCTIONS_BUCKET > /workspace/functions-bucket &&
echo $FILES_SOURCE > /workspace/files-source
echo "Saved values."
cloudbuild.yaml
steps:
- id: "Read script and store values"
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['./build.sh']
- id: "Read Values"
  name: gcr.io/cloud-builders/gcloud
  entrypoint: 'bash'
  args:
  - -c
  - |
    # Read from "/workspace"
    echo "First we saved " $(cat /workspace/functions-bucket) &&
    echo "Then we saved " $(cat /workspace/files-source)
Note: We used /workspace because Cloud Build uses it as a working directory by default.
Reference: https://medium.com/google-cloud/how-to-pass-data-between-cloud-build-steps-de5c9ebc4cdd

You can't override the substitution variables during the Cloud Build process, so you have two solutions:
Either you work with Linux variables, and Donnald's answer is the right solution (you have to read the value from the file in each step and then use it),
Or you can call a Cloud Build from within a Cloud Build, like this:
Create the Cloud Build file for your core build, with substitution variables
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - -c
  - |
    echo $_FUNCTIONS_BUCKET
    echo $_FILES_SOURCE
    ...
substitutions:
  _FUNCTIONS_BUCKET:
  _FILES_SOURCE:
Then, create the initialization file, cloudbuild-init.yaml:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - -c
  - |
    REGION=us-east1
    DEVSHELL_PROJECT_ID=sample-proj
    FUNCTIONS_BUCKET=$${DEVSHELL_PROJECT_ID}-functions-$$(date +%s)
    FILES_SOURCE=$${DEVSHELL_PROJECT_ID}-source-$$(date +%s)
    gcloud builds submit --async --substitutions=_FUNCTIONS_BUCKET=$${FUNCTIONS_BUCKET},_FILES_SOURCE=$${FILES_SOURCE}
Note the --async flag: the init build does not wait for the end of the underlying Cloud Build, otherwise you would pay for the build time twice. On the other hand, the trigger won't tell you whether the underlying job worked or not.
It's a matter of tradeoffs here.
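If you do need to check on the async build afterwards, a sketch of one option (BUILD_ID is a placeholder) is to query the builds from outside the trigger:
# List recent builds, then inspect the status of the one you care about:
gcloud builds list --limit=5
gcloud builds describe BUILD_ID --format='value(status)'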

Related

What does "gcloud builds submit ... " do?

I would like to know what gcloud builds submit does. In my case I am running the Cloud Run tutorial.
The official documentation states that it submits a build. This is not a particularly helpful piece of information.
Can someone provide some more context to this?
What is a build? An image? A JAR file? Where is this 'build' being submitted to?
What does 'submitting' mean? Does this 'submit' process push my 'build' over the network?
When I run gcloud builds submit it also seems to be creating a Docker image. So is it creating the build and then submitting it?!
There are several steps that happen when you run the gcloud builds submit command:
Compresses your application code, Dockerfile, and any other assets in the current directory as indicated by .;
Uploads the files to a Cloud Storage bucket (there's a default bucket but you're free to specify a bucket on your build config);
Initiates a build using the uploaded files as input;
Tags the image using the provided name; and
Pushes the built image to Container Registry.
In your case, a build is a Docker container image that is pushed/submitted to Container Registry. Once it's submitted, you'll be able to deploy that container on Cloud Run just as specified in the docs you've provided.
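For instance, all of the above happens with a single command (the image name here is illustrative, not from the question):
# Compress, upload, build, tag, and push in one go:
gcloud builds submit . --tag gcr.io/$PROJECT_ID/my-app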
Cloud Build is a service that applies one or more container images, in series, to some initial set of input files, often generating artifacts. The artifact is often (not always) another container image, built from source code that was initially submitted to the service.
Cloud Build is somewhat analogous to e.g. Linux pipelines where some input is transformed by piping data through a series of commands: f | g | h | .... Alternatively you may think of it as composed functions: h(g(f(x))).
Cloud Build is described (and named) as a service to build (code into containers) but, as you know, actually the steps may be any container image and often these have side-effects such as deploying container images to other services e.g. Cloud Run.
Cloud Build is much more general-purpose than Google advertises it. Google limits its scope in its documentation to a cloud-based service to build software.
When you run gcloud builds submit... you provide some source code and either a Dockerfile or a configuration file. The former is a special case of the latter: a configuration file containing a single step that runs docker build....
Configuration files (YAML) list a series of container images with parameters that are run in series. Initially, Cloud Build copies a designated source (which can be the current directory) to a Compute Engine VM (created by the service) as a directory called /workspace that is automatically mounted into each container.
Containers (defined as steps in the configuration file) may operate on this file system (e.g. compile code, validate files, anything that you can do in a container). Often, as a final step, config files push the container images that were created to e.g. Container Registry.
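As a minimal sketch, the config-file equivalent of the simple Dockerfile case might look like this (the image name is illustrative):
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']
images: ['gcr.io/$PROJECT_ID/my-app']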
Solving Quadratic equations with Cloud Build
Cloud Build can be confusing to newcomers. In a spirit of fun and as a way to show that Cloud Build is quite general-purpose, here's a Rube Goldberg machine written in Cloud Build that solves quadratic equations:
For the following cloudbuild.yaml:
steps:
- name: busybox
  args:
  - ash
  - -c
  - 'echo "Quadratic: $(cat a)x²+$(cat b)x+$(cat c)=0"'
- name: busybox
  args:
  - ash
  - -c
  - 'echo "$(cat b) * $(cat b)" | bc -l > b2'
- name: busybox
  args:
  - ash
  - -c
  - 'echo "4 * $(cat a) * $(cat c)" | bc -l > 4ac'
- name: busybox
  args:
  - ash
  - -c
  - 'echo "$(cat b2) - $(cat 4ac)" | bc -l > b2-4ac'
- name: busybox
  args:
  - ash
  - -c
  - 'echo "sqrt($(cat b2-4ac))" | bc -l > sqrt'
- name: busybox
  args:
  - ash
  - -c
  - 'echo "-($(cat b)) + $(cat sqrt)" | bc -l > add'
- name: busybox
  args:
  - ash
  - -c
  - 'echo "-($(cat b)) - $(cat sqrt)" | bc -l > sub'
- name: busybox
  args:
  - ash
  - -c
  - 'echo "2 * $(cat a)" | bc -l > 2a'
- name: busybox
  args:
  - ash
  - -c
  - 'echo "$(cat add)/$(cat 2a)" | bc -l > root1'
- name: busybox
  args:
  - ash
  - -c
  - 'echo "$(cat sub)/$(cat 2a)" | bc -l > root2'
- name: busybox
  args:
  - ash
  - -c
  - 'echo "Roots are: $(cat root1); $(cat root2)"'
It expects 3 files (a, b, c) in ${PWD} containing the values of ax²+bx+c=0. So, for 8x²-10x+3:
echo "8" > a
echo "-10" > b
echo "3" > c
You can run it with:
gcloud builds submit ${PWD} \
  --config=./cloudbuild.yaml \
  --project=${PROJECT}
Explanation: Rube Goldberg Cloud Build machine for solving quadratic equations
A build is the process of creating artifacts from a source, and optionally modifying the state of any system you have access to.
An artifact can be a text file, a Docker container image or a Java archive.
To submit a build is, on the one hand, to send your build resources (source files) to Cloud Storage. On the other hand, it is also to create a worker, which is a Google-managed GCE instance from a pool of instances dedicated to builds; these instances scale horizontally on demand and are destroyed when their assigned build is finished.
The worker reads the source files from Cloud Storage, and executes each build step in the configuration file, creating a Docker container for each step.
Each container executes the script in each respective step.
The concatenation of container script executions is the build, properly speaking, and it produces artifacts.
There is always at least one artifact, a text file with the build log, which is pushed to Cloud Storage, and there can be container images or Java archives also produced as artifacts, which are pushed to the Container Registry.
I wouldn't say there is a difference between "creating" and "submitting" a build but, if there is one, it might be this: "creating" a build covers everything from preparation to end (preparing the source files, having the environment ready: project, permissions, quota, etc.), while "submitting" it is just issuing the submit command, or having a trigger submit it for you.

How can I call gcloud commands from a shell script during a build step?

I have automatic builds set up in Google Cloud, so that each time I push to the master branch of my repository, a new image is built and pushed to Google Container Registry.
These images pile up quickly, and I don't need all the old ones. So I would like to add a build step that runs a bash script which calls gcloud container images list-tags, loops the results, and deletes the old ones with gcloud container images delete.
I have the script written and it works locally. I am having trouble figuring out how to run it as a step in Cloud Builder.
It seems there are 2 options:
- name: 'ubuntu'
  args: ['bash', './container-registry-cleanup.sh']
In the above step in cloudbuild.yml I try to run the bash command in the ubuntu image. This doesn't work because the gcloud command does not exist in this image.
- name: 'gcr.io/cloud-builders/gcloud'
  args: [what goes here???]
In the above step in cloudbuild.yml I try to use the gcloud image, but since "Arguments passed to this builder will be passed to gcloud directly", I don't know how to call my bash script here.
What can I do?
You can customize the entrypoint of your build step. If you need gcloud installed, use the gcloud cloud builder and do this:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: "bash"
  args:
  - "-c"
  - |
    echo "enter 1 bash command per line"
    ls -la
    gcloud version
    ...
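For the cleanup use case in the question, a rough sketch of container-registry-cleanup.sh might look like this (IMAGE and KEEP are illustrative assumptions, not from the question):
#!/bin/bash
# Hypothetical cleanup: keep the newest $KEEP digests of $IMAGE, delete the rest.
IMAGE="gcr.io/${PROJECT_ID}/my-app"
KEEP=10
gcloud container images list-tags "${IMAGE}" \
  --sort-by=~TIMESTAMP --format='get(digest)' |
tail -n +$((KEEP + 1)) |
while read -r DIGEST; do
  gcloud container images delete "${IMAGE}@${DIGEST}" --quiet --force-delete-tags
done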
As the official documentation on Creating custom build steps indicates, to execute a shell script from your source you need a custom build step, and the step's container image must contain a tool capable of running the script.
The example below shows how to configure your args for the execution to perform correctly.
steps:
- name: 'ubuntu'
  args: ['bash', './myscript.bash']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/custom-script-test', '.']
images: ['gcr.io/$PROJECT_ID/custom-script-test']
I would recommend you take a look at the documentation and the example above, to test and confirm whether they help you achieve the execution of the script.
For your case specifically, there is this other answer here, which indicates that you will need to override the entrypoint of the build step to bash so the script runs. It looks as follows:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: /bin/bash
  args: ['-c', 'gcloud compute instances list > gce-list.txt']
Besides that, these two articles below include more information and examples on how to configure custom scripts to run in your Cloud Build, which I would recommend you take a look at:
CI/CD: Google Cloud Build — Custom Scripts
Mastering Google Cloud Build Config Syntax
Let me know if the information helped you!

How to extract actual timestamp in Cloud Build CI/CD pipeline yaml script or Cloud Build Triggers page

I have a cloud_build.yaml script for my CI/CD pipeline on GCP using Cloud Build. On the command line I can pass a substitution variable which includes the actual timestamp: "notebook-instance-$(date +%Y-%m-%d-%H-%M)-v05". This works fine.
When I add a GitHub trigger on the Cloud Build webpage, I can't find a way to get the timestamp extracted in the same way that I was using on the CLI with $(date +%Y-%m-%d-%H-%M)-v05:
Any idea how to do that on the Cloud Build Triggers page?
I also tried to do it inside the cloud_build.yaml script, but without success so far.
- name: 'gcr.io/cloud-builders/gcloud'
  id: Deploy the AI Platform Notebook instance
  args:
  - 'deployment-manager'
  - 'deployments'
  - 'create'
  - '$(date -u +%Y-%m-%d-%H-%M)-${_NAME_INSTANCE}'
Any idea how to extract and create a variable using the actual timestamp in the .yaml Cloud Build script?
A third option would be to extract the timestamp in my .jinja deployment script. Here I hit the same issue: I can't find a way to extract the actual timestamp to build my variable name.
One solution is to do the following:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: sh
  args:
  - '-c'
  - |
    gcloud \
      deployment-manager \
      deployments \
      create \
      xxxx
The issue is that you cannot use it in another step later. Another option is to write the variable to a file on the workspace; this can be accessed later during the build.
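A sketch of that file-based approach (the step ids are illustrative; _NAME_INSTANCE is the substitution from the question):
steps:
- id: 'capture-timestamp'
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: sh
  args:
  - '-c'
  - |
    # Persist the timestamp so later steps can reuse it
    date -u +%Y-%m-%d-%H-%M > /workspace/build-ts
- id: 'deploy'
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: sh
  args:
  - '-c'
  - |
    gcloud deployment-manager deployments create "$(cat /workspace/build-ts)-${_NAME_INSTANCE}"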

Execute a BigQuery query in Cloud Build step

I'm using Cloud Build with the gcloud builder. I override the entrypoint to be bq so I can run some BigQuery SQL in my build step. Previously, I had the SQL embedded directly in the YAML config for Cloud Build. This works fine:
steps:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: 'bq'
  args: ['query', '--use_legacy_sql=false', 'SELECT 1']
Now I'd like to refactor the SQL out of the YAML and into a file instead. According to here, you can cat the file or pipe it to bq. This works on the command line without any problems.
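That is, locally something like this works (a sketch of the documented usage):
# Pipe the file to bq, or expand it with command substitution:
bq query --use_legacy_sql=false < my_query.sql
bq query --use_legacy_sql=false "$(cat my_query.sql)"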
But I can't get it to work with Cloud Build. I've tried lots of different combinations, escaping chars, etc., but no matter what I try, the shell doesn't evaluate/execute the `cat my_query.sql` backticks, and instead thinks that it's the query itself.
In Cloud Build it won't work:
steps:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: 'bq'
  args: ['query', '--use_legacy_sql=false', '`cat my_query.sql`']
I also tried piping it instead of using cat, but I get the same error.
I must be missing something obvious here, but I can't see it. I could build a custom docker image, and wrap everything in a shell script, but I'd rather not have to do that if possible.
How do you use Cloud Build with shell evaluation inside a build step?
You can create a custom Bash script, e.g.:
#!/bin/bash
# Exit early if no SQL file was supplied
if [ $# -eq 0 ]; then
  echo "No arguments supplied"
  exit 1
fi
bq query --use_legacy_sql=false < "$1"
Name this run_query.sh, then define your steps as:
steps:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: 'bash'
  args: ['run_query.sh', 'my_query.sql']
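As a quick local sanity check before wiring it into Cloud Build (assuming bq is installed and authenticated):
bash run_query.sh my_query.sql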
Disclaimer: this is based on reading the docs, but I haven't actually used Cloud Build.
I have done this:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  dir: 'my/directory'
  args: ['-c', 'bq --project_id=my-project-name query --use_legacy_sql=false < ./my_query.sql']
Which works with gcloud builds submit ... and eliminates one file if you prefer.

How can I save google cloud build step text output to file

I'm trying to use google cloud build. At one step, I need to get a list of all running compute instances.
- name: gcr.io/cloud-builders/gcloud
  args: ['compute', 'instances', 'list']
and it works fine. The problem starts when I try to save the output to a file.
Trial 1: failed
- name: gcr.io/cloud-builders/gcloud
  args: ['compute', 'instances', 'list', '> gce-list.txt']
Trial 2: failed
- name: gcr.io/cloud-builders/gcloud
  args: ['compute', 'instances', 'list', '>', 'gce-list.txt']
Trial 3: failed
- name: gcr.io/cloud-builders/gcloud
  args: >
    compute instances list > gce-list.txt
Trial 4: failed
- name: gcr.io/cloud-builders/gcloud
  args: |
    compute instances list > gce-list.txt
UPDATE: 2018-09-04 17:50
Trial 5: failed
Built a gcloud image based on Ubuntu
Used that image to run a custom script file 'list-gce.sh'
list-gce.sh calls gcloud compute instances list
For more details you can check this gist:
https://gist.github.com/mahmoud-samy/e67f141e8b5d553de68a58a30a432ed2
Unfortunately I got these strange errors:
rev 1
ERROR: (gcloud) unrecognized arguments: list (did you mean 'list'?)
rev 2
ERROR: (gcloud) unrecognized arguments: --version (did you mean '--version'?)
Any suggestions, or references?
In addition to other answers, to do cmd > foo.txt, you need to override the build entrypoint to bash (or sh):
- name: gcr.io/cloud-builders/gcloud
  entrypoint: /bin/bash
  args: ['-c', 'gcloud compute instances list > gce-list.txt']
Those commands are not executed in a shell, so shell operations such as pipes (|) and redirections (>) are not available.
Workaround
Use a gcloud container which does have a shell. The gcr.io/cloud-builders/gcloud container should have bash, as it is ultimately derived from an Ubuntu 16.04 image.
In your Cloud Build task sequence, execute a shell script which performs the gcloud calls for you and redirects the output to a file. Some observations:
You'll need to store the shell script somewhere sensible; probably in your source repository so it becomes available to the build.
The gcloud container can still be used, as this will ensure the Google Cloud SDK tools are available to your script. You will need to override the entrypoint in the Cloud Build manifest to be /bin/bash, or some other shell, and pass the path to your script as an argument.
As DazWilkin identifies in a comment, the Cloud Build service account will also require the compute.instances.list permission to list instances.
The /workspace directory is mounted into all Cloud Build containers and its contents are persisted between and accessible from subsequent build steps. If the output of the gcloud command, or a post-processed version, is required by subsequent build steps, you can write it out here.
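Putting that together, a minimal sketch of the pattern (the file name is illustrative):
steps:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: /bin/bash
  args: ['-c', 'gcloud compute instances list > /workspace/gce-list.txt']
- name: ubuntu
  args: ['cat', '/workspace/gce-list.txt']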
Relevant Google documentation.