I'm using Cloud Build with the gcloud builder. I override the entrypoint to be bq so I can run some BigQuery SQL in my build step. Previously, I had the SQL embedded directly in the YAML config for Cloud Build. This works fine:
steps:
- name: gcr.io/cloud-builders/gcloud
entrypoint: 'bq'
args: ['query', '--use_legacy_sql=false', 'SELECT 1']
Now I'd like to refactor the SQL out of the YAML and into a file instead. According to here, you can cat the file or pipe it to bq. This works on the command line without any problems.
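For example, on the command line either of these forms works (just a sketch, using the same my_query.sql file referenced below):
bq query --use_legacy_sql=false "`cat my_query.sql`"
cat my_query.sql | bq query --use_legacy_sql=false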
But I can't get it to work with Cloud Build. I've tried lots of different combinations, escaping characters, etc., but no matter what I try, the shell never evaluates the backticks around cat my_query.sql and instead treats them as the query text itself.
In Cloud Build this won't work:
steps:
- name: gcr.io/cloud-builders/gcloud
entrypoint: 'bq'
args: ['query', '--use_legacy_sql=false', '`cat my_query.sql`']
I also tried piping it instead of using cat, but I get the same error.
I must be missing something obvious here, but I can't see it. I could build a custom docker image, and wrap everything in a shell script, but I'd rather not have to do that if possible.
How do you use Cloud Build with shell evaluation inside a build step?
You can create a custom Bash script, e.g.:
#!/bin/bash
if [ $# -eq 0 ]; then
  echo "No arguments supplied"
  exit 1
fi
# Run the SQL from the file passed as the first argument
bq query --use_legacy_sql=false < "$1"
Name this run_query.sh, then define your steps as:
steps:
- name: gcr.io/cloud-builders/gcloud
entrypoint: 'bash'
args: ['run_query.sh', 'my_query.sql']
Disclaimer: this is based on reading the docs, but I haven't actually used Cloud Build.
I have done this:
- name: 'gcr.io/cloud-builders/gcloud'
entrypoint: 'bash'
dir: 'my/directory'
args: ['-c', 'bq --project_id=my-project-name query --use_legacy_sql=false < ./my_query.sql']
This works with gcloud builds submit ... and eliminates the separate script file, if you prefer that.
Related
How can I add a custom message to Cloud Build logs?
I've tried using the bash entrypoint with the Docker builder (for example) and echoing some strings, but they don't appear in the build logs. Is there a way to achieve this?
Make sure that the builder image you're using has bash in it. I tested this code with the docker builder in place of the gcloud builder and it works fine. Here's an example:
steps:
- name: 'gcr.io/cloud-builders/docker'
entrypoint: 'bash'
args:
- '-eEuo'
- 'pipefail'
- '-c'
- |-
if (( $(date '+%-e') % 2 )); then
echo "today is an odd day"
else
echo "today is an odd day, with an even number"
fi
The echoed output then shows up in the build logs as expected.
I have a shell script that I use to create my resources on Google Cloud Platform.
It looks something like this:
REGION=us-east1
# Create buckets
FILES_SOURCE=${DEVSHELL_PROJECT_ID}-source-$(date +%s)
gsutil mb -c regional -l ${REGION} gs://${FILES_SOURCE}
FUNCTIONS_BUCKET=${DEVSHELL_PROJECT_ID}-functions-$(date +%s)
gsutil mb -c regional -l ${REGION} gs://${FUNCTIONS_BUCKET}
I also have a Cloud Build enabled for my project with a trigger defined inside of it. Some of the values for my substitution variables should be equal to FILES_SOURCE and FUNCTIONS_BUCKET from the script above. If I have my Cloud Build enabled prior to the execution of my shell script, is it possible to somehow assign those values (and their keys) from the shell script?
I can see that there is the gcloud builds interface, but it doesn't seem to have such options.
You must be referring to user-defined substitution variables, because default substitutions are defined for you automatically by Cloud Build. With regard to the gcloud builds interface, you can set the --substitutions flag to specify your user-defined variables, but looking at your example, it seems those values aren't fixed.
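For a fixed value, that would look something like this (the value here is just a placeholder):
gcloud builds submit --substitutions=_FILES_SOURCE=my-fixed-bucket-name .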
Unfortunately, you won't be able to specify user-defined substitution variables whose values come from a shell script. However, there's a workaround: persist your shell script variables across the build steps by saving the values to a file and then reading them back as required.
You've not specified how you intend to use the variables but here's an example:
build.sh
REGION=us-east1
DEVSHELL_PROJECT_ID=sample-proj
FUNCTIONS_BUCKET=${DEVSHELL_PROJECT_ID}-functions-$(date +%s)
FILES_SOURCE=${DEVSHELL_PROJECT_ID}-source-$(date +%s)
# Store variables on a file
echo $FUNCTIONS_BUCKET > /workspace/functions-bucket &&
echo $FILES_SOURCE > /workspace/files-source
echo "Saved values."
cloudbuild.yaml
steps:
- id: "Read script and store values"
name: 'gcr.io/cloud-builders/gcloud'
entrypoint: 'bash'
args: ['./build.sh']
- id: "Read Values"
name: gcr.io/cloud-builders/gcloud
entrypoint: 'bash'
args:
- -c
- |
# Read from "/workspace"
echo "First we saved " $(cat /workspace/functions-bucket) &&
echo "Then we saved " $(cat /workspace/files-source)
Note: We used /workspace because Cloud Build uses it as a working directory by default.
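If a later step needs to act on the saved bucket name (say, list its contents), a sketch following the same pattern could be (the step id and the gsutil command are just illustrative):
- id: "Use Values"
  name: gcr.io/cloud-builders/gcloud
  entrypoint: 'bash'
  args:
    - -c
    - |
      # Read the bucket name back from /workspace and use it
      gsutil ls gs://$(cat /workspace/files-source)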
Reference: https://medium.com/google-cloud/how-to-pass-data-between-cloud-build-steps-de5c9ebc4cdd
You can't override the substitution variables during the Cloud Build process, so you have two solutions:
Either you work with shell ("Linux") variables, and Donnald's answer is the right solution (you read the value from the file in each step and then use it),
Or you can call a Cloud Build from within a Cloud Build, like this:
Create the Cloud Build file for your core build, with substitution variables
steps:
- name: 'gcr.io/cloud-builders/gcloud'
entrypoint: 'bash'
args:
- -c
- |
echo $_FUNCTIONS_BUCKET
echo $_FILES_SOURCE
...
substitutions:
_FUNCTIONS_BUCKET:
_FILES_SOURCE:
Then, create the initialization file, cloudbuild-init.yaml:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
entrypoint: 'bash'
args:
- -c
- |
REGION=us-east1
DEVSHELL_PROJECT_ID=sample-proj
FUNCTIONS_BUCKET=$${DEVSHELL_PROJECT_ID}-functions-$$(date +%s)
FILES_SOURCE=$${DEVSHELL_PROJECT_ID}-source-$$(date +%s)
gcloud builds submit --async --substitutions=_FUNCTIONS_BUCKET=$${FUNCTIONS_BUCKET},_FILES_SOURCE=$${FILES_SOURCE}
Note the --async flag: the init build doesn't wait for the underlying Cloud Build to finish, otherwise you would pay for the build time twice. On the other hand, the trigger won't tell you whether the underlying job worked or not.
It's a matter of trade-offs here.
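For reference, running the init config manually would look something like this (a sketch; --config points at the file above, and the inner gcloud builds submit picks up the core cloudbuild.yaml by default):
gcloud builds submit --config=cloudbuild-init.yaml .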
I have automatic builds set up in Google Cloud, so that each time I push to the master branch of my repository, a new image is built and pushed to Google Container Registry.
These images pile up quickly, and I don't need all the old ones. So I would like to add a build step that runs a bash script which calls gcloud container images list-tags, loops the results, and deletes the old ones with gcloud container images delete.
I have the script written and it works locally. I am having trouble figuring out how to run it as a step in Cloud Builder.
It seems there are 2 options:
- name: 'ubuntu'
args: ['bash', './container-registry-cleanup.sh']
In the above step in cloudbuild.yml I try to run the bash command in the ubuntu image. This doesn't work because the gcloud command does not exist in this image.
- name: 'gcr.io/cloud-builders/gcloud'
args: [what goes here???]
In the above step in cloudbuild.yml I try to use the gcloud image, but since "Arguments passed to this builder will be passed to gcloud directly", I don't know how to call my bash script here.
What can I do?
You can customize the entrypoint of your build step. If you need gcloud installed, use the gcloud cloud builder and do this:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
entrypoint: "bash"
args:
- "-c"
- |
echo "enter 1 bash command per line"
ls -la
gcloud version
...
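Applied to your case, a sketch of the step could be as simple as this (the script must be in your build source so it is available in the working directory):
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['./container-registry-cleanup.sh']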
As the official documentation on Creating custom build steps indicates, to execute a shell script from your source in a custom build step, the step's container image must contain a tool capable of running the script.
The example below shows how to configure your args so the execution runs correctly.
steps:
- name: 'ubuntu'
args: ['bash', './myscript.bash']
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/$PROJECT_ID/custom-script-test', '.']
images: ['gcr.io/$PROJECT_ID/custom-script-test']
I would recommend taking a look at the above documentation and example to test and confirm whether they help you run your script.
For your case specifically, there is this other answer here, which indicates that you will need to override the build step's entrypoint to bash so the script runs. It's indicated as follows:
- name: gcr.io/cloud-builders/gcloud
entrypoint: /bin/bash
args: ['-c', 'gcloud compute instances list > gce-list.txt']
Besides that, the two articles below include more information and examples on how to configure custom scripts to run in Cloud Build, and I would recommend taking a look at them as well:
CI/CD: Google Cloud Build — Custom Scripts
Mastering Google Cloud Build Config Syntax
Let me know if the information helped you!
I have a cloud_build.yaml config for my CI/CD pipeline on GCP using Cloud Build. On the command line I can pass a substitution variable that includes the current timestamp: "notebook-instance-$(date +%Y-%m-%d-%H-%M)-v05". This works fine.
When I add a GitHub trigger on the Cloud Build web page, though, I can't find a way to generate the timestamp the way I was doing in the CLI with $(date +%Y-%m-%d-%H-%M)-v05.
Any idea how to do that on the Cloud Build Triggers page?
I also tried to do it inside the cloud_build.yaml config, but without success so far:
- name: 'gcr.io/cloud-builders/gcloud'
id: Deploy the AI Platform Notebook instance
args:
- 'deployment-manager'
- 'deployments'
- 'create'
- '$(date -u +%Y-%m-%d-%H-%M)-${_NAME_INSTANCE}'
Any idea how to create a variable from the current timestamp in the Cloud Build .yaml config?
A third option would be to extract the timestamp in my .jinja deployment template, but I hit the same issue there: I can't find a way to get the current timestamp to build my variable name.
One solution is to do the following:
- name: 'gcr.io/cloud-builders/gcloud'
entrypoint: sh
args:
- '-c'
- |
gcloud \
deployment-manager \
deployments \
create \
xxxx
The issue is that you cannot use the value in another step later. Another option is to write the variable to a file in the workspace; this can be accessed later during the build (see this related Stack Overflow answer).
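A sketch of that second approach, assuming a later step needs the same timestamped name again (the file name deployment_name and the notebook.jinja template are placeholders; ${_NAME_INSTANCE} is the substitution from your config):
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: sh
  args:
    - '-c'
    - |
      # Compute the timestamped name once and persist it for later steps
      echo "$(date -u +%Y-%m-%d-%H-%M)-${_NAME_INSTANCE}" > /workspace/deployment_name
      gcloud deployment-manager deployments create "$(cat /workspace/deployment_name)" --template notebook.jinja
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: sh
  args:
    - '-c'
    - |
      # A later step can read the exact same name back
      echo "Created deployment $(cat /workspace/deployment_name)"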
I'm trying to use google cloud build. At one step, I need to get a list of all running compute instances.
- name: gcr.io/cloud-builders/gcloud
args: ['compute', 'instances', 'list']
and it works fine. The problem starts when I try to save the output to a file.
Trial 1: failed
- name: gcr.io/cloud-builders/gcloud
args: ['compute', 'instances', 'list', '> gce-list.txt']
Trial 2: failed
- name: gcr.io/cloud-builders/gcloud
args: ['compute', 'instances', 'list', '>', 'gce-list.txt']
Trial 3: failed
- name: gcr.io/cloud-builders/gcloud
args: >
compute instances list > gce-list.txt
Trial 4: failed
- name: gcr.io/cloud-builders/gcloud
args: |
compute instances list > gce-list.txt
UPDATE: 2018-09-04 17:50
Trial 5: failed
Built a gcloud image based on Ubuntu
Used that image to run a custom script file, 'list-gce.sh'
list-gce.sh calls gcloud compute instances list
For more details you can check this gist:
https://gist.github.com/mahmoud-samy/e67f141e8b5d553de68a58a30a432ed2
Unfortunately I got this strange error:
rev 1
ERROR: (gcloud) unrecognized arguments: list (did you mean 'list'?)
rev 2
ERROR: (gcloud) unrecognized arguments: --version (did you mean '--version'?)
Any suggestions, or references?
In addition to the other answers, to do cmd > foo.txt you need to override the build step's entrypoint to bash (or sh):
- name: gcr.io/cloud-builders/gcloud
entrypoint: /bin/bash
args: ['-c', 'gcloud compute instances list > gce-list.txt']
Those commands are not executed in a shell, so shell operations such as pipes (|) and redirections (>) are not available.
Workaround
Use a gcloud container which does have a shell. The gcr.io/cloud-builders/gcloud container should have bash, as it is ultimately derived from an Ubuntu 16.04 image.
In your Cloud Build task sequence, execute a shell script which performs the gcloud calls for you and redirects the output to a file. Some observations:
You'll need to store the shell script somewhere sensible; probably in your source repository so it becomes available to the build.
The gcloud container can still be used, as this will ensure the Google Cloud SDK tools are available to your script. You will need to override the entrypoint in the Cloud Build manifest to be /bin/bash, or some other shell, and pass the path to your script as an argument (see the sketch after these observations).
As DazWilkin identifies in a comment, the Cloud Build service account will also require the compute.instances.list permission to list instances.
The /workspace directory is mounted into all Cloud Build containers and its contents are persisted between and accessible from subsequent build steps. If the output of the gcloud command, or a post-processed version of it, is required by subsequent build steps, you can write it out there.
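Putting these observations together, a sketch could look like this. The build step, with list-gce.sh stored in your source repository:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: /bin/bash
  args: ['./list-gce.sh']
And list-gce.sh itself:
#!/bin/bash
# Write the instance list into /workspace so later build steps can read it
gcloud compute instances list > /workspace/gce-list.txt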
Relevant Google documentation.