Google Cloud Build - custom machine type

I'm using the Google Cloud Build service to create images of my application. I created a build trigger that looks for a git tag in a specific format. Each time that Cloud Build detects a new tag, a new build is performed.
Since the build time is pretty long, I am trying to make it faster.
I found that it's possible to ask Google to build the application on a faster machine (Source).
gcloud builds submit --config=cloudbuild.yaml --machine-type=n1-highcpu-8 .
This command works for manual builds. Since I created the build trigger from the GCP user interface, I can't find any place to define the machine-type argument.
How can I choose the machine-type on automatic build triggers?
UPDATE:
In the Trigger window, I chose Build Configuration = Dockerfile, and this is my Dockerfile preview:
docker build \
-t gcr.io/PROJ_NAME/APP_NAME/$TAG_NAME:$COMMIT_SHA \
-f deployments/docker/APPNAME.docker \
.
What should my cloudbuild.yaml file look like?

You need to change to Build Configuration=Cloud Build configuration file, and commit the cloudbuild.yaml to git.
Then use the machineType field in the options property of your cloudbuild.yaml file.
E.g.:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/PROJ_NAME/APP_NAME/$TAG_NAME:$COMMIT_SHA', '-f', 'deployments/docker/APPNAME.docker', '.']
options:
  machineType: 'N1_HIGHCPU_8'
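To confirm that a triggered build actually picked up the option, one way (a sketch; BUILD_ID stands for the ID shown in the build history) is to inspect the build with gcloud:

gcloud builds describe BUILD_ID --format='value(options.machineType)'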

Related

Azure DevOps YAML self hosted agent pipeline build is stuck at locating self-agent

Action: I tried to configure and run a simple C++ Azure pipeline on a self-hosted Windows computer. I'm pretty new to all this. I ran the script below.
Expected: to see the build task, display task, and clean task run, and to see "hello world".
Result: Error, the script can't find my build agent.
##[warning]An image label with the label Weltgeist does not exist.
##[error]The remote provider was unable to process the request.
Pool: Azure Pipelines
Image: Weltgeist
Started: Today at 10:16 p.m.
Duration: 14m 23s
Info & Test:
My self-hosted agent name is Weltgeist and it's part of the default agent pool. It's a Windows computer, with g++, MinGW, and other related tools on it.
I tried my build task locally with no problem.
I tried my build task using the Azure 'ubuntu-latest' agent with no problem.
I created the self-hosted agent following this specification: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/v2-windows?view=azure-devops
I'm the owner of the Azure repo.
How do I correctly configure the pool yaml parameter for a self-hosted agent?
Do I have additional steps to do server-side, or in the Azure repo configuration?
Any other ideas about what went wrong?
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml
trigger:
- master

pool:
  vmImage: 'Weltgeist' # Testing with self-hosted agent

steps:
- script: |
    mkdir ./build
    g++ -g ./src/hello-world.cpp -o ./build/hello-world.exe
  displayName: 'Run a build script'
- script: |
    ./build/hello-world.exe
  displayName: 'Run Display task'
- script: |
    rm -r build
  displayName: 'Clean task'
(UPDATE)
Solution:
Thanks, after updating it as stated in the answer below and reading a bit more of the pool yaml definition, it works. Note that I modified a couple of other lines to make it work in my environment.
trigger:
- master

pool:
  name: Default
  demands:
  - agent.name -equals Weltgeist

steps:
- script: |
    mkdir build
    g++ -o ./build/hello-world.exe ./src/hello-world.cpp
  displayName: 'Run a build script'
- script: |
    cd build
    hello-world.exe
    cd ..
  displayName: 'Run Display task'
- script: |
    rm -r build
  displayName: 'Clean task'
I was confused by the pool name Default because there was already a pipeline named Default in the organization.
Expanding on the answers provided here.
pool:
  name: NameOfYourPool
  demands:
  - agent.name -equals NameOfYourAgent
Here is the screen where you'll find that information in DevOps.
Since you are using the self-hosted agent, you could use the following format:
pool:
  name: Default
  demands:
  - agent.name -equals Weltgeist
Then it should work as expected.
You can refer to the docs on the pool definition in YAML.
I faced the same issue, and replacing vmImage under pool with name worked for me. Please find the configuration below:
trigger:
- master

pool:
  name: 'Weltgeist' # Testing with self-hosted agent
Also be aware that if your agent only appears in the "Azure Pipelines" pool and not in any of the other pools, then the agent may have been configured as an "Environment" resource, and can't be used as part of the build step.
I spent ages trying to use a self-hosted VM for a build step, thinking that the correct way to reference the VM was by creating a VM resource from the Pipelines > Environments area:
The agent would be properly created and visible in the "Azure Pipelines" pool, but it wouldn't be available in any of the other pools, which meant it couldn't be referenced in the yaml used for setting the server used for builds.
I was able to resolve the issue by de-registering the agent on my self-hosted VM with .\config.cmd remove and running .\config.cmd again without the --environment --environmentname "<name>" arguments that were provided within the registration script mentioned above (shown in the "Add resource" screenshot).
Oddly, the registration script is a much quicker way to register an agent than the "New agent" form shown in Agent Pools:
The necessary files are pulled to the server (without having to download one first) and a PAT with a 3-hour lifetime is auto-generated.
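For reference, a minimal sketch of the de-register/re-register sequence, assuming a PowerShell prompt in the agent directory; the organization URL, PAT, pool, and agent name below are placeholders, not values from the original post:

.\config.cmd remove --auth pat --token <PAT>
.\config.cmd --unattended `
  --url https://dev.azure.com/<ORG> `
  --auth pat --token <PAT> `
  --pool Default `
  --agent Weltgeist

The key point is omitting the --environment and --environmentname arguments, so the agent registers into an agent pool rather than as an environment resource.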

How can I call gcloud commands from a shell script during a build step?

I have automatic builds set up in Google Cloud, so that each time I push to the master branch of my repository, a new image is built and pushed to Google Container Registry.
These images pile up quickly, and I don't need all the old ones. So I would like to add a build step that runs a bash script which calls gcloud container images list-tags, loops over the results, and deletes the old ones with gcloud container images delete.
I have the script written and it works locally. I am having trouble figuring out how to run it as a step in Cloud Build.
It seems there are 2 options:
- name: 'ubuntu'
  args: ['bash', './container-registry-cleanup.sh']
In the above step in cloudbuild.yml I try to run the bash command in the ubuntu image. This doesn't work because the gcloud command does not exist in this image.
- name: 'gcr.io/cloud-builders/gcloud'
  args: [what goes here???]
In the above step in cloudbuild.yml I try to use the gcloud image, but since "Arguments passed to this builder will be passed to gcloud directly", I don't know how to call my bash script here.
What can I do?
You can customize the entrypoint of your build step. If you need gcloud installed, use the gcloud cloud builder and do this:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: "bash"
  args:
  - "-c"
  - |
    echo "enter 1 bash command per line"
    ls -la
    gcloud version
    ...
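With the entrypoint override in place, the OP's cleanup use case could run as a script step. A minimal sketch, assuming you want to keep the 10 newest tags and that the image lives at gcr.io/your-project/my-app (both are assumptions, not values from the question):

#!/usr/bin/env bash
# container-registry-cleanup.sh (sketch; image path and retention count are assumptions)
set -euo pipefail

IMAGE="gcr.io/your-project/my-app"  # assumption: adjust to your image path

# List digests newest-first, skip the 10 most recent, delete the rest
gcloud container images list-tags "$IMAGE" \
  --sort-by=~TIMESTAMP --format='get(digest)' | tail -n +11 |
while read -r digest; do
  gcloud container images delete -q --force-delete-tags "$IMAGE@$digest"
done

The corresponding build step would then be:

- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['./container-registry-cleanup.sh']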
As the official documentation on Creating custom build steps indicates, to execute a shell script from your source you need a custom build step, and the step's container image must contain a tool capable of running the script.
The example below shows how to configure your args so that the execution performs correctly.
steps:
- name: 'ubuntu'
  args: ['bash', './myscript.bash']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/custom-script-test', '.']
images: ['gcr.io/$PROJECT_ID/custom-script-test']
I would recommend taking a look at the documentation above, and at the example as well, to test and confirm whether it helps you achieve the execution of the script.
For your case specifically, there is this other answer here, which indicates that you need to override the entrypoint of the build step to bash so the script runs. It's indicated as follows:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: /bin/bash
  args: ['-c', 'gcloud compute instances list > gce-list.txt']
Besides that, the two articles below include more information and examples on how to configure customized scripts to run in your Cloud Build, which I would also recommend taking a look at.
CI/CD: Google Cloud Build — Custom Scripts
Mastering Google Cloud Build Config Syntax
Let me know if the information helped you!

Google cloud build cli - get artifact URL while build is ongoing

I'm using a Google cloud build trigger to build my container image. After pushing a new build (push by git tag), I can see it on my Cloud Build history.
As you can see, the build already has an artifact URI (the URI of the image). But when calling gcloud builds list, it's not there:
It only appears in the IMAGE column AFTER the build is completed.
How can I get the artifact URI while the build is still running?
IIUC, builds and artifacts are different things.
When you submit gcloud builds, you're creating builds (jobs) that are identified by build IDs.
Builds may result in the creation of artifacts (usually, but not limited to, container images). It is only at the conclusion of a build that artifacts are generated, and so you would not expect these to be available until then.
steps:
- name: "gcr.io/cloud-builders/docker"
  args:
  - build
  - -t
  - "gcr.io/your-project/your-image"
  - .
images:
- "gcr.io/your-project/your-image"
Or:
artifacts:
  objects:
    location: [STORAGE_LOCATION]
    paths:
    - [ARTIFACT_PATH]
    - [ARTIFACT_PATH]
    - ...
Because you specify images|artifacts in your build spec, you can infer (before these are created) where they will end up, and you can then query them directly. In the case of container images that were pushed, you can query these using gcloud container images list, perhaps (per the above) gcloud container images list --repository=gcr.io/your-project.
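For example (a sketch; the project and image paths are the placeholders from the spec above):

# List images pushed under the project's registry
gcloud container images list --repository=gcr.io/your-project
# List tags for a specific image, newest first
gcloud container images list-tags gcr.io/your-project/your-image --sort-by=~TIMESTAMP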
See:
https://cloud.google.com/cloud-build/docs/configuring-builds/store-images-artifacts#artifacts_examples
HTH!

How to extract actual timestamp in Cloud Build CI/CD pipeline yaml script or Cloud Build Triggers page

I have a cloud_build.yaml script for my CI/CD pipeline on GCP using Cloud Build. On the command line I can pass a substitution variable which will include the actual timestamp: "notebook-instance-$(date +%Y-%m-%d-%H-%M)-v05". This is working fine.
When I add a GitHub trigger on the Cloud Build webpage, I can't find a way to get the timestamp extracted in the same way that I was using on the CLI, $(date +%Y-%m-%d-%H-%M)-v05:
Any idea how to do that on the Cloud Build Triggers page?
I also tried to do it inside the cloud_build.yaml script, but without success so far.
- name: 'gcr.io/cloud-builders/gcloud'
  id: Deploy the AI Platform Notebook instance
  args:
  - 'deployment-manager'
  - 'deployments'
  - 'create'
  - '$(date -u +%Y-%m-%d-%H-%M)-${_NAME_INSTANCE}'
Any idea how to extract and create a variable using the actual timestamp in the Cloud Build .yaml script?
A third option is to extract the timestamp in my .jinja deployment script. Here I hit the same issue: I can't find a way to extract the actual timestamp to build my variable name.
One of the solutions is to do the following:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: sh
  args:
  - '-c'
  - |
    gcloud \
      deployment-manager \
      deployments \
      create \
      xxxx
The issue is that you cannot use it in another step later. Another option is to write the variable to a file in the workspace. This can be accessed later during the build (see this Stack Overflow answer).
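A minimal sketch of that file-based approach, relying on the /workspace directory that Cloud Build persists between steps; the ${_NAME_INSTANCE} substitution mirrors the question, and the file name is an arbitrary choice:

steps:
# Capture the timestamp once, into a file that survives across steps
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: sh
  args:
  - '-c'
  - 'date -u +%Y-%m-%d-%H-%M > /workspace/build_time'
# Read it back in any later step
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: sh
  args:
  - '-c'
  - |
    gcloud deployment-manager deployments create \
      "$(cat /workspace/build_time)-${_NAME_INSTANCE}"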

GitHub Cloud Build Integration with multiple cloudbuild.yamls in monorepo

GitHub's Google Cloud Build integration does not detect a cloudbuild.yaml or Dockerfile if it is not in the root of the repository.
When using a monorepo that contains multiple cloudbuild.yamls, how can GitHub's Google Cloud Build integration be configured to detect the correct cloudbuild.yaml?
File paths:
services/api/cloudbuild.yaml
services/nginx/cloudbuild.yaml
services/websocket/cloudbuild.yaml
Cloud Build integration output:
You can do this by adding a cloudbuild.yaml in the root of your repository with a single gcr.io/cloud-builders/gcloud step. This step should:
1. Traverse each subdirectory or use find to locate additional cloudbuild.yaml files.
2. For each found cloudbuild.yaml, fork and submit a build by running gcloud builds submit.
3. Wait for all the forked gcloud commands to complete.
There's a good example of one way to do this in the root cloudbuild.yaml within the GoogleCloudPlatform/cloud-builders-community repo.
If we strip out the non-essential parts, basically you have something like this:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    for d in */; do
      config="${d}cloudbuild.yaml"
      if [[ ! -f "${config}" ]]; then
        continue
      fi
      echo "Building $d ... "
      (
        gcloud builds submit $d --config=${config}
      ) &
    done
    wait
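One caveat worth noting (my note, not part of the original example): a bare wait exits 0 even when one of the forked gcloud builds submit calls fails, so the parent step still reports success. A hedged variant of the same loop body that propagates failures:

    pids=()
    for d in */; do
      config="${d}cloudbuild.yaml"
      [[ -f "${config}" ]] || continue
      gcloud builds submit "$d" --config="${config}" &
      pids+=($!)
    done
    rc=0
    for pid in "${pids[@]}"; do
      wait "$pid" || rc=1
    done
    exit $rc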
We are migrating to a mono-repo right now, and I haven't found any CI/CD solution that handles this well.
The key is to not only detect changes, but also any services that depend on that change. Here is what we are doing:
Requiring every service to have a Makefile with a build command.
Putting a cloudbuild.yaml at the root of the mono-repo.
We then run a custom build step with this little tool (old, but it still seems to work): https://github.com/jharlap/affected, which lists out all packages that have changed and all packages that depend on those packages, etc.
Then the shell script runs make build on any service that is affected by the change (a sketch of this step follows below).
So far it is working well, but I totally understand if this doesn't fit your workflow.
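A rough sketch of what that custom build step could look like; the affected invocation below is hypothetical (the tool is assumed to be installed in the builder image, and its real flags and output format may differ):

- name: 'gcr.io/cloud-builders/go'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    # Hypothetical: assume 'affected' prints one changed/dependent service dir per line
    for svc in $(affected); do
      if [[ -f "${svc}/Makefile" ]]; then
        (cd "${svc}" && make build)
      fi
    done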
Another option many people use is Bazel. It's not the simplest tool, but it is especially great if you have many different languages or build processes across your mono-repo.
You can create a build trigger for your repository. When setting up a trigger with cloudbuild.yaml for build configuration, you need to provide the path to the cloudbuild.yaml within the repository.
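For example, with a trigger per service, something like this could work from the CLI (a sketch; the owner, repo, and branch pattern are placeholders, and the --included-files filter, which restricts the trigger to changes under that service, is optional):

gcloud beta builds triggers create github \
  --repo-owner=OWNER \
  --repo-name=REPO \
  --branch-pattern='^master$' \
  --build-config=services/api/cloudbuild.yaml \
  --included-files='services/api/**'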