Continuous deployment from git using Cloud Build - google-cloud-platform

I am trying to make a build trigger for Cloud Run using this tutorial,
but I get the following error message:
Starting Step #0
Step #0: Already have image (with digest): gcr.io/cloud-builders/docker
Step #0: unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
Finished Step #0
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
Does anyone know why?
EDIT: My project repo is split into frontend and backend folders. I am just trying to deploy my backend folder, which contains a Go API.

I have followed the tutorial you provided and I encountered the same error message.
It seems like the steps specified inside the cloudbuild.yaml file require a Dockerfile in the repository's root folder. Specifically, the following instruction builds the image from your . (root) directory:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/[SERVICE-NAME]:$COMMIT_SHA', '.']
There are two solutions to your problem. If you need to build a Docker image, simply creating the Dockerfile will solve your issue. Another solution would be to not use a custom image. I have used the following cloudbuild.yaml file in order to deploy successfully:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - 'run'
  - 'deploy'
  - '[SERVICE-NAME]'
  - '--image'
  - 'gcr.io/cloudrun/hello'
  - '--region'
  - '[REGION]'
  - '--platform'
  - 'managed'
Notice how I'm still using a container image (gcr.io/cloudrun/hello).
-- edit
As explained by @guillaume-blaquiere, the tutorial takes for granted that your repository is already working on Cloud Run. You should go through a Cloud Run tutorial before this one.
-- edit 2
A third solution that worked for the OP is to specify the path of the Dockerfile in the build instruction. That is done by replacing the . directory with the relative directory that contains the Dockerfile, as in the sketch below.
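For example, with the frontend/backend layout from the question's edit, the build step could point at the backend folder (a sketch; ./backend and [SERVICE-NAME] are placeholders):
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/[SERVICE-NAME]:$COMMIT_SHA', './backend']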

The error says /workspace/Dockerfile: no such file or directory
I suppose your repository does not contain a Dockerfile at its root.

Related

Using 2 Dockerfiles in Cloud Build to re-use intermediary step image if CloudBuild fails

My Cloud Build fails with a Timeout Error (I'm trying to deploy Prophet on Cloud Run). Therefore I'm trying to split the Dockerfile into two, saving the intermediate image in case a later build fails. I'd split the Dockerfile like this:
Dockerfile_one: python + prophet's dependencies
Dockerfile_two: image_from_Dockerfile_one + prophet + other dependencies
What should cloudbuild.yaml look like to:
1. if a previously built image is available, skip the step; otherwise run the step with Dockerfile_one and save the image
2. use the image from step (1), add more dependencies to it, and save the image for deployment
Here is what cloudbuild.yaml looks like right now:
steps:
# create gcr source directory
- name: 'bash'
  args:
  - '-c'
  - |
    echo 'Creating gcr_source directory for ${_GCR_NAME}'
    mkdir _gcr_source
    cp -r cloudruns/${_GCR_NAME}/. _gcr_source
# Build the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/${_GCR_NAME}', '.']
  dir: '_gcr_source'
# Push the container image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/${_GCR_NAME}']
# Deploy container image to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: gcloud
  args:
  - run
  - deploy
  - ${_GCR_NAME}
  - --image=gcr.io/$PROJECT_ID/${_GCR_NAME}
Thanks a lot!
You need to have 2 pipelines.
The first one creates the base image. That way, you can trigger it every time you need to rebuild this base image, possibly with a different lifecycle than your application's lifecycle. Something similar to this:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/<PROJECT_ID>/base-image', '-f', 'DOCKERFILE_ONE', '.']
images: ['gcr.io/<PROJECT_ID>/base-image']
Then, in your second Dockerfile, start from the base image and use a second Cloud Build pipeline to build, push and deploy it (as you do in the last 3 steps of your question):
FROM gcr.io/<PROJECT_ID>/base-image
COPY .....
....
...
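That second pipeline could look something like this (a sketch; DOCKERFILE_TWO, the ${_GCR_NAME} substitution and <REGION> are placeholders modeled on the question's own steps):
steps:
# Build the application image on top of the base image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/${_GCR_NAME}', '-f', 'DOCKERFILE_TWO', '.']
# Push the container image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/${_GCR_NAME}']
# Deploy the container image to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: gcloud
  args: ['run', 'deploy', '${_GCR_NAME}', '--image=gcr.io/$PROJECT_ID/${_GCR_NAME}', '--region', '<REGION>', '--platform', 'managed']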
Not the answer, but a workaround: if anybody has the same issue, using Python 3.8 instead of 3.9 worked for Cloud Build.
This is what the Dockerfile looks like:
# Base image (Python 3.8, per the workaround above)
FROM python:3.8-slim
RUN pip install --upgrade pip wheel setuptools
# Install pystan (version specifiers quoted so the shell does not treat > as a redirect)
RUN pip install "Cython>=0.22"
RUN pip install "numpy>=1.7"
RUN pip install pystan==2.19.1.1
# Install other prophet dependencies (requirements.txt copied in from the build context)
COPY requirements.txt ./
RUN pip install -r requirements.txt
RUN pip install prophet
Though figuring out how to iteratively build images for Cloud Run would be really great.
Why did your Cloud Build fail with a Timeout Error?
While building images in Docker, it is important to keep the image size down. Often multiple Dockerfiles are created to handle the image size constraint. In your case, you were not able to reduce the image size and include only what is needed.
What can be done to rectify it?
As per this documentation, multi-stage builds, (introduced in
Docker 17.05) allows you to build your app in a first "build"
container and use the result in another container, while using the
same Dockerfile.
You use multiple FROM statements in your Dockerfile. Each FROM
instruction can use a different base, and each of them begins a new
stage of the build. You can selectively copy artifacts from one stage
to another, leaving behind everything you don’t want in the final
image. To show how this works, follow this link.
You only need a single Dockerfile.
The result is the same tiny production image as before, with a
significant reduction in complexity. You don’t need to create any
intermediate images and you don’t need to extract any artifacts to
your local system at all.
How does it work?
You can name your build stages. By default, the stages are not
named, and you refer to them by their integer number, starting with 0
for the first FROM instruction. However, you can name your stages, by
adding an AS to the FROM instruction.
When you build your image, you don’t necessarily need to build the
entire Dockerfile including every stage. You can specify a target
build stage.
When using multi-stage builds, you are not limited to copying from
stages you created earlier in your Dockerfile. You can use the
COPY --from instruction to copy from a separate image, either
using the local image name, a tag available locally or on a Docker
registry, or a tag ID.
You can pick up where a previous stage left off by referring
to it when using the FROM directive.
In the Google documentation, there is an example of a Dockerfile
which uses multi-stage builds. The hello binary is built in a first
container and injected in a second one. Because the second container
is based on scratch, the resulting image contains only the hello
binary and not the source file and object files needed during the
build.
FROM golang:1.10 as builder
WORKDIR /tmp/go
COPY hello.go ./
RUN CGO_ENABLED=0 go build -a -ldflags '-s' -o hello
FROM scratch
CMD [ "/hello" ]
COPY --from=builder /tmp/go/hello /hello
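For instance, a Cloud Build step could build only the first stage by passing a target; this is a sketch, assuming the stage is named builder as in the example above:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--target', 'builder', '-t', 'gcr.io/$PROJECT_ID/base-image', '.']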
Here is a tutorial to understand how multi-stage builds work.

Cloud build can't open requirements.txt

I want to set up a Cloud Build trigger so that each time I modify (commit and push) main.py, it executes test_mainpytest.py with pytest.
I have a project that looks like this:
My_Project\function_one\
    main.py
    deploy.yaml
    requirements.txt
    dir_pytest\
        test_mainpytest.py
My deploy.yaml contains these steps:
steps:
- name: 'python'
  args: ['pip3', 'install', '-r', 'My_Project/function_one/requirements.txt', '--user']
- name: 'python'
  args: ['python3', 'pytest', 'My_Project/function_one/dir_pytest/']
For the moment I just want to try to execute pytest using the trigger. When I execute the Cloud Build trigger, I get this error:
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'My_Project/function_one/requirements.txt'
Also, my project is saved in a Google Cloud Source Repository.
Edit:
I tried to add dir in my step, so it currently looks like this:
steps:
- name: 'python'
  dir: 'MyProject/function_one/'
  args: ['pip3', 'install', '-r', 'My_Project/function_one/requirements.txt', '--user']
- name: 'python'
  dir: 'MyProject/function_one/'
  args: ['python3', 'pytest', 'My_Project/function_one/dir_pytest/']
Yet I still get the error (I also tried to put dir after args, but it didn't change anything).
I also noticed these 2 lines when executing the trigger in Cloud Build:
Initialized empty Git repository in /workspace/.git/
From https://source.developers.google.com/p/my_id_1234/r/My_Project
Should I use https://source.developers.google.com/p/my_id_1234/r/My_Project and add the path to my requirements.txt and my pytest directory?
Could you show your whole cloudbuild.yaml? If you are using a build trigger, the repository is imported directly in /workspace. If you are doing a git clone, then your repository is inside a directory with the name of the repository. The difference is:
/workspace/my-repository/My_Project/function_one/requirements.txt
versus
/workspace/My_Project/function_one/requirements.txt
If nothing else works, you can do ls -R to show you the directory structure within the build. Add this as a first build step:
- id: 'list recursively'
  name: 'ubuntu'
  args: ['ls', '-R']
Notice that Cloud Build uses a directory called /workspace as a working directory in order to persist the contents. You can add the dir field within your cloudbuild.yaml file in order for Cloud Build to find the requirements.txt file and then run the tests.
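For the build-trigger case, where the repository is imported directly into /workspace, a configuration along these lines should at least let the pip step find requirements.txt. This is a sketch assuming the My_Project/function_one layout from the question (note the underscore in My_Project, which the edited attempt spells MyProject); invoking pytest via python3 -m is an assumption, not the original command:
steps:
# Install the function's dependencies; dir is relative to /workspace
- name: 'python'
  dir: 'My_Project/function_one'
  args: ['pip3', 'install', '-r', 'requirements.txt', '--user']
# Run the tests from the same directory
- name: 'python'
  dir: 'My_Project/function_one'
  args: ['python3', '-m', 'pytest', 'dir_pytest/']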

Google Cloud Build - Terraform Self-Destruction on Build Failure

I'm currently facing an issue with my Google Cloud Build for CI/CD.
First, I build new Docker images of multiple microservices and use Terraform to create the GCP infrastructure in which the containers will also live in production.
Then I perform some integration / system tests, and if everything is fine, I push new versions of the microservice images to the container registry for later deployment.
My problem is, that the Terraformed infrastructure doesn't get destroyed if the cloud build fails.
Is there a way to always execute a Cloud Build step even if some previous steps have failed? Here I would want to always execute "terraform destroy".
Or specifically for Terraform, is there a way to define a self-destructive Terraform environment?
cloudbuild.yaml example with just one docker container
steps:
# build fresh ...
- id: build
  name: 'gcr.io/cloud-builders/docker'
  dir: '...'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/staging/...:latest', '-t', 'gcr.io/$PROJECT_ID/staging/...:$BUILD_ID', '.', '--file', 'production.dockerfile']
# push
- id: push
  name: 'gcr.io/cloud-builders/docker'
  dir: '...'
  args: ['push', 'gcr.io/$PROJECT_ID/staging/...']
  waitFor: [build]
# setup terraform
- id: terraform-init
  name: 'hashicorp/terraform:0.12.28'
  dir: '...'
  args: ['init']
  waitFor: [push]
# deploy GCP resources
- id: terraform-apply
  name: 'hashicorp/terraform:0.12.28'
  dir: '...'
  args: ['apply', '-auto-approve']
  waitFor: [terraform-init]
# tests
- id: tests
  name: 'python:3.7-slim'
  dir: '...'
  waitFor: [terraform-apply]
  entrypoint: /bin/sh
  args:
  - -c
  - 'pip install -r requirements.txt && pytest ... --tfstate terraform.tfstate'
# remove GCP resources
- id: terraform-destroy
  name: 'hashicorp/terraform:0.12.28'
  dir: '...'
  args: ['destroy', '-auto-approve']
  waitFor: [tests]
Google Cloud Build doesn't yet support allow_failure or some similar mechanism as mentioned in this unsolved but closed issue.
What you can do, and as mentioned in the linked issue, is to chain shell conditional operators.
If you want to run a command on failure then you can do something like this:
- id: tests
  name: 'python:3.7-slim'
  dir: '...'
  waitFor: [terraform-apply]
  entrypoint: /bin/sh
  args:
  - -c
  - pip install -r requirements.txt && pytest ... --tfstate terraform.tfstate || echo "This failed!"
This would run your tests as normal and then echo This failed! to the logs if the tests fail. If you want to run terraform destroy -auto-approve on failure, then you would need to replace the echo "This failed!" with terraform destroy -auto-approve. Of course, you will also need the Terraform binaries in the Docker image you are using, so you will need to use a custom image that has both Python and Terraform in it for that to work.
- id: tests
  name: 'example-customer-python-and-terraform-image:3.7-slim-0.12.28'
  dir: '...'
  waitFor: [terraform-apply]
  entrypoint: /bin/sh
  args:
  - -c
  - pip install -r requirements.txt && pytest ... --tfstate terraform.tfstate || { terraform destroy -auto-approve; false; }
The above job also runs false after terraform destroy so that the step still returns a non-zero exit code and marks the job as failed, instead of only failing if terraform destroy itself failed as well.
An alternative to this would be to use something like Test Kitchen which will automatically stand up infrastructure, run the necessary verifiers and then destroy it at the end all in a single kitchen test command.
It's probably also worth mentioning that your pipeline is entirely serial so you don't need to use waitFor. This is mentioned in the Google Cloud Build documentation:
A build step specifies an action that you want Cloud Build to perform.
For each build step, Cloud Build executes a docker container as an
instance of docker run. Build steps are analogous to commands in a
script and provide you with the flexibility of executing arbitrary
instructions in your build. If you can package a build tool into a
container, Cloud Build can execute it as part of your build. By
default, Cloud Build executes all steps of a build serially on the
same machine. If you have steps that can run concurrently, use the
waitFor option.
and
Use the waitFor field in a build step to specify which steps must run
before the build step is run. If no values are provided for waitFor,
the build step waits for all prior build steps in the build request to
complete successfully before running. For instructions on using
waitFor and id, see Configuring build step order.
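Concretely, since each step waits for all previous steps by default, the question's pipeline behaves the same with the waitFor fields removed. A sketch of two of the steps without them:
steps:
# deploy GCP resources
- id: terraform-apply
  name: 'hashicorp/terraform:0.12.28'
  dir: '...'
  args: ['apply', '-auto-approve']
# tests (runs after terraform-apply because steps execute serially by default)
- id: tests
  name: 'python:3.7-slim'
  dir: '...'
  entrypoint: /bin/sh
  args:
  - -c
  - 'pip install -r requirements.txt && pytest ... --tfstate terraform.tfstate'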

Google Cloud Build Trigger failing with "ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1"

I am trying to set up continuous deployment of my Golang backend using the Google documentation, but when my trigger fires, it fails with the following error:
starting build "eba3ce39-caad-43f0-a255-0a3cacec4913"
FETCHSOURCE
Initialized empty Git repository in /workspace/.git/
From https://source.developers.google.com/p/my-porject/r/github_myusername_myproject.com
* branch 660796f575bae6860d6f96df60cfd631a730c3ae -> FETCH_HEAD
HEAD is now at 660796f cloudbuild.yaml
BUILD
Starting Step #0
Step #0: Already have image (with digest): gcr.io/cloud-builders/docker
Step #0: unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
Finished Step #0
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
My project file structure looks like:
project
    frontend
    backend
        main.go
        cloudbuild.yaml
        Dockerfile
where my cloudbuild.yaml looks like:
steps:
# Build the container image
- name: "gcr.io/cloud-builders/docker"
  args:
    [
      "build",
      "-t",
      "gcr.io/my-project/github.com/username/project.com:$COMMIT_SHA",
      ".",
    ]
# Push the image to Container Registry
- name: "gcr.io/cloud-builders/docker"
  args:
    [
      "push",
      "gcr.io/my-project/github.com/username/project.com:$COMMIT_SHA",
    ]
# Deploy image to Cloud Run
- name: "gcr.io/cloud-builders/gcloud"
  args:
    - "run"
    - "deploy"
    - "[SERVICE_NAME]"
    - "--image"
    - "gcr.io/my-project/github.com/username/project.com:$COMMIT_SHA"
    - "--region"
    - "us-central1"
    - "--platform"
    - "managed"
images:
  - gcr.io/my-project/github.com/username/project.com
and my Dockerfile looks like
# Use the official Golang image to create a build artifact.
# This is based on Debian and sets the GOPATH to /go.
# https://hub.docker.com/_/golang
FROM golang:1.13 as builder
# Create and change to the app directory.
WORKDIR /app
# Retrieve application dependencies.
# This allows the container build to reuse cached dependencies.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary.
RUN CGO_ENABLED=0 GOOS=linux go build -mod=readonly -v -o server
# Use the official Alpine image for a lean production container.
# https://hub.docker.com/_/alpine
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM alpine:3
RUN apk add --no-cache ca-certificates
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/server /server
# Run the web service on container startup.
CMD ["/server"]
I got the Dockerfile from Quickstart: Build and Deploy.
When you push to your GitHub repo, Cloud Build triggers and looks for the cloudbuild.yaml file. You can specify the cloudbuild.yaml location when you create the build trigger, by editing the Configuration section, choosing Cloud Build configuration file (yaml or json), and setting the file's location. In your case, just make it backend/cloudbuild.yaml.
That's not enough, though: when the build starts, the docker build command in your first step runs with . as the build context. That is wrong here, because your whole repo is copied to the build workspace, so the build context is relative to the repository root and not to the directory the cloudbuild.yaml sits in.
To solve this issue, just change the Docker build context to ./backend. Your final cloudbuild.yaml should look something like:
steps:
# Build the container image
- name: "gcr.io/cloud-builders/docker"
  args:
    [
      "build",
      "-t",
      "gcr.io/my-project/github.com/username/project.com:$COMMIT_SHA",
      "./backend",
    ]
# Rest of the steps ...
The Cloud Build trigger is currently pointing to /project/ while your directory structure is as follows:
project
    frontend
    backend
        main.go
        cloudbuild.yaml
        Dockerfile
When you execute the trigger, the repository is copied to /workspace/, and the build therefore cannot find a Dockerfile at the root of that directory.
You can move everything to the same working directory.
.
├── main.go
├── cloudbuild.yaml
└── Dockerfile
If you would like to keep your current directory structure, your Cloud Build trigger will need to point to /project/backend/ instead. Note that you can check your directory structure using the ls -la Linux command.

Docker image fails to build on Google Container Registry

I have setup a trigger from Bitbucket to Google Container Registry.
I have a Dockerfile in the root, and am able to build the container fine from my local machine.
I get this error in Google Container Registry when the trigger runs (I did not modify the command that GCR wanted to run - it's the default). My project name has been replaced with "project":
FETCHSOURCE
Initialized empty Git repository in /workspace/.git/
From https://source.developers.google.com/p/project/r/bitbucket-project-gateway
* branch c65f16b3f52262a047c71e7140aecc4300265497 -> FETCH_HEAD
HEAD is now at c65f16b testing
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
invalid argument "gcr.io/project/bitbucket-project-gateway:" for t: invalid reference format
See 'docker build --help'.
ERROR
ERROR: build step "gcr.io/cloud-builders/docker#sha256:e576df764ae28d3c072019a235b6c8966df11eecb472c59b0963d783bb8a713b" failed: exit status 125
It looks like the image's tag is missing (after the ":").
Do you have a cloudbuild.yaml config file? If so, do you use some substitution variables (e.g. $REVISION_ID)? Maybe there is a misspelling there?
Cheers,
Philmod
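For reference, if the tag comes from a substitution, an empty or misspelled variable leaves nothing after the colon, which produces exactly this "invalid reference format" error. A build step with the tag supplied by a substitution might look like this (a sketch; the image path is illustrative):
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/bitbucket-project-gateway:$COMMIT_SHA', '.']
images: ['gcr.io/$PROJECT_ID/bitbucket-project-gateway:$COMMIT_SHA']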
For others who come along and run into this same issue when pushing a Dockerfile with a Cloud Build YAML file, my mistakes were:
1. I had ${SHORT_SHA} in one place and not the other (it was on the Artifact Registry push and not on the build) - credit to Philmod (https://stackoverflow.com/a/44716934/18176030) for pointing out that the tag was not right.
2. I was using "gcr.io" during the build process and not on the Artifact Registry push (which was using "us-east1-docker.pkg.dev").