Google Cloud Run inaccessible even on successful build

My Google Cloud Run image was built successfully by Cloud Build from a GitHub repo. I don't see anything concerning in the build logs.
This is my Dockerfile:
# Use the official lightweight Node.js 17 image.
# https://hub.docker.com/_/node
FROM node:17-slim
RUN set -ex; \
    apt-get -y update; \
    apt-get -y install ghostscript; \
    apt-get -y install pngquant; \
    rm -rf /var/lib/apt/lists/*
# Create and change to the app directory.
WORKDIR /usr/src/app
# Copy application dependency manifests to the container image.
# A wildcard is used to ensure both package.json AND package-lock.json are copied.
# Copying this separately prevents re-running npm install on every code change.
COPY package*.json ./
# Install dependencies.
# 'npm ci' requires a package-lock.json and gives faster, reproducible installs.
RUN npm ci --only=production
# RUN npm install --production
# Copy local code to the container image.
COPY . ./
# Run the web service on container startup.
CMD [ "npm", "start" ]
But when I try to access the service through its public URL, I see:
Oops, something went wrong…
Continuous deployment has been set up, but your repository has failed to build and deploy.
This revision is a placeholder until your code successfully builds and deploys to the Cloud Run service myapi in asia-east1 of the GCP project myproject.
What's next?
From the Cloud Run service page, click "Build History".
Examine your build logs to understand why it failed.
Fix the issue in your code or Dockerfile (if any).
Commit and push the change to your repository.
It appears that the Node app never ran. What am I doing wrong?

It turns out that cloudbuild.yaml is not really optional for this continuous deployment setup. Adding the file with the following content resolved the issue:
steps:
  # Build the container image
  - name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t", "gcr.io/$PROJECT_ID/myapi:$COMMIT_SHA", "."]
  # Push the container image to Container Registry
  - name: "gcr.io/cloud-builders/docker"
    args: ["push", "gcr.io/$PROJECT_ID/myapi:$COMMIT_SHA"]
  # Deploy container image to Cloud Run
  - name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
    entrypoint: gcloud
    args:
      - "run"
      - "deploy"
      - "myapi"
      - "--image"
      - "gcr.io/$PROJECT_ID/myapi:$COMMIT_SHA"
      - "--region"
      - "asia-east1"
images:
  - "gcr.io/$PROJECT_ID/myapi:$COMMIT_SHA"

Related

Accessing Cloud SQL from Cloud Run on Google Cloud

I have a Cloud Run service that accesses a Cloud SQL instance through SQLAlchemy. However, in the Cloud Run logs I see: CloudSQL connection failed. Please see https://cloud.google.com/sql/docs/mysql/connect-run for additional details: ensure that the account has access to "<connection_string>". Going to that link, it says:
"By default, your app will authorize your connections using the Cloud Run (fully managed) service account. The service account is in the format PROJECT_NUMBER-compute#developer.gserviceaccount.com."
However, the following (https://cloud.google.com/run/docs/securing/service-identity) says:
"By default, Cloud Run revisions are using the Compute Engine default service account (PROJECT_NUMBER-compute#developer.gserviceaccount.com), which has the Project > Editor IAM role. This means that by default, your Cloud Run revisions have read and write access to all resources in your Google Cloud project."
So shouldn't that mean that Cloud Run can already access SQL? I've already set up the Cloud SQL Connection in the Cloud Run deployment page. What do you suggest I do to allow access to Cloud SQL from Cloud Run?
EDIT: I had to enable the Cloud SQL API.
No, Cloud Run cannot access Cloud SQL by default. You need to follow one of these two paths:
Connect using a local Unix socket file: configure the permissions as you described and deploy with flags indicating the intent to connect to the database (see the sketch after this list). Follow https://cloud.google.com/sql/docs/mysql/connect-run
Connect with a private IP: this involves deploying the Cloud SQL instance into a VPC network, so it gets a private IP address, and then using the Cloud Run VPC Access connector (currently beta) so the Cloud Run container can reach that VPC network, which includes the SQL database's IP address directly (no IAM permissions needed). Follow https://cloud.google.com/vpc/docs/configure-serverless-vpc-access
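For the first path, here is a minimal, hypothetical sketch of what the application side can look like once the instance is attached to the service (for example via --add-cloudsql-instances). It assumes a MySQL instance, the pymysql driver, and placeholder environment variable names:
# Sketch only: connect SQLAlchemy to Cloud SQL over the Unix socket that
# Cloud Run mounts under /cloudsql/<INSTANCE_CONNECTION_NAME>.
# Requires SQLAlchemy 1.4+ (for URL.create) and the pymysql driver.
import os
import sqlalchemy

db_user = os.environ["DB_USER"]                      # placeholder names
db_pass = os.environ["DB_PASS"]
db_name = os.environ["DB_NAME"]
instance = os.environ["INSTANCE_CONNECTION_NAME"]    # project:region:instance

engine = sqlalchemy.create_engine(
    sqlalchemy.engine.url.URL.create(
        drivername="mysql+pymysql",
        username=db_user,
        password=db_pass,
        database=db_name,
        query={"unix_socket": f"/cloudsql/{instance}"},
    )
)

with engine.connect() as conn:
    # simple connectivity check
    print(conn.execute(sqlalchemy.text("SELECT 1")).scalar())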
Cloud SQL Proxy solution
I use the cloud-sql-proxy to create a local unix socket file in the workspace directory provided by Cloud Build.
Here are the main steps:
Pull a Berglas container, populating its call with the _VAR1 substitution (an environment variable called CMCREDENTIALS that I've encrypted using Berglas). Add as many of these _VAR{n} as you require.
Install the Cloud SQL Proxy via wget.
Run an intermediate step (tests for this build). This step uses the variables stored in the provided temporary /workspace directory.
Build your image.
Push your image.
Deploy to Cloud Run, including the flag --set-env-vars.
The full cloudbuild.yaml
# basic cloudbuild.yaml
steps:
  # pull the berglas container and write the secrets to temporary files
  # under /workspace
  - name: gcr.io/berglas/berglas
    id: 'Install Berglas'
    env:
      - '${_VAR1}=berglas://${_BUCKET_ID_SECRETS}/${_VAR1}?destination=/workspace/${_VAR1}'
    args: ["exec", "--", "/bin/sh"]
  # install the cloud sql proxy
  - id: 'Install Cloud SQL Proxy'
    name: alpine:latest
    entrypoint: sh
    args:
      - "-c"
      - "\
        wget -O /workspace/cloud_sql_proxy \
        https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 && \
        sleep 2 && \
        chmod +x /workspace/cloud_sql_proxy"
    waitFor: ['-']
  # using the secrets from above, build and run the test suite
  - name: 'python:3.8.3-slim'
    id: 'Run Unit Tests'
    entrypoint: '/bin/bash'
    args:
      - "-c"
      - "\
        (/workspace/cloud_sql_proxy -dir=/workspace/${_SQL_PROXY_PATH} -instances=${_INSTANCE_NAME1} & sleep 2) && \
        apt-get update && apt-get install -y --no-install-recommends \
        build-essential libssl-dev libffi-dev libpq-dev python3-dev wget && \
        rm -rf /var/lib/apt/lists/* && \
        export ${_VAR1}=$(cat /workspace/${_VAR1}) && \
        export INSTANCE_NAME1=${_INSTANCE_NAME1} && \
        export SQL_PROXY_PATH=/workspace/${_SQL_PROXY_PATH} && \
        pip install -r dev-requirements.txt && \
        pip install -r requirements.txt && \
        python -m pytest -v && \
        rm -rf /workspace/${_SQL_PROXY_PATH} && \
        echo 'Removed Cloud SQL Proxy'"
    waitFor: ['Install Cloud SQL Proxy', 'Install Berglas']
    dir: '${_APP_DIR}'
  # Using the application/Dockerfile build instructions, build the app image
  - name: 'gcr.io/cloud-builders/docker'
    id: 'Build Application Image'
    args: ['build',
           '-t',
           'gcr.io/$PROJECT_ID/${_IMAGE_NAME}',
           '.',
    ]
    dir: '${_APP_DIR}'
  # Push the application image
  - name: 'gcr.io/cloud-builders/docker'
    id: 'Push Application Image'
    args: ['push',
           'gcr.io/$PROJECT_ID/${_IMAGE_NAME}',
    ]
  # Deploy the application image to Cloud Run
  # populating secrets via Berglas exec ENTRYPOINT for gunicorn
  - name: 'gcr.io/cloud-builders/gcloud'
    id: 'Deploy Application Image'
    args: ['beta',
           'run',
           'deploy',
           '${_IMAGE_NAME}',
           '--image',
           'gcr.io/$PROJECT_ID/${_IMAGE_NAME}',
           '--region',
           'us-central1',
           '--platform',
           'managed',
           '--quiet',
           '--add-cloudsql-instances',
           '${_INSTANCE_NAME1}',
           '--set-env-vars',
           'SQL_PROXY_PATH=/${_SQL_PROXY_PATH},INSTANCE_NAME1=${_INSTANCE_NAME1},${_VAR1}=berglas://${_BUCKET_ID_SECRETS}/${_VAR1}',
           '--allow-unauthenticated',
           '--memory',
           '512Mi'
    ]
# Use the defaults below which can be changed at the command line
substitutions:
  _IMAGE_NAME: your-image-name
  _BUCKET_ID_SECRETS: your-bucket-for-berglas-secrets
  _INSTANCE_NAME1: project-name:location:dbname
  _SQL_PROXY_PATH: cloudsql
  _VAR1: CMCREDENTIALS
# The images we'll push here
images: [
  'gcr.io/$PROJECT_ID/${_IMAGE_NAME}'
]
Dockerfile utilized
The Dockerfile below builds a Python app from the source contained in the directory <myrepo>/application. It sits at application/Dockerfile.
# Use the official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.8.3-slim
# Add build arguments
# Copy local code to the container image.
ENV APP_HOME /application
WORKDIR $APP_HOME
# Install production dependencies.
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    libpq-dev \
    python3-dev \
    libssl-dev \
    libffi-dev \
    && rm -rf /var/lib/apt/lists/*
# Copy the application source
COPY . ./
# Install Python dependencies
RUN pip install -r requirements.txt --no-cache-dir
# Grab Berglas from Google Cloud Registry
COPY --from=gcr.io/berglas/berglas:latest /bin/berglas /bin/berglas
# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
ENTRYPOINT exec /bin/berglas exec -- gunicorn --bind :$PORT --workers 1 --threads 8 app:app
Hope this helps someone, though it is possibly too specific (Python + Berglas) for the OP.

Bitbucket Pipelines to build Java app, Docker image and push it to AWS ECR?

I am setting up Bitbucket Pipelines for my Java app, and what I want to achieve is: whenever I merge something into the master branch, Bitbucket fires the pipeline, which in the first step builds and tests my application, and in the second step builds a Docker image from it and pushes it to ECR. The problem is that the second step can't use the JAR file made in the first step, because every step runs in a separate, fresh Docker container. Any ideas how to solve this?
My current files are:
1) bitbucket-pipelines.yml
pipelines:
  branches:
    master:
      - step:
          name: Build and test application
          services:
            - docker
          image: openjdk:11
          caches:
            - gradle
          script:
            - apt-get update
            - apt-get install -y python-pip
            - pip install --no-cache-dir docker-compose
            - bash ./gradlew clean build test testIntegration
      - step:
          name: Build and push image
          services:
            - docker
          image: atlassian/pipelines-awscli
          caches:
            - gradle
          script:
            - echo $(aws ecr get-login --no-include-email --region us-west-2) > login.sh
            - sh login.sh
            - docker build -f Dockerfile -t my-application .
            - docker tag my-application:latest 212234103948.dkr.ecr.us-west-2.amazonaws.com/my-application:latest
            - docker push 212234103948.dkr.ecr.us-west-2.amazonaws.com/my-application:latest
2) Dockerfile:
FROM openjdk:11
VOLUME /tmp
EXPOSE 8080
COPY build/libs/*.jar app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
And the error I receive:
Step 4/5 : COPY build/libs/*.jar app.jar
COPY failed: no source files were specified
I have found the solution, and it's quite simple: use the "artifacts" feature. Adding the following lines to the first step:
artifacts:
  - build/libs/*.jar
solves the problem.
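For context, here is a rough sketch of how the first step could look with artifacts added (names and paths are taken from the pipeline above; treat it as illustrative rather than a drop-in file):
- step:
    name: Build and test application
    image: openjdk:11
    caches:
      - gradle
    script:
      - bash ./gradlew clean build test testIntegration
    artifacts:
      # files matching these globs are saved and restored into the
      # clone directory of the following steps
      - build/libs/*.jar
The second step then finds the JAR under build/libs/, so the Dockerfile's COPY build/libs/*.jar app.jar works unchanged.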

Running Cypress in Google Cloud Build

I need to run Cypress e2e tests in Google Cloud Build. I get an error that I need to install Cypress's dependencies when I just run the id: End to End Test step. So I attempted to install the dependencies, but this occurs:
E: Unable to locate package libasound2'
E: Unable to locate package libxss1
E: Unable to locate package libnss3
E: Unable to locate package libgconf-2-4
E: Unable to locate package libnotify-dev
E: Couldn't find any package by regex 'libgtk2.0-0'
E: Couldn't find any package by glob 'libgtk2.0-0'
E: Unable to locate package libgtk2.0-0
E: Unable to locate package xvfb
Reading state information...
Building dependency tree...
Reading package lists...
Status: Downloaded newer image for ubuntu:latest
Digest: sha256:eb70667a801686f914408558660da753cde27192cd036148e58258819b927395
latest: Pulling from library/ubuntu
Using default tag: latest
Pulling image: ubuntu
How can I run cypress in Google Cloud Build?
cloudbuild.yaml
steps:
  ... npm setup ...
  - name: 'ubuntu'
    id: Install Cypress Dependencies
    args:
      [
        'apt-get',
        'install',
        'xvfb',
        'libgtk2.0-0',
        'libnotify-dev',
        'libgconf-2-4',
        'libnss3',
        'libxss1',
        libasound2',
      ]
  - name: 'gcr.io/cloud-builders/npm:current'
    id: End to End Test
    args: ['run', 'e2e:gcb']
So the issue with what you have is that steps are meant to be isolated from one another. Running apt-get update works, but the result does not persist into the step where you apt-get install the required dependencies. Only data in the project directory (which defaults to /workspace) is persisted between steps.
Rather than trying to figure out a workaround for that, I was able to get Cypress running in Google Cloud Build by using the Cypress Docker image. One thing to note is that you will also have to cache the Cypress install inside the workspace folder during the npm install step. You'll also probably want to add the .tmp directory to your .gcloudignore.
- name: node
  id: Install Dependencies
  entrypoint: yarn
  args: ['install']
  env:
    - 'CYPRESS_CACHE_FOLDER=/workspace/.tmp/Cypress'
And then you can run the tests like so
- name: docker
  id: Run E2Es
  args:
    [
      'run',
      '--workdir',
      '/e2e',
      '--volume',
      '/workspace:/e2e',
      '--env',
      'CYPRESS_CACHE_FOLDER=/e2e/.tmp/Cypress',
      '--ipc',
      'host',
      'cypress/included:3.2.0'
    ]
Or, if you want to run a custom command rather than the default cypress run, you can do
- name: docker
  id: Run E2Es
  args:
    [
      'run',
      '--entrypoint',
      'yarn',
      '--workdir',
      '/e2e',
      '--volume',
      '/workspace:/e2e',
      '--env',
      'CYPRESS_CACHE_FOLDER=/e2e/.tmp/Cypress',
      '--ipc',
      'host',
      'cypress/included:3.2.0',
      'e2e',
    ]
Let's break this down....
name: docker tells Cloud Build to use the Docker Cloud Builder
--workdir /e2e tells docker to use a /e2e directory in the container during the run
--volume /workspace:/e2e points the /e2e working directory used by docker to the /workspace working directory used by cloud build
--env CYPRESS_CACHE_FOLDER=/e2e/.tmp/Cypress tells cypress to point at /e2e/.tmp/Cypress for the Cypress cache.
--ipc host fixes issues with Cypress crashing during the test run
cypress/included:3.2.0 the Cypress Docker image which includes cypress and the browsers
And if you are running your own script:
--entrypoint yarn overrides the default entrypoint in the cypress/included Dockerfile (which, remember, is cypress run)
e2e is the yarn script you'd like to run for your end-to-end tests
Hope this helps! I spent over a week trying to get this to work so I figured I'd help out anyone else facing the same issue :)
Running Cypress in Google Cloud Build (now) works fine with:
steps:
  # install dependencies
  - id: install-dependencies
    name: node
    entrypoint: yarn
    args: ['install']
    env:
      - 'CYPRESS_CACHE_FOLDER=/workspace/.tmp/Cypress'
  # run cypress
  - id: run-cypress
    name: cypress/included:7.0.1
    entrypoint: yarn
    args: ['run', 'vue-cli-service', 'test:e2e', '--headless']
    env:
      - 'CYPRESS_CACHE_FOLDER=/workspace/.tmp/Cypress'
options:
  machineType: 'E2_HIGHCPU_8'
Note:
There is no cypress/included:latest tag, so the pinned tag needs to be kept up to date
Uses the E2_HIGHCPU_8 machine type, as the default only provides 1 vCPU and 4 GB of RAM
The example args are for Vue, but anything supported by the cypress/included image can be executed
I'm not familiar with GCB, but you probably need to run apt-get update before you can apt-get install. Try:
steps:
  ... npm setup ...
  - name: 'ubuntu'
    id: Update apt index
    args:
      [
        'apt-get',
        'update',
      ]
  - name: 'ubuntu'
    id: Install Cypress Dependencies
    args:
      [
        'apt-get',
        'install',
        'xvfb',
        'libgtk2.0-0',
        'libnotify-dev',
        'libgconf-2-4',
        'libnss3',
        'libxss1',
        'libasound2',
      ]
  - name: 'gcr.io/cloud-builders/npm:current'
    id: End to End Test
    args: ['run', 'e2e:gcb']
Also, note that you have a typo on libasound2' (the opening quote is missing) :)

Why isn't Kaniko able to push multi-stage Docker Image?

Building the following Dockerfile on GitLab CI using Kaniko results in the error: error pushing image: failed to push to destination eu.gcr.io/stritzke-enterprises/eliah-speech-server:latest: Get https://eu.gcr.io/...: exit status 1
If I remove the first FROM, RUN and COPY --from statements from the Dockerfile, the Docker image is built and pushed as expected. If I execute the Kaniko build using Docker on my local machine, everything works as expected. I run other Kaniko builds and pushes on the same GitLab CI runner with the same GCE service account credentials.
What is going wrong with the GitLab CI based Kaniko build?
Dockerfile
FROM alpine:latest as alpine
RUN apk add -U --no-cache ca-certificates
FROM scratch
COPY --from=alpine /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY binaries/speech-server /speech-server
EXPOSE 8080
ENTRYPOINT ["/speech-server"]
CMD ["serve", "-t", "$GOOGLE_ACCESS_TOKEN"]
GitLab CI build stage
buildDockerImage:
  stage: buildImage
  dependencies:
    - build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  variables:
    GOOGLE_APPLICATION_CREDENTIALS: /secret.json
  script:
    - echo "$GCR_SERVICE_ACCOUNT_KEY" > /secret.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $DOCKER_IMAGE:latest -v debug
  only:
    - branches
  except:
    - master
As tdensmore pointed out, this was most likely an authentication issue.
So for everyone who has come here: the following Dockerfile and Kaniko call work just fine, which shows that multi-stage builds themselves are not the problem.
FROM ubuntu:latest as ubuntu
RUN echo "Foo" > /foo.txt
FROM ubuntu:latest
COPY --from=ubuntu /foo.txt /
CMD ["/bin/cat", "/foo.txt"]
The Dockerfile can be built by running
docker run -v $(pwd):/workspace gcr.io/kaniko-project/executor:latest --context /workspace --no-push
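If you also want the local run to push to GCR, a sketch along these lines should work (the key path and image name are placeholders; it relies on Kaniko picking up the service account key via GOOGLE_APPLICATION_CREDENTIALS, as in the CI job above):
# key path and destination are placeholders
docker run \
  -v "$(pwd)":/workspace \
  -v "$(pwd)/service-account.json":/secret.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/secret.json \
  gcr.io/kaniko-project/executor:latest \
  --context /workspace \
  --destination eu.gcr.io/<your-project>/<your-image>:latest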

How to pass artifact build from gitlab ci to dockerfile?

I need a way to pass a job artifact from GitLab CI to a Dockerfile so I can copy it into a directory. What is the path where this artifact is located?
Thank you!
Steps:
Use artifacts in the stage that produces the files.
Use dependencies to pull those artifacts into the current stage.
The files are then present in the build context, so the Dockerfile can see them.
For example, I build a Vue.js project; the main flow is:
Stage 1: build. Run npm run build:prod in the Vue.js project:
build-dist:
  stage: build-dist
  image: node
  script:
    - npm run build:prod
  artifacts:
    paths:
      - dist/
Stage 2: use dependencies:
build-docker:
  stage: build-docker
  image: docker:stable
  script:
    - docker build -t my-image .  # image name/tag is a placeholder
  dependencies:
    - build-dist
Stage 3: copy dist in the Dockerfile:
FROM fholzer/nginx-brotli
COPY ./dist /usr/share/nginx/html
COPY ./nginx.conf /etc/nginx/nginx.conf
You should use dependencies; the docs also state that job artifacts are passed to following jobs by default:
"The artifacts from the previous jobs will be downloaded and extracted in the context of the build."
You can use RUN --mount=type=secret (see "Build images with BuildKit" in the Docker docs).
Here is an example showing how to make credentials available during the build without baking them into the image.
This Dockerfile:
# syntax = docker/dockerfile:experimental
# (a base image is required; the original snippet omitted the FROM line)
FROM python:3
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials \
    cat /root/.aws/credentials
This is the CI command (the --secret flag requires BuildKit, hence DOCKER_BUILDKIT=1):
$ DOCKER_BUILDKIT=1 docker build -t test --secret id=aws,src=$HOME/.aws/credentials .