Google Cloud Build show no logs - google-cloud-platform

I have a Google Cloud trigger that runs a Cloud Build on every GitHub push.
The problem is that the build produces no logs. I followed this doc, but I cannot find any logs in either the Cloud Build log or the Logs Explorer.
This is my cloudbuild.yaml:
steps:
  # install dependencies
  - name: node:16
    entrypoint: yarn
    args: []
  # create .env file
  - name: 'ubuntu'
    args: ['bash', './makeEnv.sh']
    env:
      - 'GCP_SHOPIFY_STOREFRONT_ACCESS_TOKEN=$_GCP_SHOPIFY_STOREFRONT_ACCESS_TOKEN'
      - 'GCP_SHOPIFY_DOMAIN=$_GCP_SHOPIFY_DOMAIN'
  # build code
  - name: node:16
    entrypoint: yarn
    args: ['build']
  # deploy to gcp
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: 'bash'
    args: ['-c', 'gcloud config set app/cloud_build_timeout 1600 && gcloud app deploy --promote']
timeout: "1600s"
options:
  logging: CLOUD_LOGGING_ONLY
The build fails, but it actually creates a subsequent App Engine build that successfully deploys a version to App Engine. That version, however, is not auto-promoted.

I do not have all the details, so I am trying to help with the information above.
Since you are using CLOUD_LOGGING_ONLY, cannot see the logs in the Logs Explorer, and presumably have the permissions needed to read logs, I would suggest looking at the service account Cloud Build runs as. If it is not the default Cloud Build service account (PROJECT_NUMBER@cloudbuild.gserviceaccount.com), it must at least have the roles/logging.logWriter role, or the logging.logEntries.create permission.
Hope this helps :)
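If the role is missing, the grant can be sketched like this with the gcloud CLI (PROJECT_ID and SA_EMAIL are placeholders for your project ID and the service account the build runs as):

```
# Grant the Cloud Logging writer role to the service account used by Cloud Build.
# Replace PROJECT_ID and SA_EMAIL with your own values.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:SA_EMAIL" \
  --role="roles/logging.logWriter"
```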

In my case, looking at the Cloud Build service account (PROJECT_NUMBER@cloudbuild.gserviceaccount.com) in the Google Cloud IAM console, it was missing the Cloud Build Service Account role. I was also missing logs.
Adding that role fixed the symptom of a Cloud Functions deploy failing with the message:
(gcloud.functions.deploy) OperationError: code=3, message=Build failed: {
  "metrics": {},
  "error": {
    "buildpackId": "",
    "buildpackVersion": "",
    "errorType": "OK",
    "canonicalCode": "OK",
    "errorId": "",
    "errorMessage": ""
  }
}

Related

403 trying to run terraform from Gitlab without json file

After a pile of troubleshooting, I managed to get my GitLab CI/CD pipeline to connect to GCP without requiring my service account to use a JSON key. However, I'm unable to do anything with Terraform in my pipeline using a remote state file because of the following error:
Error: Failed to get existing workspaces: querying Cloud Storage failed: googleapi: Error 403: Insufficient Permission, insufficientPermissions
My gitlab-ci.yml file is defined as follows:
stages:
  - auth
  - validate

gcp-auth:
  stage: auth
  image: google/cloud-sdk:slim
  script:
    - echo ${CI_JOB_JWT_V2} > .ci_job_jwt_file
    - gcloud iam workload-identity-pools create-cred-config ${GCP_WORKLOAD_IDENTITY_PROVIDER}
      --service-account="${GCP_SERVICE_ACCOUNT}"
      --output-file=.gcp_temp_cred.json
      --credential-source-file=.ci_job_jwt_file
    - gcloud auth login --cred-file=`pwd`/.gcp_temp_cred.json
    - gcloud auth list

tf-stuff:
  stage: validate
  image:
    name: hashicorp/terraform:light
    entrypoint:
      - '/usr/bin/env'
      - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
  before_script:
    - export TF_LOG=DEBUG
    - cd terraform
    - rm -rf .terraform
    - terraform --version
    - terraform init
  script:
    - terraform validate
My gcp-auth job is running successfully from what I can see:
Authenticated with external account credentials for: [[MASKED]].
I've also gone as far as adding a gsutil cp command inside the gcp-auth job to make sure I can access the desired bucket, which I can: I can successfully edit the contents of the bucket where my Terraform state file is stored.
I'm fairly new to GitLab CI/CD pipelines. Is there something I need to do to tie the gcp-auth job to the tf-stuff job? It's as if that job does not know the pipeline was previously authenticated using the service account.
Thanks!
As mentioned by other posters, GitLab jobs run independently and don't share environment variables or a filesystem, so to preserve login state between jobs you have to persist that state somehow.
I wrote a blog post with a working example: https://ael-computas.medium.com/gcp-workload-identity-federation-on-gitlab-passing-authentication-between-jobs-ffaa2d51be2c
I have done it the way GitHub Actions does it: by storing temporary credentials as artifacts. By setting the correct environment variables you should be able to keep the logged-in state (gcloud will implicitly refresh your token) without having to create a base image containing everything. For this to work, all jobs must run the gcp_auth_before method or extend the auth job, and the _auth/ artifacts must be preserved between jobs.
In the sample below you can see that the login state is preserved across two jobs, while only the first one actually signs in. I have used this together with Terraform images for further steps, and it works like a charm so far.
This is very early, so hardening might be required for production.
Hope this example gives you some ideas on how to solve this!
.gcp_auth_before: &gcp_auth_before
  - export GOOGLE_APPLICATION_CREDENTIALS=$CI_PROJECT_DIR/_auth/.gcp_temp_cred.json
  - export CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE=$CI_PROJECT_DIR/_auth/.gcp_temp_cred.json
  - export GOOGLE_GHA_CREDS_PATH=$CI_PROJECT_DIR/_auth/.gcp_temp_cred.json
  - export GOOGLE_CLOUD_PROJECT=$(cat $CI_PROJECT_DIR/_auth/.GOOGLE_CLOUD_PROJECT)
  - export CLOUDSDK_PROJECT=$(cat $CI_PROJECT_DIR/_auth/.GOOGLE_CLOUD_PROJECT)
  - export CLOUDSDK_CORE_PROJECT=$(cat $CI_PROJECT_DIR/_auth/.GOOGLE_CLOUD_PROJECT)
  - export GCP_PROJECT=$(cat $CI_PROJECT_DIR/_auth/.GOOGLE_CLOUD_PROJECT)
  - export GCLOUD_PROJECT=$(cat $CI_PROJECT_DIR/_auth/.GOOGLE_CLOUD_PROJECT)

.gcp-auth:
  artifacts:
    paths:
      - _auth/
  before_script:
    *gcp_auth_before

stages:
  - auth
  - debug

auth:
  stage: auth
  image: "google/cloud-sdk:slim"
  variables:
    SERVICE_ACCOUNT_EMAIL: "... service account email ..."
    WORKLOAD_IDENTITY_PROVIDER: "projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL/providers/PROVIDER"
    GOOGLE_CLOUD_PROJECT: "... project id ..."
  artifacts:
    paths:
      - _auth/
  script:
    - |
      mkdir -p _auth
      echo "$CI_JOB_JWT_V2" > $CI_PROJECT_DIR/_auth/.ci_job_jwt_file
      echo "$GOOGLE_CLOUD_PROJECT" > $CI_PROJECT_DIR/_auth/.GOOGLE_CLOUD_PROJECT
      gcloud iam workload-identity-pools create-cred-config \
        $WORKLOAD_IDENTITY_PROVIDER \
        --service-account=$SERVICE_ACCOUNT_EMAIL \
        --service-account-token-lifetime-seconds=600 \
        --output-file=$CI_PROJECT_DIR/_auth/.gcp_temp_cred.json \
        --credential-source-file=$CI_PROJECT_DIR/_auth/.ci_job_jwt_file
      gcloud config set project $GOOGLE_CLOUD_PROJECT
    - "export GOOGLE_APPLICATION_CREDENTIALS=$CI_PROJECT_DIR/_auth/.gcp_temp_cred.json"
    - "gcloud auth login --cred-file=$GOOGLE_APPLICATION_CREDENTIALS"
    - gcloud auth list # DEBUG!!

debug:
  extends: .gcp-auth
  stage: debug
  image: "google/cloud-sdk:slim"
  script:
    - env
    - gcloud auth list
    - gcloud storage ls
Your two GitLab jobs run on separate pods with the Kubernetes runner, so the tf-stuff job doesn't see the authentication done in the gcp-auth job.
To solve this, you can put the authentication logic in a separate shell script and reuse that script in the two GitLab jobs. Example:
Authentication shell script gcp_authentication.sh:
echo ${CI_JOB_JWT_V2} > .ci_job_jwt_file
gcloud iam workload-identity-pools create-cred-config ${GCP_WORKLOAD_IDENTITY_PROVIDER} \
  --service-account="${GCP_SERVICE_ACCOUNT}" \
  --output-file=.gcp_temp_cred.json \
  --credential-source-file=.ci_job_jwt_file
gcloud auth login --cred-file=`pwd`/.gcp_temp_cred.json
gcloud auth list
# Check if you need to set the GOOGLE_APPLICATION_CREDENTIALS env var to `pwd`/.gcp_temp_cred.json
For the tf-stuff job, you can create a custom Docker image containing both gcloud and Terraform, because the hashicorp/terraform image doesn't contain the gcloud CLI natively.
Your Docker image can be pushed to the GitLab registry.
Your Gitlab yml file :
stages:
  - auth
  - validate

gcp-auth:
  stage: auth
  image: google/cloud-sdk:slim
  script:
    - . ./gcp_authentication.sh

tf-stuff:
  stage: validate
  image:
    name: yourgitlabregistry/your-custom-image:1.0.0
    entrypoint:
      - '/usr/bin/env'
      - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
  before_script:
    - . ./gcp_authentication.sh
    - export TF_LOG=DEBUG
    - cd terraform
    - rm -rf .terraform
    - terraform --version
    - terraform init
  script:
    - terraform validate
Some explanations:
The same shell script, gcp_authentication.sh, is used in the two GitLab jobs.
A custom Docker image containing Terraform and the gcloud CLI is used in the job handling the Terraform part. This image can be pushed to the GitLab registry.
In the authentication shell script, check whether you need to set the GOOGLE_APPLICATION_CREDENTIALS env var to `pwd`/.gcp_temp_cred.json.
You have to grant your service account the role needed to use GitLab with Workload Identity:
roles/iam.workloadIdentityUser
You can check this example project and the documentation.
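A minimal sketch of such a custom image, assuming a Debian-based Cloud SDK base image; the Terraform version below is only an example, pin the one your project actually uses:

```
# Start from the Cloud SDK image, which already ships the gcloud CLI,
# and add a pinned Terraform binary on top.
FROM google/cloud-sdk:slim

# TERRAFORM_VERSION is an illustrative placeholder.
ARG TERRAFORM_VERSION=1.5.7
RUN apt-get update && apt-get install -y --no-install-recommends curl unzip \
    && curl -fsSL -o /tmp/terraform.zip \
       https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip \
    && unzip /tmp/terraform.zip -d /usr/local/bin \
    && rm /tmp/terraform.zip
```

Build and push it to your GitLab registry, then reference it in the tf-stuff job's image field.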

How to use cloud build to deploy cloud run with cloud sql on google cloud?

My cloudbuild.yaml file (I have built a Docker image and pushed it to GCR).
This application uses MySQL on Cloud SQL, so it needs to connect to it.
steps:
  - id: cloud-run
    name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args:
      - 'run'
      - 'deploy'
      - 'my-service'
      - '--image'
      - 'asia.gcr.io/$_PROJECT_ID/my-service:$_COMMIT_SHA'
      - '--region'
      - 'asia-northeast1'
      - '--platform'
      - 'managed'
      - '--service-account'
      - '$_CLOUD_RUN_PUBSUB_INVOKER'
      - '--add-cloudsql-instances'
      - '$_MYSQL_MAIN_INSTANCE_NAME'
      - '--set-env-vars'
      - 'MYSQL_MAIN_CONNECTIONS=$_MYSQL_MAIN_CONNECTIONS'
      - '--set-env-vars'
      - 'MYSQL_MAIN_INSTANCE_NAME=$_MYSQL_MAIN_INSTANCE_NAME'
      - '--set-env-vars'
      - 'MYSQL_MAIN_DB=$_MYSQL_MAIN_DB'
      - '--set-env-vars'
      - 'MYSQL_MAIN_USER=$_MYSQL_MAIN_USER'
      - '--set-env-vars'
      - 'MYSQL_MAIN_PASSWORD_SECRET_ID=$_MYSQL_MAIN_PASSWORD_SECRET_ID'
      - '--set-env-vars'
When I ran the build, I got a "Cloud SQL API not activated" error:
$ gcloud builds submit
Creating temporary tarball archive of 5 file(s) totalling 47.4 KiB before compression.
Uploading tarball of [.] to [gs://my-project_cloudbuild/source/1610067564.911628-8d7f3de581ca4b8faa57bd5a8ea75ef1.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/my-project/locations/global/builds/b4e1bf9c-bc06-4ce8-b252-3b34f164719d].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/b4e1bf9c-bc06-4ce8-b252-3b34f164719d?project=421686839359].
---------------------------------------------------------------------------------------------- REMOTE BUILD OUTPUT -----------------------------------------------------------------------------------------------
starting build "b4e1bf9c-bc06-4ce8-b252-3b34f164719d"
FETCHSOURCE
Fetching storage object: gs://my-project_cloudbuild/source/1610067564.911628-8d7f3de581ca4b8faa57bd5a8ea75ef1.tgz#1610067566084932
Copying gs://my-project_cloudbuild/source/1610067564.911628-8d7f3de581ca4b8faa57bd5a8ea75ef1.tgz#1610067566084932...
/ [1 files][ 17.1 KiB/ 17.1 KiB]
Operation completed over 1 objects/17.1 KiB.
BUILD
Pulling image: gcr.io/google.com/cloudsdktool/cloud-sdk
Using default tag: latest
latest: Pulling from google.com/cloudsdktool/cloud-sdk
6c33745f49b4: Already exists
...
ffa0764d79dc: Pull complete
Digest: sha256:3f32cb39cdfe8902bc85e31111a9f1bc7cbd9d37f31c6164f2b41cfdaa66284f
Status: Downloaded newer image for gcr.io/google.com/cloudsdktool/cloud-sdk:latest
gcr.io/google.com/cloudsdktool/cloud-sdk:latest
Skipped validating Cloud SQL API and Cloud SQL Admin API enablement due to an issue contacting the Service Usage API. Please ensure the Cloud SQL API and Cloud SQL Admin API are activated (see https://console.cloud.google.com/apis/dashboard).
ERROR: (gcloud.run.deploy) PERMISSION_DENIED: The caller does not have permission
ERROR
ERROR: build step 0 "gcr.io/google.com/cloudsdktool/cloud-sdk" failed: step exited with non-zero status: 1
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ERROR: (gcloud.builds.submit) build b4e1bf9c-bc06-4ce8-b252-3b34f164719d completed with status "FAILURE"
I have checked the dashboard https://console.cloud.google.com/apis/dashboard; both the Cloud SQL API and the Cloud SQL Admin API are activated.
I also ran the permission setup from https://cloud.google.com/cloud-build/docs/deploying-builds/deploy-cloud-run#continuous-iam:
gcloud iam service-accounts add-iam-policy-binding \
  PROJECT_NUMBER-compute@developer.gserviceaccount.com \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/iam.serviceAccountUser"
But still the same error.
It seems the error is about an IAM permission:
PERMISSION_DENIED: The caller does not have permission
You also need to follow the Required IAM permissions steps in this document.
To deploy to Cloud Run (fully managed), grant the Cloud Run Admin and Service Account User roles to the Cloud Build service account:
In the Cloud Console, go to the Cloud Build Settings page (Open the Settings page).
In the Service account permissions panel, set the status of the Cloud Run Admin role to ENABLED.
In the "Additional steps may be required" pop-up, click Skip or click GRANT ACCESS TO ALL SERVICE ACCOUNTS.
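The same grants can also be sketched with the gcloud CLI (PROJECT_ID and PROJECT_NUMBER are placeholders for your project ID and project number):

```
# Allow the Cloud Build service account to administer Cloud Run services.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/run.admin"

# Allow it to act as the runtime service account of the Cloud Run service.
gcloud iam service-accounts add-iam-policy-binding \
  PROJECT_NUMBER-compute@developer.gserviceaccount.com \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/iam.serviceAccountUser"
```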

Google cloud run build permission denied

When trying to run gcloud builds submit --tag gcr.io/********/*** in order to build a container image, I get:
ERROR: (gcloud.builds.submit) HTTPError 403: Insufficient Permission
I am trying this from a compute VM instance, where gcloud is set up with the service account.
The service account has the following roles:
Cloud Build Service Account,
Cloud Build Editor,
Cloud Scheduler Job Runner,
Cloud SQL Admin,
Editor,
Organization Administrator,
Cloud Run Admin,
Cloud Run Invoker,
Cloud Run Service Agent.
If anyone has any idea why I am getting denied, help would be greatly appreciated.
If you're using the gcloud CLI, verify that your gcloud auth is using the service account.
Then try gcloud builds submit --tag gcr.io/********/*** again.
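A sketch of that check; the key-file path is a placeholder, and on a VM you may instead switch the active account with gcloud config set account:

```
# Show which accounts gcloud knows about; the active one is starred.
gcloud auth list

# If the service account is not active, activate it.
# /path/to/key.json is a placeholder for your key file.
gcloud auth activate-service-account --key-file=/path/to/key.json
```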
If you use Google Cloud Build, add the Docker steps to your cloudbuild.yml:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args:
      - '-c'
      - 'docker pull gcr.io/$PROJECT_ID/$_APP_NAME:latest || exit 0'
  - name: gcr.io/cloud-builders/docker
    args:
      - 'build'
      - '-t'
      - 'gcr.io/$PROJECT_ID/$_APP_NAME:latest'
      - '.'
  - name: gcr.io/cloud-builders/docker
    args:
      - 'push'
      - 'gcr.io/$PROJECT_ID/$_APP_NAME:latest'
images:
  - 'gcr.io/$PROJECT_ID/$_APP_NAME'
timeout: 1200s
substitutions:
  _APP_NAME: 'app_examples'
Reference : https://cloud.google.com/cloud-build/docs/running-builds/start-build-manually

Cloud build permission denied when deploy to cloud run with "--set-sql-instance" argument

I'm trying to configure a Cloud Build trigger that builds a Maven Spring Boot project and then deploys it to Cloud Run. I run into a problem: it works when I don't specify the Cloud SQL instance to connect to, but when I add "--set-cloudsql-instances", "${_DATABASE_CONNECTION_NAME}" to the args, Cloud Build throws the following error:
Step #1: ERROR: (gcloud.beta.run.deploy) PERMISSION_DENIED: The caller does not have permission
Finished Step #1
ERROR
ERROR: build step 1 "gcr.io/cloud-builders/gcloud" failed: exit status 1
Following is my cloudbuild.yml
steps:
  - name: 'gcr.io/kaniko-project/executor:latest'
    args:
      - --destination=gcr.io/$PROJECT_ID/${_IMAGE_NAME}
      - --cache=true
  - name: 'gcr.io/cloud-builders/gcloud'
    args: [
      "beta", "run",
      "deploy", "${_SERVICE_NAME}-${_PROFILE}",
      "--image", "gcr.io/${PROJECT_ID}/${_IMAGE_NAME}",
      "--region", "${_REGION}",
      "--platform", "managed",
      "--set-cloudsql-instances", "${_DATABASE_CONNECTION_NAME}",
      "--allow-unauthenticated",
      "--set-env-vars", "SPRING_PROFILES_ACTIVE=${_SPRING_PROFILE},DATABASE_CONNECTION_NAME=${_DATABASE_CONNECTION_NAME},DATABASE_NAME=${_DATABASE_NAME},DATABASE_USERNAME=${_DATABASE_USERNAME},DATABASE_PASSWORD=${_DATABASE_PASSWORD},MINIO_ACCESS_KEY=${_MINIO_ACCESS_KEY},MINIO_SECRET_KEY=${_MINIO_SECRET_KEY},MINIO_HOSTNAME=${_MINIO_HOSTNAME},MINIO_PORT=${_MINIO_PORT}"
    ]
images:
  - gcr.io/${PROJECT_ID}/${_IMAGE_NAME}
and I have already set roles/permissions for the service accounts as follows:
{PROJECT_ID}-compute@developer.gserviceaccount.com: Editor, Cloud SQL Client <-- default SA
<Cloud Run service agent>: Cloud Run Service Agent, Cloud SQL Client
<Cloud Build SA>: Cloud Build SA, Cloud Run Admin
My Cloud Run service also uses the default service account as its SA.
Make sure you've also given the Cloud Build Service Account the iam.serviceAccountUser role, allowing it to impersonate the Cloud Run runtime service account during the build.
gcloud iam service-accounts add-iam-policy-binding \
  PROJECT_NUMBER-compute@developer.gserviceaccount.com \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/iam.serviceAccountUser"
See Cloud Run deployment permissions for more info.
I am using a service account to deploy a Cloud Run service with SQL connections. I found that the service account needs the following permissions:
serviceusage.quotas.get
serviceusage.services.get
serviceusage.services.list
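One way to grant exactly those three permissions is a custom role; a sketch, where PROJECT_ID, SA_EMAIL, and the role ID cloudRunSqlDeploy are all placeholders:

```
# Create a custom role carrying just the three permissions above.
gcloud iam roles create cloudRunSqlDeploy --project=PROJECT_ID \
  --permissions=serviceusage.quotas.get,serviceusage.services.get,serviceusage.services.list

# Bind it to the deploying service account.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:SA_EMAIL" \
  --role="projects/PROJECT_ID/roles/cloudRunSqlDeploy"
```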

gcloud app deploy --no-promote fails in google cloud build without any logs

gcloud app deploy --no-promote fails in Google Cloud Build without any logs. My cloudbuild.yaml looks like
steps:
  - name: 'gcr.io/cloud-builders/mvn'
    args: ['package']
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['app', 'deploy', '--no-promote']
timeout: "1600s"
The build step passes, but the second step fails immediately without any error message or logs. I am able to run the same command locally and it deploys, but it fails in Cloud Build.