GitLab Cloud Run deploy succeeds but job fails - google-cloud-platform

I'm having an issue with my CI/CD pipeline:
it deploys successfully to GCP Cloud Run, but on the GitLab dashboard the job status is failed.
I tried replacing the image with some other Docker images, but it fails as well.
# File: .gitlab-ci.yml
image: google/cloud-sdk:alpine

deploy_int:
  stage: deploy
  environment: integration
  only:
    - integration # This pipeline stage will run on this branch alone
  script:
    - echo $GCP_SERVICE_KEY > gcloud-service-key.json # Google Cloud service accounts
    - gcloud auth activate-service-account --key-file gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud builds submit . --config=cloudbuild_int.yaml
# File: cloudbuild_int.yaml
steps:
  # build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'build', '--build-arg', 'APP_ENV=int', '-t', 'gcr.io/$PROJECT_ID/tpdropd-int-front', '.' ]
  # push the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'push', 'gcr.io/$PROJECT_ID/tpdropd-int-front' ]
  # deploy to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args: [ 'run', 'deploy', 'tpd-front', '--image', 'gcr.io/$PROJECT_ID/tpdropd-int-front', '--region', 'us-central1', '--platform', 'managed', '--allow-unauthenticated' ]
GitLab build output:
ERROR: (gcloud.builds.submit)
The build is running, and logs are being written to the default logs bucket.
This tool can only stream logs if you are Viewer/Owner of the project and, if applicable, allowed by your VPC-SC security policy.
The default logs bucket is always outside any VPC-SC security perimeter.
If you want your logs saved inside your VPC-SC perimeter, use your own bucket.
See https://cloud.google.com/build/docs/securing-builds/store-manage-build-logs.
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1

I fixed it by adding the following to cloudbuild.yaml:
options:
  logging: CLOUD_LOGGING_ONLY
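For context, a sketch of how this might look in the asker's cloudbuild_int.yaml (the steps are unchanged from the question; the options block sits at the top level of the file, not inside a step):
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'build', '--build-arg', 'APP_ENV=int', '-t', 'gcr.io/$PROJECT_ID/tpdropd-int-front', '.' ]
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'push', 'gcr.io/$PROJECT_ID/tpdropd-int-front' ]
  - name: 'gcr.io/cloud-builders/gcloud'
    args: [ 'run', 'deploy', 'tpd-front', '--image', 'gcr.io/$PROJECT_ID/tpdropd-int-front', '--region', 'us-central1', '--platform', 'managed', '--allow-unauthenticated' ]
options:
  # write build logs to Cloud Logging only, so the client no longer needs
  # to stream them from the default Cloud Storage logs bucket
  logging: CLOUD_LOGGING_ONLY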

Alternatively, you can use this workaround:
Fix it by giving the Viewer role to the service account running the build, although this feels like granting too much permission for such a task.
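If you do go that route, here is a minimal sketch of granting the role with gcloud (the project ID and service account e-mail are placeholders, not values from the question):
# grant the project-level Viewer role to the CI service account (broad, as noted above)
gcloud projects add-iam-policy-binding MY_PROJECT_ID \
  --member="serviceAccount:ci-builder@MY_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/viewer"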

This worked for me: Use --suppress-logs
gcloud builds submit --suppress-logs --tag=<my-tag>

To fix the issue, you just need to create a bucket in your project (by default, without public access) and add the 'Storage Admin' role to your user or service account via https://console.cloud.google.com/iam-admin/iam
After that, you can point gcloud builds submit at the new bucket via the --gcs-log-dir parameter, like this:
gcloud builds submit --gcs-log-dir gs://YOUR_NEW_BUCKET_NAME_HERE ...(other parameters here)
We need a new bucket because the default logs bucket is global (cross-project). That's why it has specific security requirements for access, especially from outside Google Cloud (GitLab, Azure DevOps, and so on) via service accounts.
(Moreover, in this case you don't need to turn off logging via --suppress-logs.)
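A minimal sketch of that flow, assuming a hypothetical bucket name and reusing the asker's submit command:
# create a private logs bucket inside the build project (name is just an example)
gsutil mb -p $GCP_PROJECT_ID gs://my-ci-build-logs
# point Cloud Build at it instead of the default cross-project logs bucket
gcloud builds submit . --config=cloudbuild_int.yaml --gcs-log-dir=gs://my-ci-build-logs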

Kevin's answer worked like magic for me; since I am not able to comment, I am writing this as a new answer.
Initially I was facing the same issue where, in spite of the gcloud builds submit command passing, my GitLab CI job was failing.
Below is the cloudbuild.yaml file where I added the logging option as Kevin suggested.
steps:
  - name: gcr.io/cloud-builders/gcloud
    entrypoint: 'bash'
    args: ['run_query.sh', '${_SCRIPT_NAME}']
options:
  logging: CLOUD_LOGGING_ONLY
Check this document for details: https://cloud.google.com/build/docs/build-config-file-schema#options

The options solution mentioned by Kevin worked for me too. Just add the parameter as mentioned above in the cloudbuild.yaml file.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/myproject/myimage', '.']
options:
  logging: CLOUD_LOGGING_ONLY

Related

How to set up Terraform CI/CD with GCP and GitHub Actions in a multidirectory repository

Introduction
I have a repository with all the infrastructure defined using IaC, separated into folders. For instance, all Terraform configuration is in /terraform/. I want to apply all Terraform files inside that directory from the CI/CD.
Configuration
The GitHub Actions workflow used is shown below:
name: 'Terraform'

on: [push]

permissions:
  contents: read

jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    environment: production

    # Use the Bash shell regardless of whether the GitHub Actions runner is ubuntu-latest, macos-latest, or windows-latest
    defaults:
      run:
        shell: bash
        #working-directory: terraform

    steps:
      # Checkout the repository to the GitHub Actions runner
      - name: Checkout
        uses: actions/checkout@v3

      # Install the latest version of Terraform CLI and configure the Terraform CLI configuration file with a Terraform Cloud user API token
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1

      - id: 'auth'
        uses: 'google-github-actions/auth@v1'
        with:
          credentials_json: '${{ secrets.GCP_CREDENTIALS }}'

      - name: 'Set up Cloud SDK'
        uses: 'google-github-actions/setup-gcloud@v1'

      # Initialize a new or existing Terraform working directory by creating initial files, loading any remote state, downloading modules, etc.
      - name: Terraform Init
        run: terraform init

      # Checks that all Terraform configuration files adhere to a canonical format
      - name: Terraform Format
        run: terraform fmt -check

      # On push to "master", build or change infrastructure according to Terraform configuration files
      # Note: It is recommended to set up a required "strict" status check in your repository for "Terraform Cloud". See the documentation on "strict" required status checks for more information: https://help.github.com/en/github/administering-a-repository/types-of-required-status-checks
      - name: Terraform Apply
        run: terraform apply -auto-approve -input=false
Problem
If I log in and then change into the directory to apply Terraform, it doesn't find the login credentials:
storage.NewClient() failed: dialing: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
On the other hand, if I don't change the directory, then it doesn't find the configuration files, as expected:
Error: No configuration files
I tried moving the Terraform configuration files to the root of the repository and it works. How could I implement this in a multidirectory repository?
Such a feature was requested before. As explained in the issue, the auth file is named as follows: gha-creds-*.json.
Therefore, I added a step just before running Terraform that updates the environment variable and moves the file itself:
- name: 'Setup google auth in multidirectory repo'
  run: |
    echo "GOOGLE_APPLICATION_CREDENTIALS=$GITHUB_WORKSPACE/terraform/`ls -1 $GOOGLE_APPLICATION_CREDENTIALS | xargs basename`" >> $GITHUB_ENV
    mv $GITHUB_WORKSPACE/gha-creds-*.json $GITHUB_WORKSPACE/terraform/
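A sketch of how this step might sit in the workflow from the question, with the Terraform steps pointed at the /terraform/ directory (the per-step working-directory lines are my assumption, based on the commented-out default in the question, not part of the original answer):
      - name: 'Set up Cloud SDK'
        uses: 'google-github-actions/setup-gcloud@v1'

      # move the generated credentials file into the Terraform folder and re-export its path
      - name: 'Setup google auth in multidirectory repo'
        run: |
          echo "GOOGLE_APPLICATION_CREDENTIALS=$GITHUB_WORKSPACE/terraform/`ls -1 $GOOGLE_APPLICATION_CREDENTIALS | xargs basename`" >> $GITHUB_ENV
          mv $GITHUB_WORKSPACE/gha-creds-*.json $GITHUB_WORKSPACE/terraform/

      - name: Terraform Init
        run: terraform init
        working-directory: terraform

      - name: Terraform Apply
        run: terraform apply -auto-approve -input=false
        working-directory: terraform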

Unable to push Helm Chart to Google Cloud Artifact Registry using OCI

I'm trying to push a Helm chart to a Google Cloud OCI registry (Artifact Registry), but I get a forbidden error:
helm push testapp-1.0.0.tgz oci://europe-north1-docker.pkg.dev/project-id/my-artifact-registry/
Error: failed to authorize: failed to fetch anonymous token:
unexpected status: 403 Forbidden
It seems that I'm authenticated OK, since when I try to push it without "oci://" it works fine:
helm chart push europe-north1-docker.pkg.dev/project-id/my-artifact-registry/charts/testapp:1.0.0
The push refers to repository [europe-north1-docker.pkg.dev/..]
ref: europe-north1-docker.pkg.dev/...
digest: 2757354aef8af2db48261d52c17c0df35a99d6fccaf016b0e67e167c391b69c7
size:3.9 KiB
name: testapp
version: 1.0.0
1.0.0: pushed to remote (1 layer, 3.9 KiB total)
I logged in to the Helm registry using a service account JSON key, with the command below:
helm registry login -u _json_key_base64 --password <base_64_key> https://europe-north1-docker.pkg.dev
and this service account has the roles below:
roles/artifactregistry.admin
roles/artifactregistry.repoAdmin
roles/artifactregistry.writer
roles/container.developer
roles/storage.admin
roles/storage.objectViewer
Is there any specific permission that needs to be enabled in GCP to use the "OCI" protocol?
Or any service that needs to be enabled?
Or any different authentication required?
I followed the instructions here but with no success.
It's funny, but this is not the first time this has happened to me... once I submit the question to Stack Overflow, something hits me and I'm able to find the problem with my issue!
Anyway, the problem is basically with the authentication, where the URL to log in to should be in the format:
https://LOCATION-docker.pkg.dev/PROJECT/REPOSITORY
like this:
helm registry login -u _json_key_base64 --password <base_64_key> \
https://europe-north1-docker.pkg.dev/project-id/my-artifact-registry
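After logging in against the full repository path, the original push command from the question should then work as-is:
helm push testapp-1.0.0.tgz oci://europe-north1-docker.pkg.dev/project-id/my-artifact-registry/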
I faced the same issue, but using Cloud Build.
I am glad if this snippet of code can help someone.
steps:
  - name: 'alpine/helm:3.9.1'
    id: 'helm package'
    args: ['package', '.']
  - name: 'alpine/helm:3.9.1'
    id: 'helm push'
    env:
      - 'HELM_REGISTRY_CONFIG=../builder/home/.docker/config.json'
    entrypoint: 'sh'
    args:
      - '-c'
      - |
        helm push --debug mylibchart-*.tgz oci://europe-west3-docker.pkg.dev/$PROJECT_ID/helm-registry
Basically, in the step where we want to push our *.tgz, we need to set the env variable HELM_REGISTRY_CONFIG to the default path of the Docker config.json.
This is kind of silly, but I was transitioning from Container Registry to Artifact Registry and I forgot to give my service account permissions for Artifact Registry, which resulted in this exact error.
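If you end up in the same spot, a minimal sketch of granting push access on the repository with gcloud (repository name, location, and service account are placeholders):
# allow the CI service account to push charts/images to the Artifact Registry repository
gcloud artifacts repositories add-iam-policy-binding my-artifact-registry \
  --location=europe-north1 \
  --member="serviceAccount:ci-builder@MY_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.writer"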

Is there a way to inspect the process.env variables on a Cloud Run service?

After deployment, is there a way to inspect the process.env variables on a running Cloud Run service?
I thought they would be available in the following page:
https://console.cloud.google.com/run/detail
Is there a way to make them available here? Or to inspect it in some other way?
PS: This is a Docker container.
I have the following ENV directives in my Dockerfile, and I know they are present because everything is working as it should. But I cannot see them in the service details:
Dockerfile
ENV NODE_ENV=production
ENV PROJECT_ID=$PROJECT_ID
ENV SERVER_ENV=$SERVER_ENV
I'm using a cloudbuild.yaml file. The ENV directives are present in my Dockerfile, and they are being passed to my container. Maybe I should add env to my cloudbuild.yaml file? I'm using --substitutions on my gcloud builds submit call and they are passed as --build-arg to my Docker build step, but I'm not declaring them as env in my cloudbuild.yaml.
I followed the official documentation and set the environment variables on a Cloud Run service using the console. Then I was able to list them in the Google Cloud Console.
You can set environment variables using the Cloud Console, the gcloud command line, or a YAML file when you create a new service or deploy a new revision:
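A sketch with gcloud, assuming a hypothetical service name and region; variables set this way show up in the service details and can also be read back from the CLI:
# set (or update) runtime environment variables on an existing service
gcloud run services update my-service \
  --region=us-central1 \
  --update-env-vars=NODE_ENV=production,SERVER_ENV=staging
# read the deployed service spec back, including its env section
gcloud run services describe my-service --region=us-central1 --format=yaml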
With the help of @marian.vladoi's answer, this is what I've ended up doing in the deploy step of my cloudbuild.yaml file:
I added the --set-env-vars parameter.
steps:
  # DEPLOY CONTAINER WITH GCLOUD
  - name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
    entrypoint: gcloud
    args:
      - "beta"
      - "run"
      - "deploy"
      - "SERVICE_NAME"
      - "--image=gcr.io/$PROJECT_ID/SERVICE_NAME:$_TAG_NAME"
      - "--platform=managed"
      - "--region=us-central1"
      - "--min-instances=$_MIN_INSTANCES"
      - "--max-instances=3"
      - "--set-env-vars=PROJECT_ID=$PROJECT_ID,SERVER_ENV=$_SERVER_ENV,NODE_ENV=production"
      - "--port=8080"
      - "--allow-unauthenticated"
    timeout: 180s
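Since the question mentions passing --substitutions, this is roughly how the matching submit call could look (the substitution values are placeholders):
gcloud builds submit . --config=cloudbuild.yaml \
  --substitutions=_TAG_NAME=v1.0.0,_MIN_INSTANCES=0,_SERVER_ENV=staging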

Google Cloud Build error no project active

I am trying to set up Google Cloud Build with a really simple project hosted on Firebase, but every time it reaches the deploy stage it tells me:
Error: No project active, but project aliases are available.
Step #2: Run firebase use <alias> with one of these options:
ERROR: build step 2 "gcr.io/host-test-xxxxx/firebase" failed: step exited with non-zero status: 1
I have set the alias to production and my .firebaserc is:
{
  "projects": {
    "default": "host-test-xxxxx",
    "production": "host-test-xxxxx"
  }
}
I have Firebase Admin and API Keys Admin permissions on my Cloud Build service account, and since I also want to use encryption I have Cloud KMS CryptoKey Decrypter as well.
I run
firebase login:ci
to generate a token in my terminal and paste it into my .env variable, then I create an alias called production and run
firebase use production
My yaml is:
steps:
  # Install
  - name: 'gcr.io/cloud-builders/npm'
    args: ['install']
  # Build
  - name: 'gcr.io/cloud-builders/npm'
    args: ['run', 'build']
  # Deploy
  - name: 'gcr.io/host-test-xxxxx/firebase'
    args: ['deploy']
and install and build work fine. What is happening here?
Rerunning firebase init does not seem to help.
Update:
Building locally and then running firebase deploy does not help either.
OK, the thing that worked was changing the .firebaserc file to:
{
  "projects": {
    "default": "host-test-xxxxx"
  }
}
and then running
firebase use --add
and adding an alias called default.
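An alternative that sidesteps aliases entirely (my assumption, not part of the accepted fix): pass the project explicitly to the deploy step, since the Firebase CLI accepts a --project flag:
# Deploy
- name: 'gcr.io/host-test-xxxxx/firebase'
  args: ['deploy', '--project', 'host-test-xxxxx']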

Cannot run Azure CLI task on yaml build

I'm starting to lose my sanity over a YAML build. This is the very first YAML build I've ever tried to configure, so it's likely I'm making some basic mistake.
This is my YAML build definition:
name: ops-tools-delete-failed-containers-$(Date:yyyyMMdd)$(Rev:.rrrr)

trigger:
  branches:
    include:
      - master
      - features/120414-delete-failed-container-instances

schedules:
  - cron: '20,50 * * * *'
    displayName: At minutes 20 and 50
    branches:
      include:
        - features/120414-delete-failed-container-instances
    always: 'true'

pool:
  name: Standard-Windows

variables:
  - name: ResourceGroup
    value: myResourceGroup

stages:
  - stage: Delete
    displayName: Delete containers
    jobs:
      - job: Job1
        steps:
          - task: AzureCLI@2
            inputs:
              azureSubscription: 'CPA (Infrastructure) (xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx)'
              scriptType: 'pscore'
              scriptLocation: 'scriptPath'
              scriptPath: 'General/Automation/ACI/Delete-FailedContainerInstances.ps1'
              arguments: '-ResourceGroup $(ResourceGroup)'
So in short, I want to run a script using an Azure CLI task. When I queue a new build it stays like this forever:
I've tried running the same task with an inline script without success. The same thing happens if I try to run a PowerShell task instead of an Azure CLI task.
What am I missing here?
TL;DR: the issue was caused by (a lack of) permissions.
More details
After enabling the following feature I could see more details about the problem:
The following warning was shown after enabling the feature:
Clicking on View shows the Azure subscription used in the Azure CLI task. After clicking on Permit, everything works as expected.
Cannot run Azure CLI task on yaml build
Your YAML file should be correct. I have tested your YAML on my side and it works fine.
The only place I modified was to change the agent pool to my private agent:
pool:
  name: MyPrivateAgent
Besides, according to the state in your image, it seems the private agent under the agent queue which you specified for the build definition is not running:
Get the agent running, then the build will start.
As a test, you could use a hosted agent instead of your private agent, like:
pool:
  vmImage: 'ubuntu-latest'
Hope this helps.