How to set an environment variable in a cloudbuild.yaml file? - google-cloud-platform

I am trying to set GOOGLE_APPLICATION_CREDENTIALS. Is this the correct way to set an environment variable? Below is my yaml file:
steps:
- name: 'node:10.10.0'
  id: installing_npm
  args: ['npm', 'install']
  dir: 'API/system_performance'
- name: 'node:10.10.0'
  #entrypoint: bash
  args: ['bash', 'set GOOGLE_APPLICATION_CREDENTIALS=test/emc-ema-cp-d-267406-a2af305d16e2.json']
  id: run_test_coverage
  args: ['npm', 'run', 'coverage']
  dir: 'API/system_performance'
Please help me solve this.

You can use the env step parameter.
However, when you execute Cloud Build, the platform uses its own service account (in the future, it will be possible to specify the service account that you want to use).
Thus, if you grant the Cloud Build service account the correct role, you don't need a key file at all (committing one to your Git repository is not really good practice!).
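For illustration, here is a minimal sketch of the second step from the question using the env parameter (keeping the key-file path from the question, although, per the above, you may not want a key file at all):
steps:
- name: 'node:10.10.0'
  id: run_test_coverage
  dir: 'API/system_performance'
  # env entries become environment variables inside this step's container
  env:
  - 'GOOGLE_APPLICATION_CREDENTIALS=test/emc-ema-cp-d-267406-a2af305d16e2.json'
  args: ['npm', 'run', 'coverage']
And if you follow the recommended route instead, granting a role to the Cloud Build service account looks roughly like this (the role is a placeholder; pick whichever one your tests actually need):
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member=serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com \
  --role=roles/ROLE_YOUR_TESTS_NEED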

Related

Access Environment Variables through vite-react and GCP Cloud Build

I have a React application that is Dockerized and built and deployed with Google Cloud Build. I have set environment variables on the Cloud Build trigger, but I am unable to access them within my React application. What am I doing wrong, and how can I access these environment variables in my React application? This is the yaml file:
steps:
- name: gcr.io/cloud-builders/docker
  env:
    - "VITE_PUBLIC_KEY={$_VITE_PUBLIC_KEY}"
    - "VITE_SERVICE_ID={$_VITE_SERVICE_ID}"
    - "VITE_TEMPLATE_ID={$_VITE_TEMPLATE_ID}"
  args:
    - build
    - '--no-cache'
    - '-t'
    - 'image_name'
    - .
    - '-f'
    - Dockerfile.prod
- name: gcr.io/cloud-builders/docker
  args:
    - push
    - 'image_name'
- name: gcr.io/cloud-builders/gcloud
  args:
    - run
    - deploy
    - bob
    - '--image'
    - 'image_name'
    - '--region'
    - $_DEPLOY_REGION
    - '--allow-unauthenticated'
    - '--platform'
    - $_PLATFORM
timeout: 600s
I don't have a backend solution; I just want to be able to access three environment variables within my application on the client side, without declaring a .env file.
I tried declaring the environment variables in Cloud Run as well as in the cloudbuild.yaml file. It works on AWS, but a different problem arises there.
One solution could be to hardcode the environment variables directly into your React code. This is not recommended, as it could lead to security vulnerabilities and make it difficult to change the values in the future without redeploying the entire application. Additionally, it would not be a true solution to accessing the environment variables, as they would not be dynamically set at runtime.
There are different options; the best one (in my opinion) is to store your env variables in Secret Manager and then access them through your code (check this GitHub repo).
. . .
The other option is to access the same secrets you created before while your pipeline is running; the downside is that you always have to redeploy your pipeline to update the env variables.
This is an example:
steps:
# STEP 1 - BUILD CONTAINER 1
- id: Build-container-image-container-one
  name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      docker build -t gcr.io/$PROJECT_ID/container_one -f 'build/container_one.Dockerfile' .
# STEP 2 - PUSH CONTAINER 1
- id: Push-to-Container-Registry-container-one
  name: 'gcr.io/cloud-builders/docker'
  args:
    - push
    - gcr.io/$PROJECT_ID/container_one
  waitFor: ["Build-container-image-container-one"]
# STEP 3 - DEPLOY CONTAINER 1
- id: Deploy-Cloud-Run-container-one
  name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args:
    - run
    - deploy
    - container_one
    - --image=gcr.io/$PROJECT_ID/container_one
    - --region={YOUR REGION}
    - --port={YOUR PORT}
    - --memory=3Gi
    - --cpu=1
  waitFor: ["Build-container-image-container-one", "Push-to-Container-Registry-container-one"]
# STEP 4 - ENV VARIABLES
- id: Accessing-secrets-for-env-variables
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      gcloud secrets versions access latest --secret=ENV_VARIABLE_ONE > key1.txt
      gcloud secrets versions access latest --secret=ENV_VARIABLE_TWO > key2.txt
      gcloud secrets versions access latest --secret=ENV_VARIABLE_THREE > key3.txt
  waitFor: ["Push-to-Container-Registry-container-one", "Build-container-image-container-one"]
# STEP 5 - SETTING KEYS
- id: Setting-keys
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['-c', 'gcloud run services update container_one --region={YOUR REGION} --set-env-vars="ENV_VARIABLE_ONE=$(cat key1.txt),ENV_VARIABLE_TWO=$(cat key2.txt),ENV_VARIABLE_THREE=$(cat key3.txt)"']
images:
  - gcr.io/$PROJECT_ID/container_one

Google Cloud Platform: secret as build env variable

I have a few Google Cloud Functions with some private NPM packages that I need to install during the build phase.
Credentials for the NPM registries are set via an .npmrc file. The token is expected to be an environment variable, as in someUrlToRegistry:/_authToken=${NPM_REGISTRY_TOKEN}
I have this token saved in Secret Manager.
How can I pass this secret as a build environment variable?
I am able to do so as a runtime variable, no problem there, but the build does not see this secret, and the registry returns an unauthorized response.
As per the official documentation, you can add a secretEnv field specifying the environment variable in a build step.
Add an availableSecrets field to specify the secret version and environment variables to use for your secret. You can include substitution variables in the value of the versionName field. You can specify more than one secret in a build.
Example from doc:
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker login --username=$$USERNAME --password=$$PASSWORD']
  secretEnv: ['PASSWORD']
availableSecrets:
  secretManager:
  - versionName: projects/PROJECT_ID/secrets/DOCKER_PASSWORD_SECRET_NAME/versions/DOCKER_PASSWORD_SECRET_VERSION
    env: 'PASSWORD'
Attaching a similar blog post and Stack Overflow link for your reference.
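Adapted to the npm case in the question, a minimal sketch could look like this (the secret name NPM_REGISTRY_TOKEN and the node image tag are assumptions, not from the official doc):
steps:
- name: 'node:16'
  entrypoint: 'npm'
  args: ['install']
  # secretEnv makes the secret available as an environment variable in this
  # step, so the ${NPM_REGISTRY_TOKEN} reference in .npmrc resolves at install time.
  secretEnv: ['NPM_REGISTRY_TOKEN']
availableSecrets:
  secretManager:
  - versionName: projects/PROJECT_ID/secrets/NPM_REGISTRY_TOKEN/versions/latest
    env: 'NPM_REGISTRY_TOKEN'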

GitLab: Cloud Run deploys successfully, but the job fails

I'm having an issue with my CI/CD pipeline:
it deploys successfully to GCP Cloud Run, but on the GitLab dashboard the status is failed.
I tried replacing the images with some other Docker images, but it fails as well.
# File: .gitlab-ci.yml
image: google/cloud-sdk:alpine

deploy_int:
  stage: deploy
  environment: integration
  only:
    - integration # This pipeline stage will run on this branch alone
  script:
    - echo $GCP_SERVICE_KEY > gcloud-service-key.json # Google Cloud service accounts
    - gcloud auth activate-service-account --key-file gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud builds submit . --config=cloudbuild_int.yaml
# File: cloudbuild_int.yaml
steps:
# build the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--build-arg', 'APP_ENV=int', '-t', 'gcr.io/$PROJECT_ID/tpdropd-int-front', '.']
# push the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/tpdropd-int-front']
# deploy to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['run', 'deploy', 'tpd-front', '--image', 'gcr.io/$PROJECT_ID/tpdropd-int-front', '--region', 'us-central1', '--platform', 'managed', '--allow-unauthenticated']
GitLab build output:
ERROR: (gcloud.builds.submit)
The build is running, and logs are being written to the default logs bucket.
This tool can only stream logs if you are Viewer/Owner of the project and, if applicable, allowed by your VPC-SC security policy.
The default logs bucket is always outside any VPC-SC security perimeter.
If you want your logs saved inside your VPC-SC perimeter, use your own bucket.
See https://cloud.google.com/build/docs/securing-builds/store-manage-build-logs.
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
I fixed it by using:
options:
  logging: CLOUD_LOGGING_ONLY
in cloudbuild.yaml.
Alternatively, there is this workaround:
fix it by giving the Viewer role to the service account running the build, though this feels like granting too much permission for such a role.
This worked for me: use --suppress-logs
gcloud builds submit --suppress-logs --tag=<my-tag>
To fix the issue, you just need to create a bucket in your project (by default, without public access) and grant the 'Storage Admin' role to your user or service account via https://console.cloud.google.com/iam-admin/iam
After that, you can point gcloud builds submit at the new bucket via the --gcs-log-dir gs://YOUR_NEW_BUCKET_NAME_HERE parameter, like this:
gcloud builds submit --gcs-log-dir gs://YOUR_NEW_BUCKET_NAME_HERE ...(other parameters here)
We need a new bucket because the default logs bucket is global (cross-project). That's why it has specific security requirements for access, especially from outside Google Cloud (GitLab, Azure DevOps, and so on) via service accounts.
(Moreover, in this case you don't need to turn off logging via --suppress-logs.)
Kevin's answer worked like magic for me; since I am not able to comment, I am writing this new answer.
Initially I was facing the same issue where, despite the gcloud builds submit command passing, my GitLab CI was failing.
Below is the cloudbuild.yaml file with the logging option added, as Kevin suggested.
steps:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: 'bash'
  args: ['run_query.sh', '${_SCRIPT_NAME}']
options:
  logging: CLOUD_LOGGING_ONLY
Check this document for details: https://cloud.google.com/build/docs/build-config-file-schema#options
The options solution mentioned by Kevin worked for me as well. Just add the parameter to the cloudbuild.yaml file as mentioned before:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/myproject/myimage', '.']
options:
  logging: CLOUD_LOGGING_ONLY

Google Cloud Build - Multiple Environments

In my app, I have the following:
app.yaml
cloudbuild.yaml
I use the above to deploy the default service for the first time.
app.qa.yaml
cloudbuild_qa.yaml
app.staging.yaml
cloudbuild_staging.yaml
app.prod.yaml
cloudbuild_prod.yaml
They all reside at the root of the app.
For instance, the cloudbuild_qa.yaml is as follows:
steps:
- name: node:14.0.0
  entrypoint: npm
  args: ['install']
- name: node:14.0.0
  entrypoint: npm
  args: ['run', 'prod']
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['beta', 'app', 'deploy', '--project', '$PROJECT_ID', '-q', '$_GAE_PROMOTE', '--version', '$_GAE_VERSION', '--appyaml', 'app.qa.yaml']
timeout: '3600s'
The Cloud Build runs well; however, it does not respect app.qa.yaml. Instead, it always takes the default app.yaml.
Services to deploy:
descriptor: [/workspace/app.yaml]
source: [/workspace]
target project: [test-project]
target service: [default]
target version: [qa]
target url: [https://test-project.uc.r.appspot.com]
Any idea what's happening? Do you know how to use the correct app.yaml file in such a case?
Remove the '--appyaml' entry from the args list; the file name then becomes a positional argument (see the sketch after this answer).
However, I'm not sure it is good practice to have a different deployment file from one environment to another. When you update something in one place, you could forget to update the same thing in the other files.
Did you think of replacing placeholders in the files, or of using substitution variables in Cloud Build?
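A sketch of the deploy step with the flag removed and the file passed as a positional argument instead (the other arguments are kept from the question; note the next answer keeps the flag but joins it to the file name with '='):
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['beta', 'app', 'deploy', 'app.qa.yaml', '--project', '$PROJECT_ID', '-q', '$_GAE_PROMOTE', '--version', '$_GAE_VERSION']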
In our build we are using:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy', '--appyaml=app-qa.yaml', '--no-promote', '--version=${_TAG_VERSION}']
FYI:
I've noticed you are building your application using the node builder, but you could add a gcp-build script to your package.json, because gcloud app deploy looks for a script named gcp-build and executes it before deploying:
{
  "scripts": {
    ...
    "build": "tsc",
    "start": "node -r ./tsconfig-paths-dist.js dist/index.js",
    "gcp-build": "npm run build"
  }
}
Reference: https://cloud.google.com/appengine/docs/standard/nodejs/running-custom-build-step

env step parameter in cloudbuild.yaml file not setting environment variable

My cloudbuild.yaml file looks like
steps:
# build the container image
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-t", "gcr.io/$PROJECT_ID/backend:$COMMIT_SHA", "."]
  env:
    - "APP_ENV=production"
# push the container image to Container Registry
- name: "gcr.io/cloud-builders/docker"
  args: ["push", "gcr.io/$PROJECT_ID/backend:$COMMIT_SHA"]
# Deploy container image to Cloud Run
- name: "gcr.io/cloud-builders/gcloud"
  args:
    - "run"
    - "deploy"
    - "backend"
    - "--image"
    - "gcr.io/$PROJECT_ID/backend:$COMMIT_SHA"
    - "--region"
    - "us-central1"
    - "--platform"
    - "managed"
images:
  - "gcr.io/$PROJECT_ID/backend:$COMMIT_SHA"
and it builds and deploys a new container to Cloud Run; however, it doesn't set the APP_ENV environment variable to "production". Why is that, and how do I get it to?
I am following this guide.
The
steps:
- env: [...]
approach sets environment variables for the Cloud Build container that runs that step, so in this case only the docker build command it executes gets the APP_ENV variable (and docker build probably doesn't do anything with it).
You should not expect this to set an environment variable for Cloud Run. For that to work, you need to pass --set-env-vars or --update-env-vars to Cloud Run in the gcloud run deploy step by specifying additional args, like:
- name: "gcr.io/cloud-builders/gcloud"
  args:
    - "run"
    - "deploy"
    ...
    - "--set-env-vars=KEY1=VALUE1"
    - "--set-env-vars=KEY2=VALUE2"
    ...
See https://cloud.google.com/run/docs/configuring/environment-variables#command-line to learn more or read this article about alternative ways of specifying environment variables for Cloud Run applications.
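For reference, the same flag works when deploying directly from the command line; a minimal sketch, reusing the service name, image, and region from the question:
gcloud run deploy backend \
  --image gcr.io/PROJECT_ID/backend:COMMIT_SHA \
  --region us-central1 \
  --platform managed \
  --set-env-vars=APP_ENV=production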