My Node application needs an environment variable in order to make a POST request. Everything works fine on my local machine, where I access the .env file contents directly. I also made sure to set ${_TM_API_KEY} as a substitution variable on the trigger in GCP, as suggested by the docs, but the variable doesn't seem to be recognized after the application is deployed. What am I doing wrong? Any further suggestions would be deeply appreciated. My cloudbuild.yaml looks like this:
steps:
  - id: build
    name: gcr.io/cloud-builders/docker
    args:
      [
        "build",
        "-t",
        "gcr.io/${PROJECT_ID}/suweb:${SHORT_SHA}",
        "--build-arg=ENV=${_TM_API_KEY}",
        ".",
      ]
    env:
      - "TM_API_KEY=${_TM_API_KEY}"
  - id: push
    name: "gcr.io/cloud-builders/docker"
    args: ["push", "gcr.io/${PROJECT_ID}/suweb:${SHORT_SHA}"]
  - id: deploy
    name: "gcr.io/cloud-builders/gcloud"
    args:
      - "run"
      - "deploy"
      - "suweb"
      - "--set-env-vars=TM_API_KEY=${_TM_API_KEY}"
      - "--image"
      - "gcr.io/${PROJECT_ID}/suweb:${SHORT_SHA}"
      - "--region"
      - "us-central1"
      - "--platform"
      - "managed"
      - "--allow-unauthenticated"
images:
  - "gcr.io/${PROJECT_ID}/suweb:${SHORT_SHA}"
More details: the code where the API key is referenced is in the following headers object:
const headers = {
  TM_API_KEY: process.env.TM_API_KEY,
  "Content-Type": "multipart/form-data",
  "Access-Control-Allow-Origin": "*",
};
And my .env file looks like this:
TM_API_KEY=_TM_API_KEY
It is a bit unclear to me whether I should reference the trigger variable here as well (_TM_API_KEY) or write the actual key value.
After that, when doing a form POST request, the server responds with a CORS policy error saying that the endpoint couldn't be reached. If I hard-code the API key in the headers, everything works fine, with no errors whatsoever.
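For what it's worth, a .env file is only read on the local machine (typically via dotenv); Cloud Run never sees it. In the deployed service, process.env.TM_API_KEY is supplied by the --set-env-vars flag in the deploy step, so the .env file should hold the literal key value for local use only, never the substitution name. A sketch (the value shown is a made-up placeholder):

```
# .env (local development only; do not commit)
TM_API_KEY=put-the-actual-key-value-here
```

On Cloud Run, the same variable name then resolves through --set-env-vars=TM_API_KEY=${_TM_API_KEY}, provided _TM_API_KEY is defined as a substitution variable on the trigger.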
Related
I have a GCP project where I continuously deploy changes (PRs) made to a GitHub repository to a Cloud Run service using Cloud Build triggers.
I initially set it up using the GCP GUI, which results in a trigger in Cloud Build.
The Cloud Build trigger has a YAML file that looks like this:
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '--no-cache'
      - '-t'
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - .
      - '-f'
      - Dockerfile
    id: Build
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
    id: Push
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
    args:
      - run
      - services
      - update
      - $_SERVICE_NAME
      - '--platform=managed'
      - '--image=$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - >-
        --labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID,$_LABELS
      - '--region=$_DEPLOY_REGION'
      - '--quiet'
    id: Deploy
    entrypoint: gcloud
images:
  - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
options:
  substitutionOption: ALLOW_LOOSE
substitutions:
  _PLATFORM: managed
  _SERVICE_NAME: bordereau
  _DEPLOY_REGION: europe-west1
  _LABELS: gcb-trigger-id=((a long random id goes here))
  _TRIGGER_ID: ((an other long random id goes here))
  _GCR_HOSTNAME: eu.gcr.io
tags:
  - gcp-cloud-build-deploy-cloud-run
  - gcp-cloud-build-deploy-cloud-run-managed
  - bordereau
Whenever this trigger runs, a new Cloud Run revision is created.
I can then create a URL that points to a specific revision, which lets me access each revision through its own unique URL.
I tried many ways to edit the Cloud Build YAML file to give each revision a unique URL automatically (not manually through the GCP GUI), but I can't seem to find one. I tried many keywords and read the documentation, but that didn't help either.
Any help is very much appreciated.
It would be great if the revision URL (tag) were something unique and short, like the first characters of the commit SHA or the PR number.
Usually you can do it like this (see the step with id: tag):
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '--no-cache'
      - '-t'
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - .
      - '-f'
      - Dockerfile
    id: Build
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
    id: Push
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
    args:
      - run
      - services
      - update
      - $_SERVICE_NAME
      - '--platform=managed'
      - '--image=$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - >-
        --labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID,$_LABELS
      - '--region=$_DEPLOY_REGION'
      - '--quiet'
    id: Deploy
    entrypoint: gcloud
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
    args:
      - -c
      - |
        export sha=$COMMIT_SHA
        export CUSTOM_TAG=${sha:0:8}
        export CURRENT_REV=$(gcloud alpha run services describe $_SERVICE_NAME --region=$_DEPLOY_REGION --platform=managed --format='value(status.traffic[0].revisionName)')
        gcloud run services update-traffic $_SERVICE_NAME --set-tags=$$CUSTOM_TAG=$$CURRENT_REV --region=$_DEPLOY_REGION --platform=managed
    id: tag
    entrypoint: bash
images:
  - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
options:
  substitutionOption: ALLOW_LOOSE
substitutions:
  _PLATFORM: managed
  _SERVICE_NAME: bordereau
  _DEPLOY_REGION: europe-west1
  _LABELS: gcb-trigger-id=((a long random id goes here))
  _TRIGGER_ID: ((an other long random id goes here))
  _GCR_HOSTNAME: eu.gcr.io
tags:
  - gcp-cloud-build-deploy-cloud-run
  - gcp-cloud-build-deploy-cloud-run-managed
  - bordereau
In that custom tag, I put the first 8 characters of the commit SHA.
Note the odd copy of the COMMIT_SHA env var into a local shell variable; it's a quirk of how Cloud Build handles substitutions.
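To see why the intermediate variable matters, here is the substring expansion on its own, runnable in any bash shell (the SHA value is made up):

```shell
# Cloud Build would try to parse ${COMMIT_SHA:0:8} as one of its own
# substitutions, so the value is first copied into a plain shell variable
# and bash's substring expansion is applied to that instead.
sha=3f9c2ab714d6e8b0c5a1f47e
CUSTOM_TAG=${sha:0:8}
echo "$CUSTOM_TAG"   # prints 3f9c2ab7
```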
I have stored a key in Secret Manager on GCP and I'm trying to use that secret in cloudbuild.yaml, but every time I get this error:
ERROR: (gcloud.functions.deploy) argument --set-secrets: Secrets value configuration must match the pattern 'SECRET:VERSION' or 'projects/{PROJECT}/secrets/{SECRET}:{VERSION}' or 'projects/{PROJECT}/secrets/{SECRET}/versions/{VERSION}' where VERSION is a number or the label 'latest' [ 'projects/gcp-project/secrets/SECRETKEY/versions/latest' ]]
My cloud build file looks like this:
steps:
  - id: installing-dependencies
    name: 'python'
    entrypoint: pip
    args: ["install", "-r", "src/requirements.txt", "--user"]
  - id: deploy-function
    name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    args:
      - gcloud
      - functions
      - deploy
      - name_of_my_function
      - --region=us-central1
      - --source=./src
      - --trigger-topic=name_of_my_topic
      - --runtime=python37
      - --set-secrets=[ SECRETKEY = 'projects/gcp-project/secrets/SECRETKEY/versions/latest' ]
    waitFor: ["installing-dependencies"]
I was reading the documentation, but I don't have any other clues that could help me.
As mentioned by al-dann, there should not be any spaces in the --set-secrets line, as you can see in the documentation.
Final correction in code :
--set-secrets=[SECRETKEY='projects/gcp-project/secrets/SECRETKEY/versions/latest']
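The error message also lists the shorter SECRET:VERSION form, which avoids the long resource path when the secret lives in the same project as the function. A sketch using the question's names:

```
--set-secrets=SECRETKEY=SECRETKEY:latest
```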
For more information, you can refer to the Stack Overflow thread and blog post where Secret Manager is explained in more detail.
I am trying to retrieve secrets from the secrets manager in the cloudbuild.yaml file but I can't find a way.
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      - beta
      - run
      - deploy
      - ${REPO_NAME}
      - --region=europe-west2
      - --image=gcr.io/$PROJECT_ID/${REPO_NAME}:$COMMIT_SHA
      - --service-account=${_SERVICE_ACCOUNT}
      - --cpu=2
      - --allow-unauthenticated
      - --set-env-vars=GCP_DB_INSTANCE_NAME=$$GCP_DB_INSTANCE_NAME
      - --set-env-vars=PG_DATABASE=$$PG_DATABASE
      - --set-env-vars=PG_PASSWORD=$$PG_PASSWORD
      - --set-env-vars=PG_USER=$$PG_USER
      - --set-env-vars=GCP_PROJECT=$$GCP_PROJECT
      - --set-env-vars=GCP_BUCKET_NAME=$$GCP_BUCKET_NAME
      - --add-cloudsql-instances=$$GCP_DB_INSTANCE_NAME
    secretEnv: ['GCP_DB_INSTANCE_NAME', 'PG_DATABASE', 'PG_PASSWORD', 'PG_USER', 'GCP_PROJECT', 'GCP_BUCKET_NAME']
availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_ID/secrets/GCP_DB_INSTANCE_NAME/versions/latest
      env: GCP_DB_INSTANCE_NAME
    - versionName: projects/$PROJECT_ID/secrets/PG_DATABASE/versions/latest
      env: PG_DATABASE
    - versionName: projects/$PROJECT_ID/secrets/PG_PASSWORD/versions/latest
      env: PG_PASSWORD
    - versionName: projects/$PROJECT_ID/secrets/PG_USER/versions/latest
      env: PG_USER
    - versionName: projects/$PROJECT_ID/secrets/GCP_PROJECT/versions/latest
      env: GCP_PROJECT
    - versionName: projects/$PROJECT_ID/secrets/GCP_BUCKET_NAME/versions/latest
      env: GCP_BUCKET_NAME
But the variables are not substituted. I logged the values in my API, and this is what I get:
2021-08-05T22:31:33.437926Z key value PG_DATABASE $PG_DATABASE
2021-08-05T22:31:33.437965Z key value PG_USER $PG_USER
2021-08-05T22:31:33.437985Z key value PG_PASSWORD $PG_PASSWORD
2021-08-05T22:31:33.438063Z key value GCP_PROJECT $GCP_PROJECT
2021-08-05T22:31:33.438093Z key value GCP_BUCKET_NAME $GCP_BUCKET_NAME
How can I substitute the secrets in my step?
Instead of injecting these variables at build time, it would be better to inject them at runtime. As written, the secrets will be viewable in plaintext by anyone with permission to view the Cloud Run service. That's because they are resolved during the build step and set as environment variables. Furthermore, if you were to revoke or change one of these secrets, the Cloud Run service would continue to operate with the old value.
A better solution is to use the native Cloud Run Secret Manager integration, which resolves secrets at instance boot. It would look like this:
- name: 'gcr.io/cloud-builders/gcloud'
  args:
    - run
    - deploy
    - ${REPO_NAME}
    - --region=europe-west2
    - --image=gcr.io/$PROJECT_ID/${REPO_NAME}:$COMMIT_SHA
    - --service-account=${_SERVICE_ACCOUNT}
    - --cpu=2
    - --allow-unauthenticated
    - --set-secrets=GCP_DB_INSTANCE_NAME=projects/$PROJECT_ID/secrets/GCP_DB_INSTANCE_NAME:latest,PG_DATABASE=projects/$PROJECT_ID/secrets/PG_DATABASE:latest # continue for the remaining secrets
    - --add-cloudsql-instances=$$GCP_DB_INSTANCE_NAME
Cloud Run will automatically resolve the secrets when it boots a new instance. You'd need to grant $SERVICE_ACCOUNT permissions to access the secret.
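That grant could be done once from the CLI, or, as a sketch, as a hypothetical one-off build step (the service account substitution is taken from the question; the role name is roles/secretmanager.secretAccessor):

```yaml
# Hypothetical one-off step: give the Cloud Run runtime service account
# read access to each secret it needs to resolve at instance boot.
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
    - -c
    - |
      for s in GCP_DB_INSTANCE_NAME PG_DATABASE PG_PASSWORD PG_USER GCP_PROJECT GCP_BUCKET_NAME; do
        gcloud secrets add-iam-policy-binding "$$s" \
          --member="serviceAccount:${_SERVICE_ACCOUNT}" \
          --role="roles/secretmanager.secretAccessor"
      done
```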
Can you please give it a try as below?
- name: 'gcr.io/cloud-builders/gcloud'
  secretEnv: ['GCP_DB_INSTANCE_NAME', 'PG_DATABASE', 'PG_PASSWORD', 'PG_USER', 'GCP_PROJECT', 'GCP_BUCKET_NAME']
  entrypoint: 'bash'
  args:
    - -c
    - |
      gcloud beta run deploy ${REPO_NAME} --region=europe-west2 --image=gcr.io/$PROJECT_ID/${REPO_NAME}:$COMMIT_SHA --service-account=${_SERVICE_ACCOUNT} --cpu=2 --allow-unauthenticated --set-env-vars=GCP_DB_INSTANCE_NAME=$$GCP_DB_INSTANCE_NAME --set-env-vars=PG_DATABASE=$$PG_DATABASE --set-env-vars=PG_PASSWORD=$$PG_PASSWORD --set-env-vars=PG_USER=$$PG_USER --set-env-vars=GCP_PROJECT=$$GCP_PROJECT --set-env-vars=GCP_BUCKET_NAME=$$GCP_BUCKET_NAME --add-cloudsql-instances=$$GCP_DB_INSTANCE_NAME
According to the docs, you have to specify the -c flag in the args field so that everything after it is treated as a shell command; the $$-escaped secret variables are only expanded when a shell actually runs the command.
Ref: https://cloud.google.com/build/docs/securing-builds/use-secrets
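The difference is easy to reproduce locally; this stand-alone sketch mimics what happens to the variable in the two styles (the variable name and value are made up):

```shell
export PG_USER=alice

# args-list style: no shell is involved, so the literal string "$PG_USER"
# reaches the program unexpanded.
printf '%s\n' 'user=$PG_USER'              # prints: user=$PG_USER

# bash -c style: the shell expands the variable before the command runs.
bash -c 'printf "user=%s\n" "$PG_USER"'    # prints: user=alice
```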
I'm using cloudbuild to deploy new version of my app when a new commit appears in github.
Everything is working well.
Now I'm trying to set up variable substitution in the trigger configuration, because I want to put my version number in the trigger once, so that I can find the correct deployed version without modifying the Cloud Build configuration file.
Variable substitution works great in my cloudbuild file, for example:
(cloudbuild.yaml)
# TEST: PRINT VARIABLE IN LOG
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['-c', 'echo "${_VERSION}"']
# DEPLOY APP
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy", "-v", "${_VERSION}", "app.yaml"]
  dir: 'frontend'
  timeout: "20m"
${_VERSION} is correctly replaced with the string I put into my trigger.
Now I want to obtain the same result in the app.yaml file by substituting an env variable, something like:
(app.yaml)
runtime: nodejs
env: flex
service: backend
env_variables:
  VERSION: "${_VERSION}"
  TEST_ENV: "read from google"
When I read TEST_ENV from my app, it works, but _VERSION is not replaced.
Any suggestion?
When you perform this step
# DEPLOY APP
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy", "-v", "${_VERSION}", "app.yaml"]
  dir: 'frontend'
  timeout: "20m"
the app.yaml is provided as-is to the gcloud command, and Cloud Build does not evaluate substitutions inside it. You have to update it yourself, with something like this:
# REPLACE: PUT THE CORRECT VALUE IN APP.YAML FILE
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['-c', 'sed -i "s/\$${_VERSION}/${_VERSION}/g" app.yaml']
This works as long as you leave
env_variables:
  VERSION: "${_VERSION}"
as-is in your app.yaml file; you can change the replacement string if you prefer.
I want to add this solution in case someone has problems with the one proposed by giullade (in my case, Cloud Build gave me an error when executing the sed command).
I also changed the replacement string to a more readable one, which avoids having to escape the $ sign.
# Step 0: REPLACE variables in app.yaml file
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  dir: 'backend'
  args:
    - '-c'
    - |
      sed -i "s/__VERSION/${_VERSION}/g" app-staging.yaml
and in my app.yaml:
env_variables:
  VERSION_ENV: "__VERSION"
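The replacement can be dry-run locally before wiring it into the build (the file contents match the answer; the version value is made up):

```shell
# Create a minimal app-staging.yaml containing the placeholder...
cat > app-staging.yaml <<'EOF'
env_variables:
  VERSION_ENV: "__VERSION"
EOF

# ...and substitute it the same way the build step does.
sed -i "s/__VERSION/1.4.2/g" app-staging.yaml

# The placeholder has been replaced by the version string.
grep VERSION_ENV app-staging.yaml
```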
I want to force the addition of a field in req.body according to the scope of the credentials. I have two apps (App1 and App2), and based on which one is using my API, I want to programmatically add a field to the request. So App1's credentials have the scope app1, and App2's have app2.
Moreover, I have two environments with different endpoints, and both apps have access to both (using different credentials). So I first choose the environment (using the dev_env or my_env scope), then I check which app is connecting (looking for the app1 or app2 scope).
To do that, I use the expression apiEndpoint.scopes.indexOf('app1')>=0, but it is not working: the condition is always false. So, for debugging purposes, I put the content of apiEndpoint.scopes into an additional field in req.body to see what it contains.
And I see that apiEndpoint.scopes holds just ["my_env"], not "app1". Why?
So I have
http:
  port: ${PORT:-8080}
  host: ${HOST:-localhost}
apiEndpoints:
  myEndpoint:
    host: "*"
    scopes: ["my_env"] # I explain just this one here
  devEndpoint:
    host: "*"
    scopes: ["dev_env"]
serviceEndpoints:
  myEndpoint:
    url: 'https://myserver'
policies:
  - basic-auth
  - cors
  - expression
  - key-auth
  - request-transformer
  - rewrite
  - oauth2
  - proxy
  - rate-limit
pipelines:
  myEndpoint:
    apiEndpoints:
      - myEndpoint
    policies:
      - request-transformer:
          - condition:
              name: allOf
              conditions:
                - # check if scope 'app1' is present. expression not working
                  # name: expression
                  # expression: "apiEndpoint.scopes.indexOf('app1')>=0"
            action:
              body:
                add:
                  available_scopes: "apiEndpoint.scopes" # debug of available scopes.
And the content of req.body is
{"available_scopes": ["my_env"]}
'app1' is missing!
==== update 1
If I put "consumer" in the req.body.available_scopes field instead, I get this:
{
  "type": "application",
  "isActive": true,
  "id": "....",
  "userId": "...",
  "name": "...",
  "company": "...",
  "authorizedScopes": [
    "my_env"
  ]
}
So it shows "authorizedScopes", but where are the others? How can I see them?
Thanks
You have specified the scopes my_env and dev_env for the apiEndpoints myEndpoint and devEndpoint (respectively), and these are the only scopes Express Gateway expects you to care about, so the other scopes associated with the user/app credential are not exposed.
You could add the app1 and app2 scopes to each path in the config file and then act based on whichever scope is set for the credentials of the connecting app:
apiEndpoints:
  myEndpoint:
    host: "*"
    scopes: ["my_env", "app1", "app2"]
  devEndpoint:
    host: "*"
    scopes: ["dev_env", "app1", "app2"]
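With the extra scopes declared on the endpoint, the commented-out condition from the question should start matching. A sketch of the pipeline side (the added field name and value are made-up examples; request-transformer values are evaluated as expressions, hence the quoted string):

```yaml
pipelines:
  myEndpoint:
    apiEndpoints:
      - myEndpoint
    policies:
      - request-transformer:
          - condition:
              name: expression
              expression: "apiEndpoint.scopes.indexOf('app1') >= 0"
            action:
              body:
                add:
                  app_field: "'injected-for-app1'"
```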