I'm trying to deploy to GCP using secrets stored as variables in Azure DevOps.
But when I try the command
gcloud run deploy --update-secrets=myvar=$(myvar)
following the format stated in the official documentation, gcloud run deploy --update-secrets=[KEY=VALUE,…], it throws this error:
ERROR: (gcloud.run.deploy) No secret version specified for myvar. Use myvar:latest to reference the latest version.
##[error]Cmd.exe exited with code '1'.
The pipeline correctly resolves $(myvar), and changing the command to gcloud run deploy --update-secrets=myvar:latest=$(myvar) has no effect.
How should I release this secret?
Thanks
The square brackets are just an indication that you can provide multiple values, separated by commas.
So this should work (note that the secret reference still needs a version suffix, as the error message says):
gcloud run deploy --update-secrets="myvar=$(myvar):latest"
A thing that worked was running
gcloud run deploy --update-secrets=myvar=$(myvar):latest
Documentation is available here: https://cloud.google.com/run/docs/configuring/secrets#command-line
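In general, each entry takes the form ENV_VAR=SECRET_NAME:VERSION. Spelled out with a hypothetical secret name in place of the pipeline variable, the working command looks like this (my-service and my-secret-name are placeholders):
gcloud run deploy my-service --update-secrets=myvar=my-secret-name:latest
In the pipeline, $(myvar) resolves to the secret name, and :latest selects the newest version of that secret.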
This is a strange one.
A Google Cloud Run deployment run from the gcloud command line on my macOS machine works, while the identical command, run with the identical gcloud version by a Service Account user inside our Alpine-based GitLab CI/CD runner container, fails and complains about unrecognized arguments.
With the arguments copied and pasted verbatim, why does gcloud (inside the Alpine GitLab runner container) fail to recognize arguments that my local install accepts without complaint?
As background:
We run CI/CD within a GitLab CI runner whose Docker executor deploys our final container. It previously used kubectl to push that container to a GCP-managed Kubernetes cluster, which was expensive, so we moved the production container to Cloud Run, which was cheaper.
Now I am working on resetting our CI/CD deployments and ran into the above issue while attempting to deploy a container from within our GitLab CI pipeline.
The gcloud command that works looks like this (on my local Mac):
gcloud run deploy site-production \
--platform=managed \
--allow-unauthenticated \
--image=us.gcr.io/some-site-333333/site:master \
--region=us-east1
That same (EXACT) command on the GitLab runner gets me:
ERROR: (gcloud.run.deploy) unrecognized arguments:
--platform=managed
--allow-unauthenticated
--image=us.gcr.io/some-site-333333/site:master
--region=us-east1
To search the help text of gcloud commands, run:
gcloud help -- SEARCH_TERMS
This seems super weird, and I was pretty sure I must have had a typo or something, but the command itself was copied (and modified) from Google's own Cloud Run docs.
If I am missing something dumb, let me know. Until then, my plan is to start shaving off optional flags to see which of those parameters it's complaining about. Ideas are appreciated!
Try making it a one-liner command:
gcloud run deploy site-production --platform=managed --allow-unauthenticated --image=us.gcr.io/some-site-333333/site:master --region=us-east1
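If the job runs in GitLab CI, a minimal sketch of the .gitlab-ci.yml entry might look like the following (the stage name and image tag are assumptions; the key point is that the whole command sits on one line of the script block):

deploy:
  stage: deploy
  image: google/cloud-sdk:alpine
  script:
    - gcloud run deploy site-production --platform=managed --allow-unauthenticated --image=us.gcr.io/some-site-333333/site:master --region=us-east1

If you keep the multi-line form instead, make sure nothing, not even trailing whitespace, follows each backslash.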
I'm trying to create a Compute Engine VM instance named sample in Google Cloud with an associated startup script, startup_script.sh. On startup, I would like to have access to files that I have stored in a Cloud Source Repository. As such, in this script I clone a repository using
gcloud source repos clone <repo name> --project=<project name>
Additionally, startup_script.sh runs commands such as
gcloud iam service-accounts keys create key.json --iam-account <account>
which creates .json credentials, and
EXTERNAL_IP=$(gcloud compute instances describe sample --format='get(networkInterfaces[0].accessConfigs[0].natIP)' --zone=us-central1-a)
to get the external IP of the VM within the VM. To run these commands without any errors, I found that I need partial or full access to multiple Cloud API access scopes.
If I manually edit the scopes of the VM after I've already created it to allow for this and restart it, startup_script.sh runs fine, i.e. I can see the results of each command completing successfully. However, I would like to assign these scopes upon creation of the VM and not have to manually edit scopes after the fact. I found in the documentation that in order to do this, I can run
gcloud compute instances create sample --image-family=ubuntu-1804-lts --image-project=ubuntu-os-cloud --metadata-from-file=startup-script=startup_script.sh --zone=us-central1-a --scopes=[cloud-platform, cloud-source-repos, default]
When I run this command in Cloud Shell, however, I can only add one scope at a time, e.g. --scopes=cloud-platform; if I try to enter multiple scopes as shown in the command above, I get
ERROR: (gcloud.compute.instances.create) unrecognized arguments:
cloud-source-repos,
default]
Adding multiple scopes as the documentation suggests doesn't seem to work. I get a similar error when I use the scope's URI instead of its alias.
Any obvious reasons as to why this may be happening? I feel this may have to do with the service account (or lack thereof) associated with the sample VM, but I'm not entirely familiar with this.
BONUS: Ideally I would like to run the VM creation command from a cloudbuild.yaml file, which I have as
steps:
- name: 'gcr.io/cloud-builders/gcloud'
entrypoint: gcloud
args: ['compute', 'instances', 'create', 'sample', '--image-family=ubuntu-1804-lts', '--image-project=ubuntu-os-cloud', '--metadata-from-file=startup-script=startup_sample.sh', '--zone=us-central1-a', '--scopes=[cloud-platform, cloud-source-repos, default]']
I can submit the build using
gcloud builds submit --config cloudbuild.yaml .
Are there any issues with the way I've set up this cloudbuild.yaml?
Adding multiple scopes as the documentation suggests doesn't seem to work
Please use this command with --scopes=cloud-platform,cloud-source-repos and not --scopes=[cloud-platform, cloud-source-repos, default]:
gcloud compute instances create sample --image-family=ubuntu-1804-lts --image-project=ubuntu-os-cloud --zone=us-central1-a --scopes=cloud-platform,cloud-source-repos
Created [https://www.googleapis.com/compute/v1/projects/wave25-vladoi/zones/us-central1-a/instances/sample].
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
sample us-central1-a n1-standard-1 10.128.0.17 35.238.166.75 RUNNING
Also consider @John Hanley's comment.
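For the bonus question, the same fix applies inside cloudbuild.yaml: pass --scopes as a single argument with no brackets or spaces. A sketch, keeping the original file and instance names:

steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: gcloud
  args: ['compute', 'instances', 'create', 'sample', '--image-family=ubuntu-1804-lts', '--image-project=ubuntu-os-cloud', '--metadata-from-file=startup-script=startup_sample.sh', '--zone=us-central1-a', '--scopes=cloud-platform,cloud-source-repos,default']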
I'm trying to set up credentials for Kubernetes on my local machine.
gcloud container clusters get-credentials ***** --zone **** --project elo-project-267109
This command works fine from Cloud Shell, but I get this error when I run it from my terminal:
ERROR: (gcloud.container.clusters.get-credentials) get-credentials requires edit permission on elo-project-267109
I've tried this command from the admin account, from the default service account, and from a new service account with the Editor role assigned, and it still doesn't work for me.
I am using macOS Mojave (10.14.6), and the gcloud SDK version installed on my system is 274.0.1.
I was able to resolve this issue on my local machine, but I was actually trying to build a CI/CD pipeline from GitLab, and the issue persists there; I have tried using the gcloud (279.0.0) image version.
I am new to both GitLab and gcloud, and I am trying to build a CI/CD pipeline for the first time.
Run gcloud auth list to see which account you are logged into.
You need to login with the account which has the correct credentials to access the action that you're trying to perform.
To set the gcloud account: gcloud config set account <ACCOUNT>
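In a CI job such as the GitLab pipeline mentioned here, you would typically authenticate with a service account key file instead, for example (the key.json path is an assumption):
gcloud auth activate-service-account --key-file=key.json
gcloud config set project elo-project-267109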
It turned out to be an image version mismatch issue on GitLab.
I'm working on Cloud Run, which still seems to be in beta, and it is preventing me from redeploying, as shown below. It works if I delete the service from the GCP console and then deploy the same Docker image as a new service. I could not find a way to set revisionTemplate.
I run this command to deploy a Cloud Run service using gcloud.
gcloud beta run deploy v2-cms --image gcr.io/my-project/v2-cms --quiet
Then it fails like this:
X Deploying...
. Creating Revision...
. Routing traffic...
Deployment failed
ERROR: gcloud crashed (AttributeError): 'NoneType' object has no attribute 'revisionTemplate'
If you would like to report this issue, please run the following command:
gcloud feedback
To check gcloud for common problems, please run the following command:
gcloud info --run-diagnostics
To fix this issue, please update gcloud to its latest version with gcloud components update
Make sure that your local TensorFlow version is still supported by gcloud: https://cloud.google.com/ai-platform/training/docs/runtime-version-list
I've updated gcloud to the latest version (159.0.0)
I created a Google Container Engine node, and then followed the instructions in the prompt.
gcloud container clusters get-credentials prod --zone us-west1-b --project myproject
Fetching cluster endpoint and auth data.
kubeconfig entry generated for prod
kubectl proxy
Unable to connect to the server: error executing access token command
"/Users/me/Code/google-cloud-sdk/bin/gcloud ": exit status
Any idea why is it not able to connect?
You can run the following to see if the config was generated correctly:
kubectl config view
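For a GKE cluster, the relevant part of the output is the user's auth-provider entry, which typically looks something like this (names and paths are illustrative):

users:
- name: gke_myproject_us-west1-b_prod
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: /Users/me/Code/google-cloud-sdk/bin/gcloud
      name: gcp

kubectl executes exactly what cmd-path points to, so check that it resolves to a working gcloud binary.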
I had a similar issue when trying to run kubectl commands on a new Kubernetes cluster just created on Google Cloud Platform.
The solution for my case was to activate Google Application Default Credentials.
You can find a link below on how to activate it.
Basically, you need to set an environment variable to the path of the .json file with the credentials exported from Google Cloud:
GOOGLE_APPLICATION_CREDENTIALS -> c:\...\..\..Credentials.json
https://developers.google.com/identity/protocols/application-default-credentials
I found this solution on a Kubernetes GitHub issue: https://github.com/kubernetes/kubernetes/issues/30617
PS: make sure you have also set the environment variables:
%HOME% to %USERPROFILE%
%KUBECONFIG% to %USERPROFILE%
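For example, on a Windows command prompt (the paths are placeholders, and the KUBECONFIG location is an assumption):
set GOOGLE_APPLICATION_CREDENTIALS=C:\path\to\Credentials.json
set HOME=%USERPROFILE%
set KUBECONFIG=%USERPROFILE%\.kube\config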
It looks like the default auth plugin for GKE might be buggy on Windows. kubectl is trying to run gcloud to get a token to authenticate to your cluster. If you run kubectl config view you can see the command it tried to run; run it yourself to see if/why it fails.
As Alexandru said, a workaround is to use Google Application Default Credentials. Actually, gcloud container has built in support for doing this, which you can toggle by setting a property:
gcloud config set container/use_application_default_credentials true
or set the environment variable
%CLOUDSDK_CONTAINER_USE_APPLICATION_DEFAULT_CREDENTIALS% to true.
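On Linux or macOS, the equivalent would be:
export CLOUDSDK_CONTAINER_USE_APPLICATION_DEFAULT_CREDENTIALS=true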
On GKE, updating the credentials from the "Kubernetes Engine/Clusters" page worked for me.
Each cluster row provides a "Connect" button that copies the credentials command into the console, and this refreshes the token used. After that, kubectl works again.
Why did my token expire? Well, I suppose GCP tokens are not eternal.
So the button simply runs the same command automatically:
gcloud container clusters get-credentials your-cluster ...
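Spelled out with placeholder values for the flags, that is:
gcloud container clusters get-credentials your-cluster --zone your-zone --project your-project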
Bruno