I would like to deploy an application (as a container image) to Google Cloud Run. I am following the documentation as below:
https://cloud.google.com/run/docs/quickstarts/build-and-deploy
I would like to set the region as Tokyo (asia-northeast1) for the following commands:
gcloud builds submit
gcloud run deploy
The reason is that Cloud Run and Cloud Storage costs depend on the region, so I would like to control the location of both the Cloud Storage bucket and the Cloud Run service.
When creating a service in the Cloud Run console, there is a region dropdown in the service settings.
you can also use the gcloud command to specify the region:
gcloud run deploy --image gcr.io/PROJECT-ID/DOCKER --platform managed --region=asia-northeast1
Setting the Cloud Run deployment region prior to deployment with gcloud is covered in the documentation:
Optionally, set your platform and default Cloud Run region with the
gcloud properties to avoid prompts from the command line:
gcloud config set run/platform managed
gcloud config set run/region REGION
replacing REGION with the default region you want to use.
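Putting the two together, a minimal deployment sketch for the Tokyo region could look like the following (PROJECT-ID and the image name my-app are placeholders):

```shell
# Set defaults once so later commands don't prompt for them
gcloud config set run/platform managed
gcloud config set run/region asia-northeast1

# Build the image with Cloud Build, then deploy it to Cloud Run
gcloud builds submit --tag gcr.io/PROJECT-ID/my-app
gcloud run deploy my-app --image gcr.io/PROJECT-ID/my-app
```

With the defaults set, gcloud run deploy no longer needs an explicit --region flag.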
Due to corporate restrictions, I'm supposed to host everything on GCP in Europe. The organisation I work for has set a restriction policy to enforce this.
When I deploy a cloud run instance from source with gcloud beta run deploy --source . --region europe-west1 it seems the command tries to store the temporary files in a storage bucket in the us, which is not allowed. The command then throws a 412 error.
➜ gcloud beta run deploy cloudrun-service-name --source . --platform managed --region=europe-west1 --allow-unauthenticated
This command is equivalent to running `gcloud builds submit --tag [IMAGE] .` and `gcloud run deploy cloudrun-service-name --image [IMAGE]`
Building using Dockerfile and deploying container to Cloud Run service [cloudrun-service-name] in project [PROJECT_ID] region [europe-west1]
X Building and deploying new service... Uploading sources.
- Uploading sources...
. Building Container...
. Creating Revision...
. Routing traffic...
. Setting IAM Policy...
Deployment failed
ERROR: (gcloud.beta.run.deploy) HTTPError 412: 'us' violates constraint 'constraints/gcp.resourceLocations'
I see the Artifact Registry Repository being created in the correct region, but not the storage bucket.
To bypass this I have to create a storage bucket first in the correct region with the name PROJECT_ID_cloudbuild. Is there any other way to fix this?
The error message indicates that the bucket is created in the US regardless of the organisation policy restricting resources to Europe. As per this public issue tracker comment:
“Cloud build submit creates a [PROJECT_ID]_cloudbuild bucket in the
US. This will of course not work when resource restrictions apply.
What you can do as a workaround is to create that bucket yourself in
another location. You should do this before your first cloud build
submit.”
This has been a known issue and I found two workarounds that can help you achieve what you want.
The first workaround uses “gcloud builds submit” with additional flags:
Create a new bucket with the name [PROJECT_ID]_cloudbuild in the
preferred location.
Specify that bucket using --gcs-source-staging-dir and
--gcs-log-dir; these flags are required, because if they are not set
gcloud will create a bucket in the US.
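As a sketch of this first workaround (PROJECT_ID, the region, and the image name are placeholders; adjust them to your project):

```shell
# Create the staging bucket yourself, in an allowed region,
# before the first build ever runs
gsutil mb -l europe-west1 gs://PROJECT_ID_cloudbuild

# Point both source staging and logs at that bucket so no
# US bucket is created implicitly
gcloud builds submit \
  --gcs-source-staging-dir=gs://PROJECT_ID_cloudbuild/source \
  --gcs-log-dir=gs://PROJECT_ID_cloudbuild/logs \
  --tag=gcr.io/PROJECT_ID/my-app .
```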
The second workaround uses a cloudbuild.yaml together with the “--gcs-source-staging-dir” flag:
Create a bucket in the region, dual-region, or multi-region you
want
Create a cloudbuild.yaml that stores the build artifacts
You can find an example of the YAML file in the following external
documentation, please note that I cannot vouch for its accuracy
since it is not from GCP.
Run the command :
gcloud builds submit
--gcs-source-staging-dir="gs://example-bucket/cloudbuild-custom" --config cloudbuild.yaml
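For illustration, a minimal cloudbuild.yaml of that shape might look like the following; the image name and log bucket here are placeholders of my own, not from the GCP docs:

```yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']
images:
- 'gcr.io/$PROJECT_ID/my-app'
# Keep build logs out of the default US bucket as well
logsBucket: 'gs://example-bucket/logs'
```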
Please try these workarounds and let me know if it worked for you.
I need to run a specific gcloud SDK command. And I need to do it remotely on my express server. Is this possible?
The command is related to the Cloud CDN service, which doesn't seem to have an npm package to access its API in an easy way. I've noticed on a cloudbuild.yaml that you can actually run a gcloud command on a build process, like:
cloudbuild.yaml
- name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
entrypoint: gcloud
args:
- "run"
- "deploy"
- "server"
And then I thought, if it's possible to run a gcloud command through Cloud Build, isn't there some way to create basically a "Cloud Script" where I could access and trigger a gcloud command, like that?
This is my environment and what I'd like to run:
Express server hosted on Cloud Run
I would like to run a command to clear the Cloud CDN cache, like this:
gcloud compute url-maps invalidate-cdn-cache URL_MAP_NAME \
--path "/images/*"
There doesn't seem to be a Node.js client API to access the Cloud CDN service.
There is a REST POST endpoint for this: https://cloud.google.com/compute/docs/reference/rest/v1/urlMaps/invalidateCache
You can call it from a Cloud Function, or from pretty much anywhere else, to invalidate your cache.
With the gcloud command you would have to create a VM on Compute Engine and expose some endpoint that executes the command; there is no other way. I suggest you use the REST endpoint instead, since you can call it from whatever environment you use.
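As a rough sketch of calling that endpoint (PROJECT_ID, URL_MAP_NAME, and the path are placeholders; the token here borrows the local gcloud credentials, whereas a server would normally use a service account):

```shell
# Invalidate /images/* on a URL map via the Compute REST API
TOKEN=$(gcloud auth print-access-token)

curl -X POST \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"path": "/images/*"}' \
  "https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/urlMaps/URL_MAP_NAME/invalidateCache"
```

From an Express server you would issue the same POST with any HTTP client, authenticating with the Cloud Run service account's token instead.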
I'm trying to create a Compute Engine VM instance sample in Google Cloud that has an associated startup script startup_script.sh. On startup, I would like to have access to files that I have stored in a Cloud Source Repository. As such, in this script, I clone a repository using
gcloud source repos clone <repo name> --project=<project name>
Additionally, startup_script.sh also runs commands such as
gcloud iam service-accounts keys create key.json --iam-account <account>
which creates .json credentials, and
EXTERNAL_IP=$(gcloud compute instances describe sample --format='get(networkInterfaces[0].accessConfigs[0].natIP)' --zone=us-central1-a)
to get the external IP of the VM within the VM. To run these commands without any errors, I found that I need partial or full access to multiple Cloud API access scopes.
If I manually edit the scopes of the VM after I've already created it to allow for this and restart it, startup_script.sh runs fine, i.e. I can see the results of each command completing successfully. However, I would like to assign these scopes upon creation of the VM and not have to manually edit scopes after the fact. I found in the documentation that in order to do this, I can run
gcloud compute instances create sample --image-family=ubuntu-1804-lts --image-project=ubuntu-os-cloud --metadata-from-file=startup-script=startup_script.sh --zone=us-central1-a --scopes=[cloud-platform, cloud-source-repos, default]
When I run this command in the Cloud Shell, however, I can either only add one scope at a time, i.e. --scopes=cloud_platform, or if I try to enter multiple scopes as shown in the command above, I get
ERROR: (gcloud.compute.instances.create) unrecognized arguments:
cloud-source-repos,
default]
Adding multiple scopes as the documentation suggests doesn't seem to work. I get a similar error when I use the scope's URI instead of its alias.
Any obvious reasons as to why this may be happening? I feel this may have to do with the service account (or lack thereof) associated with the sample VM, but I'm not entirely familiar with this.
BONUS: Ideally I would like to run the VM creation cloud shell command in a cloudbuild.yaml file, which I have as
steps:
- name: 'gcr.io/cloud-builders/gcloud'
entrypoint: gcloud
args: ['compute', 'instances', 'create', 'sample', '--image-family=ubuntu-1804-lts', '--image-project=ubuntu-os-cloud', '--metadata-from-file=startup-script=startup_sample.sh', '--zone=us-central1-a', '--scopes=[cloud-platform, cloud-source-repos, default]']
I can submit the build using
gcloud builds submit --config cloudbuild.yaml .
Are there any issues with the way I've setup this cloudbuild.yaml?
Adding multiple scopes as the documentation suggests doesn't seem to work
Please use this command with --scopes=cloud-platform,cloud-source-repos (no brackets or spaces) and not --scopes=[cloud-platform, cloud-source-repos, default]:
gcloud compute instances create sample --image-family=ubuntu-1804-lts --image-project=ubuntu-os-cloud --zone=us-central1-a --scopes=cloud-platform,cloud-source-repos
Created [https://www.googleapis.com/compute/v1/projects/wave25-vladoi/zones/us-central1-a/instances/sample].
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
sample us-central1-a n1-standard-1 10.128.0.17 35.238.166.75 RUNNING
Also consider @John Hanley's comment.
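The same fix applies to the cloudbuild.yaml in your bonus question: each gcloud flag must be a single list element, and the scopes value must contain no brackets or spaces. A sketch, with the scopes trimmed to the ones above:

```yaml
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: gcloud
  args: ['compute', 'instances', 'create', 'sample',
         '--image-family=ubuntu-1804-lts',
         '--image-project=ubuntu-os-cloud',
         '--metadata-from-file=startup-script=startup_sample.sh',
         '--zone=us-central1-a',
         '--scopes=cloud-platform,cloud-source-repos']
```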
I want to use Cloud Build with a trigger on commit to automatically fetch updated repo and run sudo supervisorctl restart on a Compute Engine instance.
On the Cloud Build settings page, there is an option to connect Compute Engine, but so far I only found examples including Kubernetes Engine and App Engine here.
Is it possible to accomplish? Is it the right way to make updates? Or should I instead restart the instance(s) with a startup-script?
There's a repo in Github from the cloud-builders-community that may be what you are looking for.
As specified in the aforementioned link, it does connect cloud Build to Compute Engine with the following steps:
A temporary SSH key will be created in your Container Builder workspace
An instance will be launched with your configured flags
The workspace will be copied to the remote instance
Your command will be run inside that instance's workspace
The workspace will be copied back to your Container Builder workspace
You will need to create an appropriate IAM role with create and destroy Compute Engine permissions:
export PROJECT=$(gcloud info --format='value(config.project)')
export PROJECT_NUMBER=$(gcloud projects describe $PROJECT --format 'value(projectNumber)')
export CB_SA_EMAIL=$PROJECT_NUMBER@cloudbuild.gserviceaccount.com
gcloud services enable cloudbuild.googleapis.com
gcloud services enable compute.googleapis.com
gcloud projects add-iam-policy-binding $PROJECT --member=serviceAccount:$CB_SA_EMAIL --role='roles/iam.serviceAccountUser'
gcloud projects add-iam-policy-binding $PROJECT --member=serviceAccount:$CB_SA_EMAIL --role='roles/compute.instanceAdmin.v1'
gcloud projects add-iam-policy-binding $PROJECT --member=serviceAccount:$CB_SA_EMAIL --role='roles/iam.serviceAccountActor'
And then you can configure your build step with something similar to this:
steps:
- name: gcr.io/$PROJECT_ID/remote-builder
env:
- COMMAND=sudo supervisorctl restart
You can also find more information in the examples section of the Github repo.
I've updated gcloud to the latest version (159.0.0)
I created a Google Container Engine node, and then followed the instructions in the prompt.
gcloud container clusters get-credentials prod --zone us-west1-b --project myproject
Fetching cluster endpoint and auth data.
kubeconfig entry generated for prod
kubectl proxy
Unable to connect to the server: error executing access token command
"/Users/me/Code/google-cloud-sdk/bin/gcloud ": exit status
Any idea why is it not able to connect?
You can run the following to check whether the config was generated correctly:
kubectl config view
I had a similar issue when trying to run kubectl commands on a new Kubernetes cluster just created on Google Cloud Platform.
The solution for my case was to activate Google Application Default Credentials.
You can find a link below on how to activate it.
Basically, you need to set an environment variable to the path of the .json file with the credentials exported from Google Cloud:
GOOGLE_APPLICATION_CREDENTIALS -> c:\...\..\..Credentials.json
https://developers.google.com/identity/protocols/application-default-credentials
I found this solution on a Kubernetes GitHub issue: https://github.com/kubernetes/kubernetes/issues/30617
PS: make sure you have also set the environment variables:
%HOME% to %USERPROFILE%
%KUBECONFIG% to %USERPROFILE%
It looks like the default auth plugin for GKE might be buggy on windows. kubectl is trying to run gcloud to get a token to authenticate to your cluster. If you run kubectl config view you can see the command it tried to run, and run it yourself to see if/why it fails.
As Alexandru said, a workaround is to use Google Application Default Credentials. Actually, gcloud container has built in support for doing this, which you can toggle by setting a property:
gcloud config set container/use_application_default_credentials true
or set environment variable
%CLOUDSDK_CONTAINER_USE_APPLICATION_DEFAULT_CREDENTIALS% to true.
Using GKE, updating the credentials from the "Kubernetes Engine/Cluster" management page worked for me.
Each cluster row provides a "Connect" button that copies the credentials command into the console, which refreshes the token in use. After that, kubectl works again.
Why did my token expire? Well, I suppose GCP tokens are not eternal.
So the button simply runs the same command automatically:
gcloud container clusters get-credentials your-cluster ...
Bruno